Datasets: de-francophones
de-francophones committed on commit ac45eb3
1 Parent(s): 8f644d9
8ee6449b4b5fd37120a2859cb72f64948ac35017e2274647730f2a7495ce8ea6
- en/1133.html.txt +150 -0
- en/1134.html.txt +114 -0
- en/1135.html.txt +116 -0
- en/1136.html.txt +116 -0
- en/1137.html.txt +116 -0
- en/1138.html.txt +116 -0
- en/1139.html.txt +3 -0
- en/114.html.txt +18 -0
- en/1140.html.txt +195 -0
- en/1141.html.txt +143 -0
- en/1142.html.txt +377 -0
- en/1143.html.txt +130 -0
- en/1144.html.txt +130 -0
- en/1145.html.txt +53 -0
- en/1146.html.txt +53 -0
- en/1147.html.txt +112 -0
- en/1148.html.txt +53 -0
- en/1149.html.txt +183 -0
- en/115.html.txt +105 -0
- en/1150.html.txt +183 -0
- en/1151.html.txt +183 -0
- en/1152.html.txt +190 -0
- en/1153.html.txt +76 -0
- en/1154.html.txt +109 -0
- en/1155.html.txt +168 -0
- en/1156.html.txt +110 -0
- en/1157.html.txt +184 -0
- en/1158.html.txt +0 -0
- en/1159.html.txt +231 -0
- en/116.html.txt +105 -0
- en/1160.html.txt +231 -0
- en/1161.html.txt +165 -0
- en/1162.html.txt +70 -0
- en/1163.html.txt +70 -0
- en/1164.html.txt +70 -0
- en/1165.html.txt +182 -0
- en/1166.html.txt +70 -0
- en/1167.html.txt +70 -0
- en/1168.html.txt +198 -0
- en/1169.html.txt +198 -0
- en/117.html.txt +181 -0
- en/1170.html.txt +209 -0
- en/1171.html.txt +209 -0
- en/1172.html.txt +209 -0
- en/1173.html.txt +58 -0
- en/1174.html.txt +95 -0
- en/1175.html.txt +85 -0
- en/1176.html.txt +199 -0
- en/1177.html.txt +199 -0
- en/1178.html.txt +3 -0
en/1133.html.txt
ADDED
@@ -0,0 +1,150 @@
A cemetery or graveyard is a place where the remains of dead people are buried or otherwise interred. The word cemetery (from Greek κοιμητήριον, "sleeping place")[1][2] implies that the land is specifically designated as a burial ground and originally applied to the Roman catacombs.[3] The term graveyard is often used interchangeably with cemetery, but a graveyard primarily refers to a burial ground within a churchyard.[4][5]
The intact or cremated remains of people may be interred in a grave, commonly referred to as burial, or in a tomb, an "above-ground grave" (resembling a sarcophagus), a mausoleum, columbarium, niche, or other edifice. In Western cultures, funeral ceremonies are often observed in cemeteries. These ceremonies or rites of passage differ according to cultural practices and religious beliefs. Modern cemeteries often include crematoria, and some grounds previously used for both continue as crematoria as a principal use long after the interment areas have been filled.
Taforalt cave in Morocco is the oldest known cemetery in the world. It was the resting place of at least 34 Iberomaurusian individuals, the bulk of which have been dated to 15,100 to 14,000 years ago.
Neolithic cemeteries are sometimes referred to by the term "grave field". They are one of the chief sources of information on ancient and prehistoric cultures, and numerous archaeological cultures are defined by their burial customs, such as the Urnfield culture of the European Bronze Age.
From about the 7th century, burial in Europe was under the control of the Church and could only take place on consecrated church ground. Practices varied, but in continental Europe, bodies were usually buried in a mass grave until they had decomposed. The bones were then exhumed and stored in ossuaries, either along the arcaded bounding walls of the cemetery or within the church under floor slabs and behind walls.
In most cultures those who were vastly rich, had important professions, were part of the nobility or were of any other high social status were usually buried in individual crypts inside or beneath the relevant place of worship with an indication of their name, date of death and other biographical data. In Europe, this was often accompanied by a depiction of their coat of arms.
Most others were buried in graveyards again divided by social status. Mourners who could afford the work of a stonemason had a headstone engraved with a name, dates of birth and death and sometimes other biographical data, and set up over the place of burial. Usually, the more writing and symbols carved on the headstone, the more expensive it was. As with most other human property such as houses and means of transport, richer families used to compete for the artistic value of their family headstone in comparison to others around it, sometimes adding a statue (such as a weeping angel) on the top of the grave.
Those who could not pay for a headstone at all usually had some religious symbol made from wood on the place of burial such as a Christian cross; however, this would quickly deteriorate under the rain or snow. Some families hired a blacksmith and had large crosses made from various metals put on the places of burial.
Starting in the early 19th century, the burial of the dead in graveyards began to be discontinued, due to rapid population growth in the early stages of the Industrial Revolution, continued outbreaks of infectious disease near graveyards and the increasingly limited space in graveyards for new interments. In many European states, burial in graveyards was eventually outlawed altogether through legislation.
Instead of graveyards, completely new places of burial were established away from heavily populated areas and outside of old towns and city centers. Many new cemeteries became municipally owned or were run by their own corporations, and thus independent from churches and their churchyards.
In some cases, skeletons were exhumed from graveyards and moved into ossuaries or catacombs. A large action of this type occurred in 18th century Paris when human remains were transferred from graveyards all over the city to the Catacombs of Paris. The bones of an estimated 6 million people are to be found there.[6]
An early example of a landscape-style cemetery is Père Lachaise in Paris. This embodied the idea of state- rather than church-controlled burial, a concept that spread through the continent of Europe with the Napoleonic invasions. This could include the opening of cemeteries by private or joint stock companies. The shift to municipal cemeteries or those established by private companies was usually accompanied by the establishment of landscaped burial grounds outside the city (i.e. extramural).
In Britain the movement was driven by dissenters and public health concerns. The Rosary Cemetery in Norwich was opened in 1819 as a burial ground for all religious backgrounds. Similar private non-denominational cemeteries were established near industrialising towns with growing populations, such as Manchester (1821) and Liverpool (1825). Each cemetery required a separate Act of Parliament for authorisation, although the capital was raised through the formation of joint-stock companies.
In the first 50 years of the 19th century the population of London more than doubled from 1 million to 2.3 million. The small parish churchyards were rapidly becoming dangerously overcrowded, and decaying matter infiltrating the water supply was causing epidemics. The issue became particularly acute after the cholera epidemic of 1831, which killed 52,000 people in Britain alone, putting unprecedented pressure on the country's burial capacity. Concerns were also raised about the potential public health hazard arising from the inhalation of gases generated from human putrefaction under the then prevailing miasma theory of disease.
Legislative action was slow in coming, but in 1832 Parliament finally acknowledged the need for the establishment of large municipal cemeteries and encouraged their construction outside London. The same bill also closed all inner London churchyards to new deposits. The Magnificent Seven, seven large cemeteries around London, were established in the following decade, starting with Kensal Green in 1832.[7]
Urban planner and author John Claudius Loudon was one of the first professional cemetery designers, and his book On the Laying Out, Planting and Managing of Cemeteries (1843) was very influential on designers and architects of the period. Loudon himself designed three cemeteries – Bath Abbey Cemetery, Histon Road Cemetery, Cambridge, and Southampton Old Cemetery.[8]
The Metropolitan Burial Act of 1852 legislated for the establishment of the first national system of government-funded municipal cemeteries across the country, opening the way for a massive expansion of burial facilities throughout the late 19th century.[9]
There are a number of different styles of cemetery in use. Many cemeteries have areas based on different styles, reflecting the diversity of cultural practices around death and how it changes over time.
The urban cemetery is a burial ground located in the interior of a village, town, or city. Early urban cemeteries were churchyards, which filled quickly and exhibited a haphazard placement of burial markers as sextons tried to squeeze new burials into the remaining space. As new burying grounds were established in urban areas to compensate, burial plots were often laid out in a grid to replace the chaotic appearance of the churchyard.[11] Urban cemeteries developed over time into a more landscaped form as part of civic development of beliefs and institutions that sought to portray the city as civilized and harmonious.[12]
Urban cemeteries were more sanitary (a place to safely dispose of decomposing corpses) than they were aesthetically pleasing. Corpses were usually buried wrapped in cloth, since coffins, burial vaults, and above-ground crypts inhibited the process of decomposition.[13] Nonetheless, urban cemeteries which were heavily used were often very unhealthy. Receiving vaults and crypts often needed to be aired before entering, as decomposing corpses used up so much oxygen that even candles could not remain lit.[14] The sheer stench from decomposing corpses, even when buried deeply, was overpowering in areas adjacent to the urban cemetery.[15][16] Decomposition of the human body releases significant pathogenic bacteria, fungi, protozoa, and viruses which can cause disease and illness, and many urban cemeteries were located without consideration for local groundwater. Modern burials in urban cemeteries also release toxic chemicals associated with embalming, such as arsenic, formaldehyde, and mercury. Coffins and burial equipment can also release significant amounts of toxic chemicals such as arsenic (used to preserve coffin wood) and formaldehyde (used in varnishes and as a sealant) and toxic metals such as copper, lead, and zinc (from coffin handles and flanges).[17]
Urban cemeteries relied heavily on the fact that the soft parts of the body would decompose in about 25 years (although, in moist soil, decomposition can take up to 70 years).[18] If room for new burials was needed, older bones could be dug up and interred elsewhere (such as in an ossuary) to make space for new interments.[13] It was not uncommon in some places, such as England, for fresher corpses to be chopped up to aid decomposition, and for bones to be burned to create fertilizer.[19] The re-use of graves allowed for a steady stream of income, which enabled the cemetery to remain well-maintained and in good repair.[20] Not all urban cemeteries engaged in re-use of graves, and cultural taboos often prevented it. Many urban cemeteries have fallen into disrepair and become overgrown, as they lacked endowments to fund perpetual care. Many urban cemeteries today are thus home to wildlife, birds, and plants which cannot be found anywhere else in the urban area, and many urban cemeteries in the late 20th century touted their role as an environmental refuge.[21][22]
Many urban cemeteries are characterized by multiple burials in the same grave. Multiple burials are a consequence of the limited size of the urban cemetery, which cannot easily expand due to adjacent building development. It was not uncommon for an urban cemetery to add a layer of soil on top of existing graves to create new burial space.
A monumental cemetery is the traditional style of cemetery where headstones or other monuments made of marble, granite or similar materials rise vertically above the ground (typically around 50 cm, but some can be over 2 metres high). Often the entire grave is covered by a slab, commonly concrete but sometimes a more expensive material such as marble or granite, and/or has its boundaries delimited by a fence which may be made of concrete, cast iron or timber. Where a number of family members are buried together (either vertically or horizontally), the slab or boundaries may encompass a number of graves. Monumental cemeteries are often regarded as unsightly due to the random collection of monuments and headstones they contain. Also, as maintenance of the headstones is the responsibility of family members (in the absence of a prescribed Perpetual Care and Maintenance Fund), over time many headstones are forgotten about, decay and become damaged. For cemetery authorities, monumental cemeteries are difficult to maintain. While cemeteries often have grassed areas between graves, the layout of graves makes it difficult to use modern equipment such as ride-on lawn mowers in the cemetery. Often the maintenance of grass must be done by more labour-intensive (and therefore expensive) methods. In order to reduce the labour cost, devices such as string trimmers are increasingly used in cemetery maintenance,[citation needed] but such devices can damage the monuments and headstones. Cemetery authorities dislike the criticism they receive for the deteriorating condition of the headstones, arguing that they have no responsibility for the upkeep of headstones, and typically disregard their own maintenance practices as being one of the causes of that deterioration.[citation needed]
The rural cemetery or garden cemetery[23] is a style of burial ground that uses landscaping in a park-like setting. It was conceived in 1711 by the British architect Sir Christopher Wren, who advocated the creation of landscaped burial grounds which featured well-planned walkways which gave extensive access to graves and planned plantings of trees, bushes, and flowers.[24] Wren's idea was not immediately accepted. But by the early 1800s, existing churchyards were growing overcrowded and unhealthy, with graves stacked upon each other or emptied and reused for new burials.[25] As a reaction to this, the first "garden" cemetery – Père Lachaise Cemetery in Paris – opened in 1804.[26] Because these cemeteries were usually on the outskirts of town (where land was plentiful and cheap), they were called "rural cemeteries", a term still used to describe them today.[25] The concept quickly spread across Europe.[27]
Garden/rural cemeteries were not necessarily outside city limits. When land within a city could be found, the cemetery was enclosed with a wall to give it a garden-like quality. These cemeteries were often not sectarian, nor co-located with a house of worship. Inspired by the English landscape garden movement,[28] they often looked like attractive parks. The first garden/rural cemetery in the United States was Mount Auburn Cemetery near Boston, Massachusetts, founded by the Massachusetts Horticultural Society in 1831.[29] Following the establishment of Mount Auburn, dozens of other "rural" cemeteries were established in the United States – perhaps in part because of Supreme Court Justice Joseph Story's dedication address – and there were dozens of dedication addresses,[30] including the famous Gettysburg Address of President Abraham Lincoln.
The cost of building a garden/rural cemetery often meant that only the wealthy could afford burial there.[31] Subsequently, garden/rural cemeteries often feature above-ground monuments and memorials, mausoleums, and columbaria. The excessive filling of rural/garden cemeteries with elaborate above-ground memorials, many of dubious artistic quality or taste, created a backlash which led to the development of the lawn cemetery.[32]
In a review of British burial and death practices, Julie Rugg wrote that there were "four closely interlinked factors that explain the 'invention' and widespread adoption of the lawn cemetery: the deterioration of the Victorian cemetery; a self-conscious rejection of Victorian aesthetics in favour of modern alternatives; resource difficulties that, particularly after World War II, increasingly constrained what might be achieved in terms of cemetery maintenance; and growing professionalism in the field of cemetery management."[33]
Typically, lawn cemeteries comprise a number of graves in a lawn setting with trees and gardens on the perimeter. Adolph Strauch introduced this style in 1855 in Cincinnati.[34]
While aesthetic appeal to family members has been the primary driver for the development of lawn cemeteries, cemetery authorities initially welcomed this new style of cemetery enthusiastically, expecting easier maintenance. Selecting flat land for a lawn cemetery (or grading it so that it becomes completely flat) allows the use of large efficient mowers (such as ride-on mowers or lawn tractors): the plaques (being horizontally set in the ground) lie below the level of the blades and are not damaged by them. Unfortunately, in practice, while families are often initially attracted to the uncluttered appearance of a lawn cemetery, the common practice of placing flowers (sometimes in vases) and increasingly other items (e.g. small toys on children's graves) re-introduces some clutter to the cemetery and makes it difficult to use the larger mowers. While cemetery authorities increasingly impose restrictions on the nature and type of objects that can be placed on lawn graves and actively remove prohibited items, grieving families are often unwilling to comply with these restrictions and become very upset if the items are removed. Another problem with lawn cemeteries involves grass over-growth over time: the grass can grow over and cover the plaque, to the distress of families who can no longer easily locate the grave. Grasses that propagate by an above-ground stolon (runner) can cover a plaque very quickly. Grasses that propagate by a below-ground rhizome tend not to cover the plaque as easily.
The lawn beam cemetery, a recent development, seeks to solve the problems of the lawn cemetery while retaining many of its benefits. Low (10–15 cm) raised concrete slabs (beams) are placed across the cemetery. Commemorative plaques (usually standardised in terms of size and materials similar to lawn cemeteries) stand on these beams adjacent to each grave. As in a lawn cemetery, grass grows over the graves themselves. The areas between the beams are wide enough to permit easy mowing with a larger mower. As the mower blades are set lower than the top of the beam and the mowers do not go over the beam, the blades cannot damage the plaques. Up on the beam, the plaques cannot be easily overgrown by grass, and spaces between the plaques permit families to place flowers and other objects out of reach of the mowing.
A natural cemetery (also called an eco-cemetery, green cemetery or conservation cemetery) is an area set aside for natural burials (with or without coffins). Natural burials are motivated by a desire to be environmentally conscious: the body decomposes rapidly and becomes part of the natural environment without incurring the environmental cost of traditional burials. Certifications may be granted for various levels of green burial; the certification standards use a tiered system, reflecting the level of natural burial practice, that designates a burial ground as a Hybrid, Natural, or Conservation Burial Ground.
Many scientists have argued that natural burials would be a highly efficient use of land if designed specifically to save endangered habitats, ecosystems and species.[35]
The opposite has also been proposed. Instead of letting natural burials permanently protect wild landscapes, others have argued that the rapid decomposition of a natural burial, in principle, allows for the quick re-use of grave sites in comparison with conventional burials. However, it is unclear if reusing cemetery land will be culturally acceptable to most people.
In keeping with the intention of "returning to nature" and the early re-use potential, natural cemeteries do not normally have conventional grave markings such as headstones. Instead, exact GPS recordings and/or the placing of a tree, bush or rock often mark the location of the dead, so grieving family and friends can visit the precise location of a grave.
Columbarium walls are a common feature of many cemeteries, reflecting the increasing use of cremation rather than burial. While cremated remains can be kept at home by families in urns or scattered in some significant or attractive place, neither of these approaches allows for a long-lasting commemorative plaque to honour the dead nor provides a place for the wider circle of friends and family to come to mourn or visit. Therefore, many cemeteries now provide walls (typically of brick or rendered brick construction) with a rectangular array of niches, with each niche being big enough to accommodate a person's cremated remains. Columbarium walls are a very space-efficient use of land in a cemetery compared with burials, and a niche in a columbarium wall is a much cheaper alternative to a burial plot. A small plaque (about 15 cm x 10 cm) can be affixed across the front of each niche and is generally included as part of the price of a niche. As the writing on the plaques has to be fairly small to fit, the design of columbarium walls is constrained by the ability of visitors to read the plaques. Thus, the niches are typically placed between 1 and 2 metres above the ground so the plaques can be easily read by an adult. Some columbarium walls have niches going close to ground level, but these niches are usually unpopular with families as it is difficult to read the plaque without bending down very low (something older people in particular find difficult or uncomfortable to do).
As with graves, the niches may be assigned by the cemetery authorities or families may choose from the unoccupied niches available. It is usually possible to purchase (or pay a deposit) to reserve the use of adjacent niches for other family members. The use of adjacent niches (vertically or horizontally) usually permits a larger plaque spanning all the niches involved, which provides more space for the writing. As with graves, there may be separate columbarium walls for different religions or for war veterans. As with lawn cemeteries, the original expectation was that people would prefer the uncluttered simplicity of a wall of plaques, but the practice of leaving flowers is very entrenched. Mourners leave flowers (and other objects) on top of columbarium walls or at the base, as close as they can to the plaque of their family member. In some cases, it is possible to squeeze a piece of wire or string under the plaque allowing a flower or small posy to be placed on the plaque itself or clips are glued onto the plaque for that purpose. Newer designs of columbarium walls take this desire to leave flowers into account by incorporating a metal clip or loop beside each plaque, typically designed to hold a single flower stem or a small posy. As the flowers decay, they simply fall to the ground and do not create a significant maintenance problem.
While uncommon today, family (or private) cemeteries were a matter of practicality during the settlement of America. If a municipal or religious cemetery had not been established, settlers would seek out a small plot of land, often in wooded areas bordering their fields, to begin a family plot. Sometimes, several families would arrange to bury their dead together. While some of these sites later grew into true cemeteries, many were forgotten after a family moved away or died out.
Today, it is not unheard of to discover groupings of tombstones, ranging from a few to a dozen or more, on undeveloped land. As late 20th-century suburban sprawl pressured the pace of development in formerly rural areas, it became increasingly common for larger exurban properties to be encumbered by "religious easements", which are legal requirements for the property owner to permit periodic maintenance of small burial plots located on the property but technically not owned with it. Often, cemeteries are relocated to accommodate building. However, if the cemetery is not relocated, descendants of people buried there may visit the cemetery.[36]
More recent is the practice of families with large estates choosing to create private cemeteries in the form of burial sites, monuments, crypts, or mausoleums on their property; the mausoleum at Fallingwater is an example of this practice. Burial of a body at a site may protect the location from redevelopment, with such estates often being placed in the care of a trust or foundation. Presently, state regulations have made it increasingly difficult, if not impossible, to start private cemeteries; many require a plan to care for the site in perpetuity. Private cemeteries are nearly always forbidden in incorporated residential zones.
Many people will bury a beloved pet on the family property.
All of the Saudis in Al Baha are Muslims, and this is reflected in their cemetery and funeral customs. "The southern tribal hinterland of Baha – home to especially the Al-Ghamdi and Al-Zahrani tribes – has been renowned for centuries for their tribal cemeteries that are now slowly vanishing", according to the Asharq Al-Awsat newspaper: "One old villager explained how tribal cemeteries came about. 'People used to die in large numbers and very rapidly one after the other because of diseases. So the villagers would dig graves close by burying members of the same family in one area. That was how the family and tribal burial grounds came about... If the family ran out of space, they would open old graves where family members had been buried before and add more people to them.
This process is known as khashf. During famines and outbreaks of epidemics, huge numbers of people would die and many tribes faced difficulties in digging new graves because of the difficult weather. In the past, some Arab winters lasted for more than six months and would be accompanied by much rain and fog, impeding movement. But due to tribal rivalries many families would guard their cemeteries and put restrictions on who was buried in them. Across Baha, burial grounds have been constructed in different ways. Some cemeteries consist of underground vaults or concrete burial chambers with the capacity of holding many bodies simultaneously. Such vaults include windows for people to peer through and are usually decorated ornately with text, drawings, and patterns. At least one resident believes that the graves are unique in the region because many are not oriented toward Mecca, and therefore must pre-date Islam.[37]
Graves are terraced in Yagoto Cemetery, which is an urban cemetery situated in a hilly area in Nagoya, Japan, effectively creating stone walls blanketing hillsides.[38]
The Cross Bones is a burial ground for prostitutes in London. The Neptune Memorial Reef is an underwater columbarium near Key Biscayne.[39]
In the 2000s and 2010s, it has become increasingly common for cemeteries and funeral homes to offer online services. There are also stand-alone online "cemeteries" such as Find a Grave, Canadian Headstones, Interment.net, and the World Wide Cemetery.[40][41]
In Western countries, and many others[quantify], visitors to graves commonly leave cut flowers, especially during major holidays and on birthdays or relevant anniversaries. Cemeteries usually dispose of these flowers after a few weeks in order to keep the space maintained. Some companies offer perpetual flower services, to ensure a grave is always decorated with fresh flowers.[42] Flowers may often be planted on the grave as well, usually immediately in front of the gravestone. For this purpose roses are highly common.
Visitors to loved ones interred in Jewish cemeteries often leave a small stone on the top of the headstone. There are prayers said at the gravesite, and the stone is left on the visitor's departure. It is done as a show of respect; as a general rule, flowers are not placed at Jewish graves. Flowers are fleeting; the symbology inherent in the use of a stone is to show that the love, honor, memories, and soul of the loved one are eternal. This practice is seen in the closing scene of the film Schindler's List, although in that case it is not on a Jewish grave.
War graves will commonly have small timber remembrance crosses left with a red poppy attached to their centres. These crosses often have messages written on them. More formal visits will often leave a poppy wreath. Jewish war graves are sometimes marked by a timber Star of David.
Placing burning grave candles in the cemetery to commemorate the dead is a very common tradition in Catholic nations, for example Poland. It is mostly practised on All Souls' Day. The traditional grave candles are called znicz in Polish.[43] A similar practice of grave candles is also found in Eastern Orthodox Christian nations.
Traditionally cemetery management only involves the allocation of land for burial, the digging and filling of graves, and the maintenance of the grounds and landscaping. The construction and maintenance of headstones and other grave monuments are usually the responsibilities of surviving families and friends. However, increasingly, many people regard the resultant collection of individual headstones, concrete slabs and fences (some of which may be decayed or damaged) to be aesthetically unappealing, leading to new cemetery developments either standardising the shape or design of headstones or plaques, sometimes by providing a standard shaped marker as part of the service provided by the cemetery.
Cemetery authorities normally employ a full-time staff of caretakers to dig graves. The term "gravedigger" is still used in casual speech, though many cemeteries have adopted the term "caretaker", since their duties often involve maintenance of the cemetery grounds and facilities. The employment of skilled personnel for the preparation of graves is done not only to ensure the grave is dug in the correct location and at the correct depth, but also to relieve families from having to dig the grave for a recently dead relative, and as a matter of public safety, in order to prevent inexperienced visitors from injuring themselves, to ensure unused graves are properly covered, and to avoid legal liability that would result from an injury related to an improperly dug or uncovered grave. Preparation of the grave is usually done before the mourners arrive for the burial. The cemetery caretakers fill the grave after the burial, generally after the mourners have departed. Mechanical equipment, such as backhoes, is used to reduce the labour cost of digging and filling, but some hand shovelling may still be required.
In the United Kingdom the minimum depth from the surface to the highest lid is 36 inches (91.4 cm). There must be 6 inches (15.2 cm) between each coffin, which on average is 15 inches (38.1 cm) high. If the soil is free-draining and porous, only 24 inches (61 cm) of soil on top is required. Coffins may be interred at lesser depths or even above ground as long as they are encased in a concrete chamber.[44] Before 1977, double graves were dug to 8 feet (243.8 cm) and singles to 6 feet (182.9 cm). As a single grave is now dug to 54 inches (137.2 cm), old cemeteries contain many areas where new single graves can be dug on "old ground". This is considered a valid method of resource management and provides income to keep older cemeteries viable, thus forestalling the need for permanent closure, which would result in a reduction of their work force.
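As a rough illustration of the arithmetic in the preceding paragraph, the following Python sketch estimates the minimum digging depth for a grave holding a given number of stacked coffins from the figures quoted above (36 in of soil cover, or 24 in on free-draining soil, 15 in per coffin, 6 in between coffins); the function name and the assumption that every coffin has the average height are illustrative, not taken from any regulation.

def minimum_grave_depth_inches(coffins: int, free_draining: bool = False) -> int:
    # Illustrative only: assumes every coffin has the average 15 in height.
    if coffins < 1:
        raise ValueError("at least one coffin required")
    soil_cover = 24 if free_draining else 36      # soil above the highest lid
    coffin_height = 15                            # average coffin height
    gap_between_coffins = 6                       # required spacing between coffins
    return soil_cover + coffins * coffin_height + (coffins - 1) * gap_between_coffins

print(minimum_grave_depth_inches(1))   # 51, close to the 54 in single-grave figure above
print(minimum_grave_depth_inches(2))   # 72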
Usually there is a legal requirement to maintain records regarding the burials (or interment of ashes) within a cemetery. These burial registers usually contain (at a minimum) the name of the person buried, the date of burial and the location of the burial plots within the cemetery, although some contain far more detail. The Arlington National Cemetery, one of the United States' largest military cemeteries, has a registry, The ANC Explorer, which contains details such as photographs of the front and back of the tombstones.[45] Burial registers are an important resource for genealogy.
In order to physically manage the space within the cemetery (to avoid burials in existing graves) and to record locations in the burial register, most cemeteries have some systematic layout of graves in rows, generally grouped into larger sections as required. Often the cemetery displays this information in the form of a map, which is used both by the cemetery administration in managing their land use and also by friends and family members seeking to locate a particular grave within the cemetery.
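As a minimal sketch of the kind of record and layout described in the two preceding paragraphs, the Python example below models a burial register entry keyed by a section/row/plot location; the field names, the dataclass layout and the lookup structure are illustrative assumptions, not a standard register format.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class PlotLocation:
    section: str   # larger grouping within the cemetery
    row: int       # row within the section
    plot: int      # plot within the row

@dataclass
class BurialRegisterEntry:
    name: str               # name of the person buried
    burial_date: date       # date of burial (or interment of ashes)
    location: PlotLocation  # where the grave lies on the cemetery map

# A register as a lookup from location to the entries buried there
# (several entries may share one location where graves are shared or re-used).
register: dict[PlotLocation, list[BurialRegisterEntry]] = {}

def record_burial(entry: BurialRegisterEntry) -> None:
    register.setdefault(entry.location, []).append(entry)

record_burial(BurialRegisterEntry("Jane Doe", date(1902, 5, 14), PlotLocation("B", 3, 12)))
print(register[PlotLocation("B", 3, 12)][0].name)   # Jane Doe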
Cemetery authorities face a number of tensions in regard to the management of cemeteries.
One issue relates to cost. Traditionally a single payment is made at the time of burial, but the cemetery authority incurs expenses in cemetery maintenance over many decades. Many cemetery authorities find that their accumulated funds are not sufficient for the costs of long-term maintenance. This shortfall in funds for maintenance results in three main options: charge much higher prices for new burials, obtain some other kind of public subsidy, or neglect maintenance. For cemeteries without space for new burials, the options are even more limited. Public attitudes towards subsidies are highly variable. People with family buried in local cemeteries are usually quite concerned about neglect of cemetery maintenance and will usually argue in favour of public subsidy of local cemetery maintenance, whereas people without a personal connection to the cemetery often argue that public subsidy of private cemeteries is an inappropriate use of their taxes. Some jurisdictions require a certain amount of money be set aside in perpetuity and invested so that the interest earned can be used for maintenance.[47]
Another issue relates to the limited amount of land. In many larger towns and cities, the older cemeteries, which were initially considered to be large, often run out of space for new burials, and there is no vacant adjacent land available to extend the cemetery, or even land in the same general area on which to create new cemeteries. New cemeteries are generally established on the periphery of towns and cities, where large tracts of land are still available. However, people often wish to be buried in the same cemetery as other relatives, and are not interested in being buried in new cemeteries to which their family has no sense of connection, creating pressure to find more space in existing cemeteries.
A third issue is the maintenance of monuments and headstones, which are generally the responsibility of families, but often become neglected over time. Decay and damage through vandalism or cemetery maintenance practices can render monuments and headstones either unsafe or at least unsightly. On the other hand, some families do not forget the grave but constantly visit, leaving behind flowers, plants, and other decorative items that create their own maintenance problem.
All of these issues tend to put pressure on the re-use of grave sites within cemeteries. The re-use of graves already used for burial can cause considerable upset to family members. Although the authorities might declare that the grave is sufficiently old that there will be no human remains still present, nonetheless many people regard the re-use of graves (particularly their family's graves) as a desecration. Also re-use of a used grave involves the removal of any monuments and headstones, which may cause further distress to families (although families will typically be allowed to take away the monuments and headstones if they wish).
On the other hand, cemetery authorities are well aware that many old graves are forgotten and not visited, and that their re-use will not cause distress to anyone. However, there may be some older graves in a cemetery for which there are local and vocal descendants who will mount a public campaign against re-use. One pragmatic strategy is to publicly announce plans to re-use older graves and invite families to respond with any objections. Re-use then only occurs where there are no objections, allowing the "forgotten" graves to be re-used. Sometimes the cemetery authorities request a further payment to avoid re-use of a grave, but often this backfires politically.
A practical problem with regard to contacting families is that the person who initially purchased the burial plot(s) may have subsequently died and locating living family members, if any, many decades later is virtually impossible (or at least prohibitively expensive). Public notice about the proposed re-use of graves may or may not reach family members living further afield who may object to such practices. Therefore, it is possible that re-use could occur without family awareness.
Some cemeteries did foresee the need for re-use and included in their original terms and conditions a limited tenure on a grave site, and most new cemeteries follow this practice, having seen the problems faced by older cemeteries. Common practice in Europe is to place bones in an ossuary after the prescribed burial period is over.[47]
However, even when the cemetery has the legal right to re-use a grave, strong public opinion often forces the authorities to back down on that re-use. Also, even when cemeteries have a limited tenure provision in place, funding shortages can force them to contemplate re-use earlier than the original arrangements provided for.
Another type of grave site considered for re-use is the empty plot purchased years ago but never used. In principle it would seem easier to "re-use" such grave sites, as there can be no claims of desecration, but this is often complicated by the legal right to be buried obtained through the pre-purchase, as any limited tenure clause only takes effect after there has been a burial. Cemetery authorities suspect that in many cases the holders of these burial rights are probably dead and that nobody will exercise the right, but some families are aware of the burial rights they possess and do intend to exercise them as and when family members die. Again, the difficulty of locating the holders of these burial rights complicates the re-use of those graves.
As historic cemeteries begin to reach their capacity for full burials, alternative memorialization, such as collective memorials for cremated individuals, is becoming more common. Different cultures have different attitudes to destruction of cemeteries and use of the land for construction. In some countries it is considered normal to destroy the graves, while in others the graves are traditionally respected for a century or more. In many cases, after a suitable period of time has elapsed, the headstones are removed and the now former cemetery is converted to a recreational park or construction site. A more recent trend, particularly in South American cities, involves constructing high-rise buildings to house graves.[48]
Cemeteries in the United States may be relocated if the land is required for other reasons. For instance, many cemeteries in the southeastern United States were relocated by the Tennessee Valley Authority from areas about to be flooded by dam construction.[49] Cemeteries may also be moved so that the land can be reused for transportation structures,[50][51] public buildings,[52] or even private development.[53] Cemetery relocation is not necessarily possible in other parts of the world; in Alberta, Canada, for instance, the Cemetery Act expressly forbids the relocation of cemeteries or the mass exhumation of marked graves for any reason whatsoever.[54] This has caused significant problems in the provision of transportation services to the southern half of the City of Calgary, as the main southbound road connecting the south end of the city with downtown threads through a series of cemeteries founded in the 1930s. The light rail transit line running to the south end eventually had to be built directly under the road.
Cemetery authorities also face tension between the competing demands of efficient maintenance and the needs of mourners.
Labour costs in particular have risen substantially, so finding low-cost maintenance methods (meaning low-labour maintenance methods) is increasingly important.[citation needed] However, as discussed above, large mowers and string trimmers might be efficient but often cannot be used in cemeteries because they are physically too large to fit between graves or because they can damage the monuments and headstones. In this regard, older cemeteries designed at a time of relatively low-cost labour and limited automation tend to present the greatest difficulties for maintenance.
On the other hand, newer cemeteries might be designed to be more efficiently maintained with lower labour through the increased use of equipment, e.g. lawn cemeteries where the maintenance can be performed with a ride-on mower or lawn tractor. However, efficient maintenance of newer graves is often frustrated by the actions of mourners who often place flowers and other objects on graves. These objects often require manual intervention; in some cases objects will be picked up and returned after maintenance, in other cases (e.g. dead flowers) they will be disposed of.
Again, although cemetery authorities try to restrict the quantity and nature of objects placed on graves (a common restriction is to allow only fresh flowers, not in a vase or pot), mourning families might ignore any such regulations and become very upset if other objects are removed. In particular, in an era in which the death of children is relatively uncommon, some parents create quite large shrines at their child's grave, decorating them with toys, wind chimes, statues of angels and cherubs, etc. as a manifestation of their grief, adding items to the pile of objects on the grave progressively over time. Cemetery authorities have to try to deal with such situations sensitively, as strong emotions are involved. However, as well as creating maintenance problems, such "shrines" often prompt families with graves in the surrounding area to complain to the cemetery authorities about the "mess", as they believe it detracts from the dignity of their family's graves. Therefore, the cemetery authorities must find a solution that satisfies both parties.
In many countries, cemeteries are the subject of superstition and legend. They are sometimes used, usually at night, as the setting for supposed black magic ceremonies or similarly clandestine happenings, such as devil worship, grave-robbing (gold teeth and jewelry being preferred), thrill-seeking sexual encounters, or drug and alcohol abuse unrelated to the cemetery's aura (see below).
The legend of zombies, as romanticized by Wade Davis in The Serpent and the Rainbow, is not exceptional among cemetery myths, as cemeteries are believed to be places where witches and sorcerers get skulls and bones needed for their sinister rituals.
In the Afro-Brazilian urban mythos (such as Umbanda), there is a character loosely related to cemeteries and their aura: Zé Pilintra (in fact, Zé Pilintra is more related to bohemianism and night life than to cemeteries, where the reigning entity is Exu Caveira or Exu Cemitério, similar to the Voodoo Baron Samedi).
en/1134.html.txt
ADDED
@@ -0,0 +1,114 @@
Animation is a method in which figures are manipulated to appear as moving images. In traditional animation, images are drawn or painted by hand on transparent celluloid sheets to be photographed and exhibited on film. Today, most animations are made with computer-generated imagery (CGI). Computer animation can be very detailed 3D animation, while 2D computer animation can be used for stylistic reasons, low bandwidth or faster real-time renderings. Other common animation methods apply a stop motion technique to two and three-dimensional objects like paper cutouts, puppets or clay figures.
Commonly the effect of animation is achieved by a rapid succession of sequential images that minimally differ from each other. The illusion—as in motion pictures in general—is thought to rely on the phi phenomenon and beta movement, but the exact causes are still uncertain.
Analog mechanical animation media that rely on the rapid display of sequential images include the phénakisticope, zoetrope, flip book, praxinoscope and film. Television and video are popular electronic animation media that originally were analog and now operate digitally. For display on the computer, techniques like animated GIF and Flash animation were developed.
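The principle described above, a rapid succession of frames that differ only slightly, can be illustrated with a short Python sketch that writes an animated GIF using the Pillow imaging library; the library choice, frame count and timing are illustrative assumptions rather than part of the GIF format itself.

from PIL import Image, ImageDraw  # Pillow imaging library

frames = []
for i in range(24):                        # 24 slightly different images
    frame = Image.new("RGB", (120, 60), "white")
    draw = ImageDraw.Draw(frame)
    x = 4 * i                              # move a small square a few pixels per frame
    draw.rectangle([x, 20, x + 20, 40], fill="black")
    frames.append(frame)

# Played back quickly (about 12 frames per second here), the sequence
# reads as a single square sliding across the image.
frames[0].save("slide.gif", save_all=True, append_images=frames[1:],
               duration=83, loop=0)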
Animation is more pervasive than many people realize. Apart from short films, feature films, television series, animated GIFs and other media dedicated to the display of moving images, animation is also prevalent in video games, motion graphics, user interfaces and visual effects.[1]
The physical movement of image parts through simple mechanics—in for instance moving images in magic lantern shows—can also be considered animation. The mechanical manipulation of three-dimensional puppets and objects to emulate living beings has a very long history in automata. Electronic automata were popularized by Disney as animatronics.
Animators are artists who specialize in creating animation.
The word "animation" stems from the Latin "animātiōn", stem of "animātiō", meaning "a bestowing of life".[2] The primary meaning of the English word is "liveliness" and has been in use much longer than the meaning of "moving image medium".
Hundreds of years before the introduction of true animation, people from all over the world enjoyed shows with moving figures that were created and manipulated manually in puppetry, automata, shadow play and the magic lantern. The multi-media phantasmagoria shows that were very popular in West-European theatres from the late 18th century through the first half of the 19th century featured lifelike projections of moving ghosts and other frightful imagery in motion.
In 1833, the stroboscopic disc (better known as the phénakisticope) introduced the principle of modern animation with sequential images that were shown one by one in quick succession to form an optical illusion of motion pictures. Series of sequential images had occasionally been made over thousands of years, but the stroboscopic disc provided the first method to represent such images in fluent motion and for the first time had artists creating series with a proper systematic breakdown of movements. The stroboscopic animation principle was also applied in the zoetrope (1866), the flip book (1868) and the praxinoscope (1877). The average 19th-century animation contained about 12 images that were displayed as a continuous loop by spinning a device manually. The flip book often contained more pictures and had a beginning and end, but its animation would not last longer than a few seconds. The first to create much longer sequences seems to have been Charles-Émile Reynaud, who between 1892 and 1900 had much success with his 10- to 15-minute-long Pantomimes Lumineuses.
When cinematography eventually broke through in 1895 after animated pictures had been known for decades, the wonder of the realistic details in the new medium was seen as its biggest accomplishment. Animation on film was not commercialized until a few years later by manufacturers of optical toys, with chromolithography film loops (often traced from live-action footage) for adapted toy magic lanterns intended for kids to use at home. It would take some more years before animation reached movie theatres.
After earlier experiments by movie pioneers J. Stuart Blackton, Arthur Melbourne-Cooper, Segundo de Chomón and Edwin S. Porter (among others), Blackton's The Haunted Hotel (1907) was the first huge stop motion success, baffling audiences by showing objects that apparently moved by themselves in full photographic detail, without signs of any known stage trick.
Émile Cohl's Fantasmagorie (1908) is the oldest known example of what became known as traditional (hand-drawn) animation. Other great artistic and very influential short films were created by Ladislas Starevich with his puppet animations since 1910 and by Winsor McCay with detailed drawn animation in films such as Little Nemo (1911) and Gertie the Dinosaur (1914).
During the 1910s, the production of animated "cartoons" became an industry in the US.[3] Successful producer John Randolph Bray and animator Earl Hurd patented the cel animation process that dominated the animation industry for the rest of the century.[4][5] Felix the Cat, who debuted in 1919, became the first animated superstar.
In 1928, Steamboat Willie, featuring Mickey Mouse and Minnie Mouse, popularized film with synchronized sound and put Walt Disney's studio at the forefront of the animation industry. In 1932, Disney also introduced the innovation of full colour (in Flowers and Trees) as part of a three-year-long exclusive deal with Technicolor.
The enormous success of Mickey Mouse is seen as the start of the golden age of American animation that would last until the 1960s. The United States dominated the world market of animation with a plethora of cel-animated theatrical shorts. Several studios would introduce characters that would become very popular and would have long-lasting careers, including Walt Disney Productions' Goofy (1932) and Donald Duck (1934), Warner Bros. Cartoons' Looney Tunes characters like Daffy Duck (1937), Bugs Bunny (1938/1940), Tweety (1941/1942), Sylvester the Cat (1945), Wile E. Coyote and Road Runner (1949), Fleischer Studios/Paramount Cartoon Studios' Betty Boop (1930), Popeye (1933), Superman (1941) and Casper (1945), MGM cartoon studio's Tom and Jerry (1940) and Droopy, Walter Lantz Productions/Universal Studio Cartoons' Woody Woodpecker (1940), Terrytoons/20th Century Fox's Mighty Mouse (1942) and United Artists' Pink Panther (1963).
In 1917, Italian-Argentine director Quirino Cristiani made the first feature-length animated film, El Apóstol (now lost), which became a critical and commercial success. It was followed by Cristiani's Sin dejar rastros in 1918, but one day after its premiere the film was confiscated by the government.
After working on it for three years, Lotte Reiniger released the German feature-length silhouette animation Die Abenteuer des Prinzen Achmed in 1926, the oldest extant animated feature.
In 1937, Walt Disney Studios premiered their first animated feature, Snow White and the Seven Dwarfs, still one of the highest-grossing traditional animation features as of May 2020.[7][8] The Fleischer studios followed this example in 1939 with Gulliver's Travels with some success. Partly due to foreign markets being cut off by the Second World War, Disney's next features Pinocchio, Fantasia (both 1940) and Fleischer Studios' second animated feature Mr. Bug Goes to Town (1941/1942) failed at the box office. For decades afterwards Disney would be the only American studio to regularly produce animated features, until Ralph Bakshi became the first to also release more than a handful of features. Sullivan-Bluth Studios began to regularly produce animated features starting with An American Tail in 1986.
Although relatively few titles became as successful as Disney's features, other countries developed their own animation industries that produced both short and feature theatrical animations in a wide variety of styles, relatively often including stop motion and cutout animation techniques. Russia's Soyuzmultfilm animation studio, founded in 1936, produced 20 films (including shorts) per year on average and reached 1,582 titles in 2018. China, Czechoslovakia / Czech Republic, Italy, France and Belgium were other countries that more than occasionally released feature films, while Japan became a true powerhouse of animation production, with its own recognizable and influential anime style of effective limited animation.
Animation has been very popular on television since the 1950s, when television sets started to become common in most wealthy countries. Cartoons were mainly programmed for children, on convenient time slots, and especially US youth spent many hours watching Saturday-morning cartoons. Many classic cartoons found a new life on the small screen and by the end of the 1950s, production of new animated cartoons started to shift from theatrical releases to TV series. Hanna-Barbera Productions was especially prolific and had huge hit series, such as The Flintstones (1960–1966) (the first prime time animated series), Scooby-Doo (since 1969) and Belgian co-production The Smurfs (1981–1989). The constraints of American television programming and the demand for an enormous quantity resulted in cheaper and quicker limited animation methods and much more formulaic scripts. Quality dwindled until more daring animation surfaced in the late 1980s and in the early 1990s with hit series such as The Simpsons (since 1989) as part of a "renaissance" of American animation.
While US animated series also spawned successes internationally, many other countries produced their own child-oriented programming, relatively often preferring stop motion and puppetry over cel animation. Japanese anime TV series have been very successful internationally since the 1960s, and European producers looking for affordable cel animators relatively often started co-productions with Japanese studios, resulting in hit series such as Barbapapa (The Netherlands/Japan/France 1973–1977), Wickie und die starken Männer/小さなバイキング ビッケ (Vicky the Viking) (Austria/Germany/Japan 1974) and Il était une fois... (Once Upon a Time...) (France/Japan 1978).
Computer animation was developed gradually from the 1940s onward. 3D wireframe animation started popping up in the mainstream in the 1970s, with an early (short) appearance in the sci-fi thriller Futureworld (1976).
The Rescuers Down Under was the first feature film to be completely created digitally without a camera.[9] It was produced in a style that's very similar to traditional cel animation on the Computer Animation Production System (CAPS), developed by The Walt Disney Company in collaboration with Pixar in the late 1980s.
The so-called 3D style, more often associated with computer animation, has become extremely popular since Pixar's Toy Story (1995), the first computer-animated feature in this style.
Most of the cel animation studios switched to producing mostly computer animated films around the 1990s, as it proved cheaper and more profitable. Not only was the very popular 3D animation style generated with computers, but so were most of the films and series with a more traditional hand-crafted appearance, in which the charming characteristics of cel animation could be emulated with software, while new digital tools helped develop new styles and effects.[10][11][12][13][14][15]
In 2008, the animation market was worth US$68.4 billion.[16] Animated feature-length films returned the highest gross margins (around 52%) of all film genres between 2004 and 2013.[17] Animation as an art and industry continues to thrive as of the early 2020s.
The clarity of animation makes it a powerful tool for instruction, while its total malleability also allows exaggeration that can be employed to convey strong emotions and to thwart reality. It has therefore been widely used for purposes other than mere entertainment.
During World War II, animation was widely exploited for propaganda. Many American studios, including Warner Bros. and Disney, lent their talents and their cartoon characters to convince the public of certain war values. Some countries, including China, Japan and the United Kingdom, produced their first feature-length animations as part of their war efforts.
Animation has been very popular in television commercials, both due to its graphic appeal, and the humour it can provide. Some animated characters in commercials have survived for decades, such as Snap, Crackle and Pop in advertisements for Kellogg's cereals.[18] The legendary animation director Tex Avery was the producer of the first Raid "Kills Bugs Dead" commercials in 1966, which were very successful for the company.[19]
Apart from their success in movie theaters and television series, many cartoon characters would also prove extremely lucrative when licensed for all kinds of merchandise and for other media.
Animation has traditionally been very closely related to comic books. While many comic book characters found their way to the screen (which is often the case in Japan, where many manga are adapted into anime), original animated characters also commonly appear in comic books and magazines. Somewhat similarly, characters and plots for video games (an interactive animation medium) have been derived from films and vice versa.
Some of the original content produced for the screen can be used and marketed in other media. Stories and images can easily be adapted into children's books and other printed media. Songs and music have appeared on records and as streaming media.
While many animation companies commercially exploit their creations outside moving-image media, The Walt Disney Company is the best-known and most extreme example. Since first being licensed for a children's writing tablet in 1929, its Mickey Mouse mascot has been depicted on an enormous range of products, as have many other Disney characters. This may have influenced some pejorative use of Mickey's name, but licensed Disney products sell well, and so-called Disneyana has many avid collectors, including a dedicated Disneyana fan club (since 1984).
Disneyland opened in 1955 and features many attractions that were based on Disney's cartoon characters. Its enormous success spawned several other Disney theme parks and resorts. Disney's earnings from the theme parks have often been higher than those from its movies.
Criticism of animation has been common in media and cinema since its inception. With its popularity, a large amount of criticism has arisen, especially of animated feature-length films.[20] Many concerns about cultural representation and the psychological effects on children have been raised around the animation industry, which has remained rather politically unchanged and stagnant since its entry into mainstream culture.[21]
As with other forms of media, animation has instituted awards for excellence in the field. The original awards for animation were presented by the Academy of Motion Picture Arts and Sciences for animated shorts from the year 1932, during the 5th Academy Awards function. The first winner of the Academy Award was the short Flowers and Trees,[22] a production by Walt Disney Productions.[23][24] The Academy Award for a feature-length animated motion picture was only instituted for the year 2001, and awarded during the 74th Academy Awards in 2002. It was won by the film Shrek, produced by DreamWorks and Pacific Data Images.[25] Disney Animation and Pixar have produced the most films either to win or be nominated for the award. Beauty and the Beast was the first animated film nominated for Best Picture. Up and Toy Story 3 also received Best Picture nominations after the Academy expanded the number of nominees from five to ten.
Several other countries have instituted an award for the best-animated feature film as part of their national film awards: Africa Movie Academy Award for Best Animation (since 2008), BAFTA Award for Best Animated Film (since 2006), César Award for Best Animated Film (since 2011), Golden Rooster Award for Best Animation (since 1981), Goya Award for Best Animated Film (since 1989), Japan Academy Prize for Animation of the Year (since 2007), National Film Award for Best Animated Film (since 2006). Also since 2007, the Asia Pacific Screen Award for Best Animated Feature Film has been awarded at the Asia Pacific Screen Awards. Since 2009, the European Film Awards have awarded the European Film Award for Best Animated Film.
The Annie Award is another award presented for excellence in the field of animation. Unlike the Academy Awards, the Annie Awards are only received for achievements in the field of animation and not for any other field of technical and artistic endeavour. They were re-organized in 1992 to create a new field for Best Animated Feature. The 1990s winners were dominated by Walt Disney; however, newer studios, led by Pixar and DreamWorks, have now begun to consistently vie for this award.
The creation of non-trivial animation works (i.e., longer than a few seconds) has developed as a form of filmmaking, with certain unique aspects.[26] Traits common to both live-action and animated feature-length films are labor intensity and high production costs.[27]
The most important difference is that once a film is in the production phase, the marginal cost of one more shot is higher for animated films than live-action films.[28] It is relatively easy for a director to ask for one more take during principal photography of a live-action film, but every take on an animated film must be manually rendered by animators (although the task of rendering slightly different takes has been made less tedious by modern computer animation).[29] It is pointless for a studio to pay the salaries of dozens of animators to spend weeks creating a visually dazzling five-minute scene if that scene fails to effectively advance the plot of the film.[30] Thus, animation studios starting with Disney began the practice in the 1930s of maintaining story departments where storyboard artists develop every single scene through storyboards, then handing the film over to the animators only after the production team is satisfied that all the scenes make sense as a whole.[31] While live-action films are now also storyboarded, they enjoy more latitude to depart from storyboards (i.e., real-time improvisation).[32]
Another problem unique to animation is the requirement to maintain a film's consistency from start to finish, even as films have grown longer and teams have grown larger. Animators, like all artists, necessarily have individual styles, but must subordinate their individuality in a consistent way to whatever style is employed on a particular film.[33] Since the early 1980s, teams of about 500 to 600 people, of whom 50 to 70 are animators, typically have created feature-length animated films. It is relatively easy for two or three artists to match their styles; synchronizing those of dozens of artists is more difficult.[34]
This problem is usually solved by having a separate group of visual development artists develop an overall look and palette for each film before the animation begins. Character designers on the visual development team draw model sheets to show how each character should look with different facial expressions, posed in different positions, and viewed from different angles.[35][36] On traditionally animated projects, maquettes were often sculpted to further help the animators see how characters would look from different angles.[37][35]
Unlike live-action films, animated films were traditionally developed beyond the synopsis stage through the storyboard format; the storyboard artists would then receive credit for writing the film.[38] In the early 1960s, animation studios began hiring professional screenwriters to write screenplays (while also continuing to use story departments) and screenplays had become commonplace for animated films by the late 1980s.
Traditional animation (also called cel animation or hand-drawn animation) was the process used for most animated films of the 20th century.[39] The individual frames of a traditionally animated film are photographs of drawings, first drawn on paper.[40] To create the illusion of movement, each drawing differs slightly from the one before it. The animators' drawings are traced or photocopied onto transparent acetate sheets called cels,[41] which are filled in with paints in assigned colors or tones on the side opposite the line drawings.[42] The completed character cels are photographed one-by-one against a painted background by a rostrum camera onto motion picture film.[43]
The traditional cel animation process became obsolete by the beginning of the 21st century. Today, animators' drawings and the backgrounds are either scanned into or drawn directly into a computer system.[44][45] Various software programs are used to color the drawings and simulate camera movement and effects.[46] The final animated piece is output to one of several delivery media, including traditional 35 mm film and newer media with digital video.[47][44] The "look" of traditional cel animation is still preserved, and the character animators' work has remained essentially the same over the past 70 years.[37] Some animation producers have used the term "tradigital" (a play on the words "traditional" and "digital") to describe cel animation that uses significant computer technology.
Examples of traditionally animated feature films include Pinocchio (United States, 1940),[48] Animal Farm (United Kingdom, 1954), Lucky and Zorba (Italy, 1998), and The Illusionist (British-French, 2010). Traditionally animated films produced with the aid of computer technology include The Lion King (US, 1994), The Prince of Egypt (US, 1998), Akira (Japan, 1988),[49] Spirited Away (Japan, 2001), The Triplets of Belleville (France, 2003), and The Secret of Kells (Irish-French-Belgian, 2009).
Full animation refers to the process of producing high-quality traditionally animated films that regularly use detailed drawings and plausible movement,[50] with smooth animation.[51] Fully animated films can be made in a variety of styles, from more realistically animated works like those produced by the Walt Disney studio (The Little Mermaid, Beauty and the Beast, Aladdin, The Lion King) to the more 'cartoon' styles of the Warner Bros. animation studio. Many of the Disney animated features are examples of full animation, as are non-Disney works such as The Secret of NIMH (US, 1982), The Iron Giant (US, 1999), and Nocturna (Spain, 2007). Fully animated films are animated at 24 frames per second, with a combination of animation on ones and twos, meaning that drawings can be held for one frame out of 24 or two frames out of 24.[52]
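To make the arithmetic of "ones" and "twos" concrete, here is a minimal Python sketch; the 90-minute running time is an illustrative assumption, not a figure from any particular production.

```python
# Rough sketch: how many drawings a hand-drawn sequence needs at 24 frames
# per second, depending on whether it is animated "on ones" (a new drawing
# every frame) or "on twos" (each drawing held for two frames).
# The 90-minute running time below is only an illustrative assumption.

FRAME_RATE = 24          # frames per second for sound film
RUNNING_TIME_MIN = 90    # assumed feature length in minutes

total_frames = RUNNING_TIME_MIN * 60 * FRAME_RATE

drawings_on_ones = total_frames          # one drawing per frame
drawings_on_twos = total_frames // 2     # each drawing held for two frames

print(f"Total frames:     {total_frames}")        # 129600
print(f"Drawings on ones: {drawings_on_ones}")    # 129600
print(f"Drawings on twos: {drawings_on_twos}")    # 64800
```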
Limited animation involves the use of less detailed or more stylized drawings and methods of movement, usually resulting in a choppy or "skippy" animation.[53] Limited animation uses fewer drawings per second, thereby limiting the fluidity of the animation. This is a more economical technique. Pioneered by the artists at the American studio United Productions of America,[54] limited animation can be used as a method of stylized artistic expression, as in Gerald McBoing-Boing (US, 1951), Yellow Submarine (UK, 1968), and certain anime produced in Japan.[55] Its primary use, however, has been in producing cost-effective animated content for television (the work of Hanna-Barbera,[56] Filmation,[57] and other TV animation studios[58]) and later the Internet (web cartoons).
Rotoscoping is a technique patented by Max Fleischer in 1917 where animators trace live-action movement, frame by frame.[59] The source film can be directly copied from actors' outlines into animated drawings,[60] as in The Lord of the Rings (US, 1978), or used in a stylized and expressive manner, as in Waking Life (US, 2001) and A Scanner Darkly (US, 2006). Some other examples are Fire and Ice (US, 1983), Heavy Metal (1981), and Aku no Hana (2013).
Live-action/animation is a technique combining hand-drawn characters into live action shots or live-action actors into animated shots.[61] One of the earlier uses was in Koko the Clown when Koko was drawn over live-action footage.[62] Walt Disney and Ub Iwerks created a series of Alice Comedies (1923–1927), in which a live-action girl enters an animated world. Other examples include Allegro Non Troppo (Italy, 1976), Who Framed Roger Rabbit (US, 1988), Volere volare (Italy 1991), Space Jam (US, 1996) and Osmosis Jones (US, 2001).
Stop-motion animation is used to describe animation created by physically manipulating real-world objects and photographing them one frame of film at a time to create the illusion of movement.[63] There are many different types of stop-motion animation, usually named after the medium used to create the animation.[64] Computer software is widely available to create this type of animation; traditional stop-motion animation is usually less expensive but more time-consuming to produce than current computer animation.[64]
Computer animation encompasses a variety of techniques, the unifying factor being that the animation is created digitally on a computer.[46][87] 2D animation techniques tend to focus on image manipulation while 3D techniques usually build virtual worlds in which characters and objects move and interact.[88] 3D animation can create images that seem real to the viewer.[89]
2D animation figures are created or edited on the computer using 2D bitmap graphics and 2D vector graphics.[90] This includes automated computerized versions of traditional animation techniques, interpolated morphing,[91] onion skinning[92] and interpolated rotoscoping.
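As a rough illustration of what interpolated morphing (tweening) involves, the following Python sketch linearly interpolates the control points of a shape between two key drawings. The poses and the simple linear easing are illustrative assumptions; production software uses far more sophisticated curves.

```python
# Minimal sketch of interpolated morphing ("tweening") between two key poses.
# Each pose is a list of (x, y) control points; the in-between frames are
# generated by linear interpolation. Real 2D animation software uses more
# sophisticated easing curves; this only illustrates the principle.

def tween(pose_a, pose_b, t):
    """Return the pose at parameter t (0.0 = pose_a, 1.0 = pose_b)."""
    return [
        (ax + (bx - ax) * t, ay + (by - ay) * t)
        for (ax, ay), (bx, by) in zip(pose_a, pose_b)
    ]

key_a = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]   # first key drawing (illustrative)
key_b = [(0.5, 0.5), (2.0, 0.5), (2.0, 2.0)]   # second key drawing (illustrative)

# Generate five frames: the two keys plus three in-betweens.
for frame in range(5):
    pose = tween(key_a, key_b, frame / 4)
    print(frame, [(round(x, 2), round(y, 2)) for x, y in pose])
```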
2D animation has many applications, including analog computer animation, Flash animation, and PowerPoint animation. Cinemagraphs are still photographs in the form of an animated GIF file of which part is animated.[93]
Final line advection animation is a technique used in 2D animation,[94] to give artists and animators more influence and control over the final product as everything is done within the same department.[95] Speaking about using this approach in Paperman, John Kahrs said that "Our animators can change things, actually erase away the CG underlayer if they want, and change the profile of the arm."[96]
3D animation is digitally modeled and manipulated by an animator. The animator usually starts by creating a 3D polygon mesh to manipulate.[97] A mesh typically includes many vertices that are connected by edges and faces, which give the visual appearance of form to a 3D object or 3D environment.[97] Sometimes, the mesh is given an internal digital skeletal structure called an armature that can be used to control the mesh by weighting the vertices.[98][99] This process is called rigging and can be used in conjunction with key frames to create movement.[100]
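The sketch below illustrates, in heavily simplified form, how a rig can drive a mesh through key frames: each vertex carries a weight toward a single joint whose rotation is interpolated between two key frames. The tiny 2D "mesh", the weights and the linear interpolation are illustrative assumptions; real 3D packages use many joints, proper skinning algorithms and spline interpolation.

```python
import math

# Heavily simplified sketch of a rigged mesh: each vertex carries a weight
# toward a single armature joint, and the joint's rotation is read from
# key frames by linear interpolation. This only shows the idea of rigging
# plus key frames, not a production skinning algorithm.

mesh_vertices = [(1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]   # illustrative vertices
vertex_weights = [1.0, 0.5, 0.25]                       # influence of the joint

key_frames = {0: 0.0, 24: math.pi / 2}                  # frame -> joint angle (radians)

def joint_angle(frame):
    """Linearly interpolate the joint angle between the two key frames."""
    (f0, a0), (f1, a1) = sorted(key_frames.items())
    t = min(max((frame - f0) / (f1 - f0), 0.0), 1.0)
    return a0 + (a1 - a0) * t

def pose_mesh(frame):
    """Rotate each vertex about the joint (at the origin), scaled by its weight."""
    angle = joint_angle(frame)
    posed = []
    for (x, y), w in zip(mesh_vertices, vertex_weights):
        a = angle * w
        posed.append((x * math.cos(a) - y * math.sin(a),
                      x * math.sin(a) + y * math.cos(a)))
    return posed

print(pose_mesh(12))   # pose halfway between the two key frames
```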
Other techniques can be applied, such as mathematical functions (e.g., gravity, particle simulations), simulated fur or hair, and effects such as fire and water simulations.[101] These techniques fall under the category of 3D dynamics.[102]
en/1135.html.txt
ADDED
@@ -0,0 +1,116 @@
Film, also called movie, motion picture or moving picture, is a visual art-form used to simulate experiences that communicate ideas, stories, perceptions, feelings, beauty, or atmosphere through the use of moving images. These images are generally accompanied by sound, and more rarely, other sensory stimulations.[1] The word "cinema", short for cinematography, is often used to refer to filmmaking and the film industry, and to the art form that is the result of it.
The moving images of a film are created by photographing actual scenes with a motion-picture camera, by photographing drawings or miniature models using traditional animation techniques, by means of CGI and computer animation, or by a combination of some or all of these techniques, and other visual effects.
Traditionally, films were recorded onto celluloid film stock through a photochemical process and then shown through a movie projector onto a large screen. Contemporary films are often fully digital through the entire process of production, distribution, and exhibition, while films recorded in a photochemical form traditionally included an analogous optical soundtrack (a graphic recording of the spoken words, music and other sounds that accompany the images which runs along a portion of the film exclusively reserved for it, and is not projected).
Films are cultural artifacts created by specific cultures. They reflect those cultures, and, in turn, affect them. Film is considered to be an important art form, a source of popular entertainment, and a powerful medium for educating—or indoctrinating—citizens. The visual basis of film gives it a universal power of communication. Some films have become popular worldwide attractions through the use of dubbing or subtitles to translate the dialog into other languages.
The individual images that make up a film are called frames. In the projection of traditional celluloid films, a rotating shutter causes intervals of darkness as each frame, in turn, is moved into position to be projected, but the viewer does not notice the interruptions because of an effect known as persistence of vision, whereby the eye retains a visual image for a fraction of a second after its source disappears. The perception of motion is partly due to a psychological effect called the phi phenomenon.
The name "film" originates from the fact that photographic film (also called film stock) has historically been the medium for recording and displaying motion pictures. Many other terms exist for an individual motion-picture, including picture, picture show, moving picture, photoplay, and flick. The most common term in the United States is movie, while in Europe film is preferred. Common terms for the field in general include the big screen, the silver screen, the movies, and cinema; the last of these is commonly used, as an overarching term, in scholarly texts and critical essays. In early years, the word sheet was sometimes used instead of screen.
The art of film has drawn on several earlier traditions in fields such as oral storytelling, literature, theatre and visual arts. Forms of art and entertainment that had already featured moving and/or projected images include:
The stroboscopic animation principle was introduced in 1833 with the phénakisticope and also applied in the zoetrope since 1866, the flip book since 1868, and the praxinoscope since 1877, before it became the basic principle for cinematography.
Experiments with early phenakisticope-based animation projectors were made at least as early as 1843. Jules Duboscq marketed phénakisticope projection systems in France between 1853 and the 1890s.
Photography was introduced in 1839, but at first photographic emulsions needed such long exposures that the recording of moving subjects seemed impossible. At least as early as 1844, photographic series of subjects posed in different positions have been created to either suggest a motion sequence or to document a range of different viewing angles. The advent of stereoscopic photography, with early experiments in the 1840s and commercial success since the early 1850s, raised interest in completing the photographic medium with the addition of means to capture colour and motion. In 1849, Joseph Plateau published about the idea to combine his invention of the phénakisticope with the stereoscope, as suggested to him by stereoscope inventor Charles Wheatstone, and use photographs of plaster sculptures in different positions to be animated in the combined device. In 1852, Jules Duboscq patented such an instrument as the "Stéréoscope-fantascope, ou Bïoscope". He marginally advertised it for a short period. It was a commercial failure and no complete instrument has yet been located, but one bioscope disc has been preserved in the Plateau collection of the Ghent University. It has stereoscopic photographs of a machine.
By the late 1850s the first examples of instantaneous photography came about and provided hope that motion photography would soon be possible, but it took a few decades before it was successfully combined with a method to record series of sequential images in real-time. In 1878, Eadweard Muybridge eventually managed to take a series of photographs of a running horse with a battery of cameras in a line along the track and published the results as The Horse in Motion on cabinet cards. Muybridge, as well as Étienne-Jules Marey, Ottomar Anschütz and many others would create many more chronophotography studies. Muybridge had the contours of dozens of his chronophotographic series traced onto glass discs and projected them with his zoopraxiscope in his lectures from 1880 to 1895. Anschütz developed his own Electrotachyscope in 1887 to project 24 diapositive photographic images on glass disks as moving images, looped as long as deemed interesting for the audience.
Émile Reynaud already mentioned the possibility of projecting the images in his 1877 patent application for the praxinoscope. He presented a praxinoscope projection device at the Société française de photographie on 4 June 1880, but did not market his praxinoscope as a projection device before 1882. He then further developed the device into the Théâtre Optique, which could project longer sequences with separate backgrounds, patented in 1888. He created several movies for the machine by painting images on hundreds of gelatin plates that were mounted into cardboard frames and attached to a cloth band. From 28 October 1892 to March 1900 Reynaud gave over 12,800 shows to a total of over 500,000 visitors at the Musée Grévin in Paris.
By the end of the 1880s, the introduction of lengths of celluloid photographic film and the invention of motion picture cameras, which could photograph an indefinitely long rapid sequence of images using only one lens, allowed several minutes of action to be captured and stored on a single compact reel of film. Some early films were made to be viewed by one person at a time through a "peep show" device such as the Kinetoscope and the mutoscope. Others were intended for a projector, mechanically similar to the camera and sometimes actually the same machine, which was used to shine an intense light through the processed and printed film and into a projection lens so that these "moving pictures" could be shown tremendously enlarged on a screen for viewing by an entire audience. The first kinetoscope film shown in public exhibition was Blacksmith Scene, produced by Edison Manufacturing Company in 1893. The following year the company would begin Edison Studios, which became an early leader in the film industry with notable early shorts including The Kiss, and would go on to produce close to 1,200 films.
The first public screenings of films at which admission was charged were made in 1895 by the American Woodville Latham and his sons, using films produced by their Eidoloscope company,[2] and by the – arguably better known – French brothers Auguste and Louis Lumière with ten of their own productions.[citation needed] Private screenings had preceded these by several months, with Latham's slightly predating the Lumière brothers'.[citation needed]
The earliest films were simply one static shot that showed an event or action with no editing or other cinematic techniques. Around the turn of the 20th century, films started stringing several scenes together to tell a story. The scenes were later broken up into multiple shots photographed from different distances and angles. Other techniques such as camera movement were developed as effective ways to tell a story with film. Until sound film became commercially practical in the late 1920s, motion pictures were a purely visual art, but these innovative silent films had gained a hold on the public imagination. Rather than leave audiences with only the noise of the projector as an accompaniment, theater owners hired a pianist or organist or, in large urban theaters, a full orchestra to play music that fit the mood of the film at any given moment. By the early 1920s, most films came with a prepared list of sheet music to be used for this purpose, and complete film scores were composed for major productions.
The rise of European cinema was interrupted by the outbreak of World War I, while the film industry in the United States flourished with the rise of Hollywood, typified most prominently by the innovative work of D. W. Griffith in The Birth of a Nation (1915) and Intolerance (1916). However, in the 1920s, European filmmakers such as Eisenstein, F. W. Murnau and Fritz Lang, in many ways inspired by the meteoric wartime progress of film through Griffith, along with the contributions of Charles Chaplin, Buster Keaton and others, quickly caught up with American film-making and continued to further advance the medium.
In the 1920s, the development of electronic sound recording technologies made it practical to incorporate a soundtrack of speech, music and sound effects synchronized with the action on the screen.[citation needed] The resulting sound films were initially distinguished from the usual silent "moving pictures" or "movies" by calling them "talking pictures" or "talkies."[citation needed] The revolution they wrought was swift. By 1930, silent film was practically extinct in the US and already being referred to as "the old medium."[citation needed]
Another major technological development was the introduction of "natural color," which meant color that was photographically recorded from nature rather than added to black-and-white prints by hand-coloring, stencil-coloring or other arbitrary procedures, although the earliest processes typically yielded colors which were far from "natural" in appearance.[citation needed] While the advent of sound films quickly made silent films and theater musicians obsolete, color replaced black-and-white much more gradually.[citation needed] The pivotal innovation was the introduction of the three-strip version of the Technicolor process, first used for animated cartoons in 1932, then also for live-action short films and isolated sequences in a few feature films, then for an entire feature film, Becky Sharp, in 1935. The expense of the process was daunting, but favorable public response in the form of increased box office receipts usually justified the added cost. The number of films made in color slowly increased year after year.
In the early 1950s, the proliferation of black-and-white television started seriously depressing North American theater attendance.[citation needed] In an attempt to lure audiences back into theaters, bigger screens were installed, widescreen processes, polarized 3D projection, and stereophonic sound were introduced, and more films were made in color, which soon became the rule rather than the exception. Some important mainstream Hollywood films were still being made in black-and-white as late as the mid-1960s, but they marked the end of an era. Color television receivers had been available in the US since the mid-1950s, but at first, they were very expensive and few broadcasts were in color. During the 1960s, prices gradually came down, color broadcasts became common, and sales boomed. The overwhelming public verdict in favor of color was clear. After the final flurry of black-and-white films had been released in mid-decade, all Hollywood studio productions were filmed in color, with the usual exceptions made only at the insistence of "star" filmmakers such as Peter Bogdanovich and Martin Scorsese.[citation needed]
The decades following the decline of the studio system in the 1960s saw changes in the production and style of film. Various New Wave movements (including the French New Wave, Indian New Wave, Japanese New Wave, and New Hollywood) and the rise of film-school-educated independent filmmakers contributed to the changes the medium experienced in the latter half of the 20th century. Digital technology has been the driving force for change throughout the 1990s and into the 2000s. Digital 3D projection largely replaced earlier problem-prone 3D film systems and has become popular in the early 2010s.[citation needed]
"Film theory" seeks to develop concise and systematic concepts that apply to the study of film as art. The concept of film as an art-form began in 1911 with Ricciotto Canudo's The Birth of the Sixth Art. Formalist film theory, led by Rudolf Arnheim, Béla Balázs, and Siegfried Kracauer, emphasized how film differed from reality and thus could be considered a valid fine art. André Bazin reacted against this theory by arguing that film's artistic essence lay in its ability to mechanically reproduce reality, not in its differences from reality, and this gave rise to realist theory. More recent analysis spurred by Jacques Lacan's psychoanalysis and Ferdinand de Saussure's semiotics among other things has given rise to psychoanalytic film theory, structuralist film theory, feminist film theory, and others. On the other hand, critics from the analytical philosophy tradition, influenced by Wittgenstein, try to clarify misconceptions used in theoretical studies and produce analysis of a film's vocabulary and its link to a form of life.
Film is considered to have its own language. James Monaco wrote a classic text on film theory, titled "How to Read a Film," that addresses this. Director Ingmar Bergman famously said, "Andrei Tarkovsky for me is the greatest director, the one who invented a new language, true to the nature of film, as it captures life as a reflection, life as a dream." An example of the language is a sequence of back and forth images of one speaking actor's left profile, followed by another speaking actor's right profile, then a repetition of this, which is a language understood by the audience to indicate a conversation. This describes another theory of film, the 180-degree rule, as a visual story-telling device with an ability to place a viewer in a context of being psychologically present through the use of visual composition and editing. The "Hollywood style" includes this narrative theory, due to the overwhelming practice of the rule by movie studios based in Hollywood, California, during film's classical era. Another example of cinematic language is having a shot that zooms in on the forehead of an actor with an expression of silent reflection that cuts to a shot of a younger actor who vaguely resembles the first actor, indicating that the first person is remembering a past self, an edit of compositions that causes a time transition.
Montage is the technique by which separate pieces of film are selected, edited, and then pieced together to make a new section of film. A scene could show a man going into battle, with flashbacks to his youth and to his home-life and with added special effects, placed into the film after filming is complete. As these were all filmed separately, and perhaps with different actors, the final version is called a montage. Directors developed a theory of montage, beginning with Eisenstein and the complex juxtaposition of images in his film Battleship Potemkin.[3] Incorporation of musical and visual counterpoint, and scene development through mise en scene, editing, and effects has led to more complex techniques comparable to those used in opera and ballet.
— Roger Ebert (1986)[4]
Film criticism is the analysis and evaluation of films. In general, these works can be divided into two categories: academic criticism by film scholars and journalistic film criticism that appears regularly in newspapers and other media. Film critics working for newspapers, magazines, and broadcast media mainly review new releases. Normally they only see any given film once and have only a day or two to formulate their opinions. Despite this, critics have an important impact on the audience response and attendance at films, especially those of certain genres. Mass marketed action, horror, and comedy films tend not to be greatly affected by a critic's overall judgment of a film. The plot summary and description of a film and the assessment of the director's and screenwriters' work that makes up the majority of most film reviews can still have an important impact on whether people decide to see a film. For prestige films such as most dramas and art films, the influence of reviews is important. Poor reviews from leading critics at major papers and magazines will often reduce audience interest and attendance.
The impact of a reviewer on a given film's box office performance is a matter of debate. Some observers claim that movie marketing in the 2000s is so intense, well-coordinated and well financed that reviewers cannot prevent a poorly written or filmed blockbuster from attaining market success. However, the cataclysmic failure of some heavily promoted films which were harshly reviewed, as well as the unexpected success of critically praised independent films indicates that extreme critical reactions can have considerable influence. Other observers note that positive film reviews have been shown to spark interest in little-known films. Conversely, there have been several films in which film companies have so little confidence that they refuse to give reviewers an advanced viewing to avoid widespread panning of the film. However, this usually backfires, as reviewers are wise to the tactic and warn the public that the film may not be worth seeing and the films often do poorly as a result. Journalist film critics are sometimes called film reviewers. Critics who take a more academic approach to films, through publishing in film journals and writing books about films using film theory or film studies approaches, study how film and filming techniques work, and what effect they have on people. Rather than having their reviews published in newspapers or appearing on television, their articles are published in scholarly journals or up-market magazines. They also tend to be affiliated with colleges or universities as professors or instructors.
The making and showing of motion pictures became a source of profit almost as soon as the process was invented. Upon seeing how successful their new invention, and its product, was in their native France, the Lumières quickly set about touring the Continent to exhibit the first films privately to royalty and publicly to the masses. In each country, they would normally add new, local scenes to their catalogue and, quickly enough, found local entrepreneurs in the various countries of Europe to buy their equipment and photograph, export, import, and screen additional product commercially. The Oberammergau Passion Play of 1898[citation needed] was the first commercial motion picture ever produced. Other pictures soon followed, and motion pictures became a separate industry that overshadowed the vaudeville world. Dedicated theaters and companies formed specifically to produce and distribute films, while motion picture actors became major celebrities and commanded huge fees for their performances. By 1917 Charlie Chaplin had a contract that called for an annual salary of one million dollars. From 1931 to 1956, film was also the only image storage and playback system for television programming until the introduction of videotape recorders.
In the United States, much of the film industry is centered around Hollywood, California. Other regional centers exist in many parts of the world, such as Mumbai-centered Bollywood, the Indian film industry's Hindi cinema which produces the largest number of films in the world.[5] Though the expense involved in making films has led cinema production to concentrate under the auspices of movie studios, recent advances in affordable film making equipment have allowed independent film productions to flourish.
Profit is a key force in the industry, due to the costly and risky nature of filmmaking; many films have large cost overruns, an example being Kevin Costner's Waterworld. Yet many filmmakers strive to create works of lasting social significance. The Academy Awards (also known as "the Oscars") are the most prominent film awards in the United States, providing recognition each year to films, based on their artistic merits. There is also a large industry for educational and instructional films made in lieu of or in addition to lectures and texts. Revenue in the industry is sometimes volatile due to the reliance on blockbuster films released in movie theaters. The rise of alternative home entertainment has raised questions about the future of the cinema industry, and Hollywood employment has become less reliable, particularly for medium and low-budget films.[6]
Derivative academic fields of study may both interact with and develop independently of filmmaking, as in film theory and analysis. Fields of academic study have been created that are derivative or dependent on the existence of film, such as film criticism, film history, divisions of film propaganda in authoritarian governments, or psychological on subliminal effects (e.g., of a flashing soda can during a screening). These fields may further create derivative fields, such as a movie review section in a newspaper or a television guide. Sub-industries can spin off from film, such as popcorn makers, and film-related toys (e.g., Star Wars figures). Sub-industries of pre-existing industries may deal specifically with film, such as product placement and other advertising within films.
The terminology used for describing motion pictures varies considerably between British and American English. In British usage, the name of the medium is "film". The word "movie" is understood but seldom used.[7][8] Additionally, "the pictures" (plural) is used semi-frequently to refer to the place where movies are exhibited, while in American English this may be called "the movies", but it is becoming outdated. In other countries, the place where movies are exhibited may be called a cinema or movie theatre. By contrast, in the United States, "movie" is the predominant form. Although the words "film" and "movie" are sometimes used interchangeably, "film" is more often used when considering artistic, theoretical, or technical aspects. The term "movies" more often refers to entertainment or commercial aspects, as where to go for fun evening on a date. For example, a book titled "How to Understand a Film" would probably be about the aesthetics or theory of film, while a book entitled "Let's Go to the Movies" would probably be about the history of entertaining movies and blockbusters.
Further terminology is used to distinguish various forms and media used in the film industry. "Motion pictures" and "moving pictures" are frequently used terms for film and movie productions specifically intended for theatrical exhibition, such as, for instance, Batman. "DVD" and "videotape" are video formats that can reproduce a photochemical film. A reproduction based on such is called a "transfer." After the advent of theatrical film as an industry, the television industry began using videotape as a recording medium. For many decades, tape was solely an analog medium onto which moving images could be either recorded or transferred. "Film" and "filming" refer to the photochemical medium that chemically records a visual image and the act of recording respectively. However, the act of shooting images with other visual media, such as with a digital camera, is still called "filming" and the resulting works often called "films" as interchangeable to "movies," despite not being shot on film. "Silent films" need not be utterly silent, but are films and movies without an audible dialogue, including those that have a musical accompaniment. The word, "Talkies," refers to the earliest sound films created to have audible dialogue recorded for playback along with the film, regardless of a musical accompaniment. "Cinema" either broadly encompasses both films and movies, or it is roughly synonymous with film and theatrical exhibition, and both are capitalized when referring to a category of art. The "silver screen" refers to the projection screen used to exhibit films and, by extension, is also used as a metonym for the entire film industry.
"Widescreen" refers to a larger width to height in the frame, compared to earlier historic aspect ratios.[9] A "feature-length film", or "feature film", is of a conventional full length, usually 60 minutes or more, and can commercially stand by itself without other films in a ticketed screening.[10] A "short" is a film that is not as long as a feature-length film, often screened with other shorts, or preceding a feature-length film. An "independent" is a film made outside the conventional film industry.
In US usage, one talks of a "screening" or "projection" of a movie or video on a screen at a public or private "theater." In British English, a "film showing" happens at a cinema (never a "theatre", which is a different medium and place altogether).[8] A cinema usually refers to an arena designed specifically to exhibit films, where the screen is affixed to a wall, while a theater usually refers to a place where live, non-recorded action or combination thereof occurs from a podium or other type of stage, including the amphitheater. Theaters can still screen movies in them, though the theater would be retrofitted to do so. One might propose "going to the cinema" when referring to the activity, or sometimes "to the pictures" in British English, whereas the US expression is usually "going to the movies." A cinema usually shows a mass-marketed movie using a front-projection screen process with either a film projector or, more recently, with a digital projector. But, cinemas may also show theatrical movies from their home video transfers that include Blu-ray Disc, DVD, and videocassette when they possess sufficient projection quality or based upon need, such as movies that exist only in their transferred state, which may be due to the loss or deterioration of the film master and prints from which the movie originally existed. Due to the advent of digital film production and distribution, physical film might be absent entirely. A "double feature" is a screening of two independently marketed, stand-alone feature films. A "viewing" is a watching of a film. "Sales" and "at the box office" refer to tickets sold at a theater, or more currently, rights sold for individual showings. A "release" is the distribution and often simultaneous screening of a film. A "preview" is a screening in advance of the main release.
Any film may also have a "sequel", which portrays events following those in the film. Bride of Frankenstein is an early example. When there is more than one film with the same characters, story arcs, or subject themes, these movies become a "series," such as the James Bond series. Existing outside a specific story timeline usually does not exclude a film from being part of a series. A film that portrays events occurring earlier in a timeline than those in another film, but is released after that film, is sometimes called a "prequel," an example being Butch and Sundance: The Early Days.
The "credits," or "end credits," is a list that gives credit to the people involved in the production of a film. Films from before the 1970s usually start a film with credits, often ending with only a title card, saying "The End" or some equivalent, often an equivalent that depends on the language of the production[citation needed]. From then onward, a film's credits usually appear at the end of most films. However, films with credits that end a film often repeat some credits at or near the start of a film and therefore appear twice, such as that film's acting leads, while less frequently some appearing near or at the beginning only appear there, not at the end, which often happens to the director's credit. The credits appearing at or near the beginning of a film are usually called "titles" or "beginning titles." A post-credits scene is a scene shown after the end of the credits. Ferris Bueller's Day Off has a post-credit scene in which Ferris tells the audience that the film is over and they should go home.
A film's "cast" refers to a collection of the actors and actresses who appear, or "star," in a film. A star is an actor or actress, often a popular one, and in many cases, a celebrity who plays a central character in a film. Occasionally the word can also be used to refer to the fame of other members of the crew, such as a director or other personality, such as Martin Scorsese. A "crew" is usually interpreted as the people involved in a film's physical construction outside cast participation, and it could include directors, film editors, photographers, grips, gaffers, set decorators, prop masters, and costume designers. A person can both be part of a film's cast and crew, such as Woody Allen, who directed and starred in Take the Money and Run.
A "film goer," "movie goer," or "film buff" is a person who likes or often attends films and movies, and any of these, though more often the latter, could also see oneself as a student to films and movies or the filmic process. Intense interest in films, film theory, and film criticism, is known as cinephilia. A film enthusiast is known as a cinephile or cineaste.
A preview performance refers to a showing of a film to a select audience, usually for the purposes of corporate promotions, before the public film premiere itself. Previews are sometimes used to judge audience reaction, which if unexpectedly negative, may result in recutting or even refilming certain sections based on the audience response. One example of a film that was changed after a negative response from the test screening is 1982's First Blood. After the test audience responded very negatively to the death of protagonist John Rambo, a Vietnam veteran, at the end of the film, the company wrote and re-shot a new ending in which the character survives.[11]
Trailers or previews are advertisements for films that will be shown in 1 to 3 months at a cinema. Back in the early days of cinema, with theaters that had only one or two screens, only certain trailers were shown for the films that were going to be shown there. Later, when theaters added more screens or new theaters were built with a lot of screens, all different trailers were shown even if they weren't going to play that film in that theater. Film studios realized that the more trailers that were shown (even if it wasn't going to be shown in that particular theater) the more patrons would go to a different theater to see the film when it came out. The term "trailer" comes from their having originally been shown at the end of a film program. That practice did not last long because patrons tended to leave the theater after the films ended, but the name has stuck. Trailers are now shown before the film (or the "A film" in a double feature program) begins. Film trailers are also common on DVDs and Blu-ray Discs, as well as on the Internet and mobile devices. Trailers are created to be engaging and interesting for viewers. As a result, in the Internet era, viewers often seek out trailers to watch them. Of the ten billion videos watched online annually in 2008, film trailers ranked third, after news and user-created videos.[12] Teasers are a much shorter preview or advertisement that lasts only 10 to 30 seconds. Teasers are used to get patrons excited about a film coming out in the next six to twelve months. Teasers may be produced even before the film production is completed.
Film is used for a range of goals, including education and propaganda. When the purpose is primarily educational, a film is called an "educational film". Examples are recordings of academic lectures and experiments, or a film based on a classic novel. Film may be propaganda, in whole or in part, such as the films made by Leni Riefenstahl in Nazi Germany, US war film trailers during World War II, or artistic films made under Stalin by Sergei Eisenstein. They may also be works of political protest, as in the films of Andrzej Wajda, or more subtly, the films of Andrei Tarkovsky. The same film may be considered educational by some, and propaganda by others as the categorization of a film can be subjective.
At its core, the means to produce a film depend on the content the filmmaker wishes to show, and the apparatus for displaying it: the zoetrope merely requires a series of images on a strip of paper. Film production can, therefore, take as little as one person with a camera (or even without a camera, as in Stan Brakhage's 1963 film Mothlight), or thousands of actors, extras, and crew members for a live-action, feature-length epic.
The necessary steps for almost any film can be boiled down to conception, planning, execution, revision, and distribution. The more involved the production, the more significant each of the steps becomes. In a typical production cycle of a Hollywood-style film, these main stages are defined as development, pre-production, production, post-production and distribution.
This production cycle usually takes three years. The first year is taken up with development. The second year comprises preproduction and production. The third year, post-production and distribution. The bigger the production, the more resources it takes, and the more important financing becomes; most feature films are artistic works from the creators' perspective (e.g., film director, cinematographer, screenwriter) and for-profit business entities for the production companies.
A film crew is a group of people hired by a film company, employed during the "production" or "photography" phase, for the purpose of producing a film or motion picture. Crew is distinguished from cast, who are the actors who appear in front of the camera or provide voices for characters in the film. The crew interacts with but is also distinct from the production staff, consisting of producers, managers, company representatives, their assistants, and those whose primary responsibility falls in pre-production or post-production phases, such as screenwriters and film editors. Communication between production and crew generally passes through the director and his/her staff of assistants. Medium-to-large crews are generally divided into departments with well-defined hierarchies and standards for interaction and cooperation between the departments. Other than acting, the crew handles everything in the photography phase: props and costumes, shooting, sound, electrics (i.e., lights), sets, and production special effects. Caterers (known in the film industry as "craft services") are usually not considered part of the crew.
Film stock consists of transparent celluloid, acetate, or polyester base coated with an emulsion containing light-sensitive chemicals. Cellulose nitrate was the first type of film base used to record motion pictures, but due to its flammability was eventually replaced by safer materials. Stock widths and the film format for images on the reel have had a rich history, though most large commercial films are still shot on (and distributed to theaters) as 35 mm prints.
Originally moving picture film was shot and projected at various speeds using hand-cranked cameras and projectors; though 1000 frames per minute (16 2/3 frame/s) is generally cited as a standard silent speed, research indicates most films were shot between 16 frame/s and 23 frame/s and projected from 18 frame/s on up (often reels included instructions on how fast each scene should be shown).[13] When sound film was introduced in the late 1920s, a constant speed was required for the sound head. 24 frames per second was chosen because it was the slowest (and thus cheapest) speed which allowed for sufficient sound quality.[citation needed] Improvements since the late 19th century include the mechanization of cameras – allowing them to record at a consistent speed, quiet camera design – allowing sound recorded on-set to be usable without requiring large "blimps" to encase the camera, the invention of more sophisticated filmstocks and lenses, allowing directors to film in increasingly dim conditions, and the development of synchronized sound, allowing sound to be recorded at exactly the same speed as its corresponding action. The soundtrack can be recorded separately from shooting the film, but for live-action pictures, many parts of the soundtrack are usually recorded simultaneously.
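The frame-rate arithmetic above is easy to check; this short Python sketch (with an illustrative 60-second shot length) converts a cranking speed in frames per minute to frames per second and shows why silent footage appears sped up when projected at the 24 frame/s sound standard.

```python
# Sketch of the frame-rate arithmetic mentioned above. 1000 frames per
# minute corresponds to 16 2/3 frames per second; silent footage shot at a
# slower rate runs faster (and therefore shorter) when projected at the
# 24 frame/s sound standard. The 60-second shot length is illustrative.

def frames_per_second(frames_per_minute):
    return frames_per_minute / 60.0

print(frames_per_second(1000))            # 16.666... frame/s

shot_seconds_at_18fps = 60                # assumed length as shot at 18 frame/s
frames_in_shot = 18 * shot_seconds_at_18fps
seconds_when_projected_at_24 = frames_in_shot / 24
print(seconds_when_projected_at_24)       # 45.0 seconds: motion appears sped up
```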
As a medium, film is not limited to motion pictures, since the technology developed as the basis for photography. It can be used to present a progressive sequence of still images in the form of a slideshow. Film has also been incorporated into multimedia presentations and often has importance as primary historical documentation. However, historic films have problems in terms of preservation and storage, and the motion picture industry is exploring many alternatives. Most films on cellulose nitrate base have been copied onto modern safety films. Some studios save color films through the use of separation masters: three B&W negatives each exposed through red, green, or blue filters (essentially a reverse of the Technicolor process). Digital methods have also been used to restore films, although their continued obsolescence cycle makes them (as of 2006) a poor choice for long-term preservation. Film preservation of decaying film stock is a matter of concern to both film historians and archivists and to companies interested in preserving their existing products in order to make them available to future generations (and thereby increase revenue). Preservation is generally a higher concern for nitrate and single-strip color films, due to their high decay rates; black-and-white films on safety bases and color films preserved on Technicolor imbibition prints tend to keep up much better, assuming proper handling and storage.
Some films in recent decades have been recorded using analog video technology similar to that used in television production. Modern digital video cameras and digital projectors are gaining ground as well. These approaches are preferred by some film-makers, especially because footage shot with digital cinema can be evaluated and edited with non-linear editing systems (NLE) without waiting for the film stock to be processed. The migration was gradual, and as of 2005, most major motion pictures were still shot on film.[needs update]
Independent filmmaking often takes place outside Hollywood, or other major studio systems. An independent film (or indie film) is a film initially produced without financing or distribution from a major film studio. Creative, business and technological reasons have all contributed to the growth of the indie film scene in the late 20th and early 21st century. On the business side, the costs of big-budget studio films also lead to conservative choices in cast and crew. There is a trend in Hollywood towards co-financing (over two-thirds of the films put out by Warner Bros. in 2000 were joint ventures, up from 10% in 1987).[14] A hopeful director is almost never given the opportunity to get a job on a big-budget studio film unless he or she has significant industry experience in film or television. Also, the studios rarely produce films with unknown actors, particularly in lead roles.
Before the advent of digital alternatives, the cost of professional film equipment and stock was also a hurdle to being able to produce, direct, or star in a traditional studio film. But the advent of consumer camcorders in 1985, and more importantly, the arrival of high-resolution digital video in the early 1990s, have lowered the technology barrier to film production significantly. Both production and post-production costs have been significantly lowered; in the 2000s, the hardware and software for post-production can be installed in a commodity-based personal computer. Technologies such as DVDs, FireWire connections and a wide variety of professional and consumer-grade video editing software make film-making relatively affordable.
Since the introduction of digital video (DV) technology, the means of production have become more democratized. Filmmakers can conceivably shoot a film with a digital video camera and edit the film, create and edit the sound and music, and mix the final cut on a high-end home computer. However, while the means of production may be democratized, financing, distribution, and marketing remain difficult to accomplish outside the traditional system. Most independent filmmakers rely on film festivals to get their films noticed and sold for distribution. The arrival of internet-based video websites such as YouTube and Veoh has further changed the filmmaking landscape, enabling indie filmmakers to make their films available to the public.
An open content film is much like an independent film, but it is produced through open collaborations; its source material is available under a license that is more permissive than traditional copyright, allowing other parties to create fan fiction or derivative works. Like independent filmmaking, open source filmmaking takes place outside Hollywood or other major studio systems.
A fan film is a film or video inspired by a film, television program, comic book or a similar source, created by fans rather than by the source's copyright holders or creators. Fan filmmakers have traditionally been amateurs, but some of the most notable films have actually been produced by professional filmmakers as film school class projects or as demonstration reels. Fan films vary tremendously in length, from short faux-teaser trailers for non-existent motion pictures to rarer full-length motion pictures.
Film distribution is the process through which a film is made available for viewing by an audience. This is normally the task of a professional film distributor, who would determine the marketing strategy of the film, the media by which a film is to be exhibited or made available for viewing, and may set the release date and other matters. The film may be exhibited directly to the public either through a movie theater (historically the main way films were distributed) or television for personal home viewing (including on DVD-Video or Blu-ray Disc, video-on-demand, online downloading, television programs through broadcast syndication etc.). Other ways of distributing a film include rental or personal purchase of the film in a variety of media and formats, such as VHS tape or DVD, or Internet downloading or streaming using a computer.
Animation is a technique in which each frame of a film is produced individually, whether generated as a computer graphic, or by photographing a drawn image, or by repeatedly making small changes to a model unit (see claymation and stop motion), and then photographing the result with a special animation camera. When the frames are strung together and the resulting film is viewed at a speed of 16 or more frames per second, there is an illusion of continuous movement (due to the phi phenomenon). Generating such a film is very labor-intensive and tedious, though the development of computer animation has greatly sped up the process. Because animation is very time-consuming and often very expensive to produce, the majority of animation for TV and films comes from professional animation studios. However, the field of independent animation has existed at least since the 1950s, with animation being produced by independent studios (and sometimes by a single person). Several independent animation producers have gone on to enter the professional animation industry.
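A small worked example of why frame-by-frame animation is so labor-intensive (illustrative Python; the seven-minute running time and the practice of holding each drawing for two frames, "shooting on twos", are assumptions used only to show the arithmetic, and the latter connects to the limited-animation shortcuts described in the next paragraph):

def drawings_needed(running_time_minutes, fps=24, frames_per_drawing=1):
    # Individual images required when each drawing is held for a fixed number of frames.
    return int(running_time_minutes * 60 * fps / frames_per_drawing)

print(drawings_needed(7))                        # 10080 drawings for a 7-minute short animated "on ones"
print(drawings_needed(7, frames_per_drawing=2))  # 5040 drawings when shooting "on twos", a common cost-saving shortcut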
Limited animation is a way of increasing production and decreasing costs of animation by using "short cuts" in the animation process. This method was pioneered by UPA and popularized by Hanna-Barbera in the United States, and by Osamu Tezuka in Japan, and adapted by other studios as cartoons moved from movie theaters to television.[15] Although most animation studios are now using digital technologies in their productions, there is a specific style of animation that depends on film. Camera-less animation, made famous by film-makers like Norman McLaren, Len Lye, and Stan Brakhage, is painted and drawn directly onto pieces of film, and then run through a projector.
en/1136.html.txt
ADDED
@@ -0,0 +1,116 @@
Film, also called movie, motion picture or moving picture, is a visual art-form used to simulate experiences that communicate ideas, stories, perceptions, feelings, beauty, or atmosphere through the use of moving images. These images are generally accompanied by sound, and more rarely, other sensory stimulations.[1] The word "cinema", short for cinematography, is often used to refer to filmmaking and the film industry, and to the art form that is the result of it.
The moving images of a film are created by photographing actual scenes with a motion-picture camera, by photographing drawings or miniature models using traditional animation techniques, by means of CGI and computer animation, or by a combination of some or all of these techniques, and other visual effects.
Traditionally, films were recorded onto celluloid film stock through a photochemical process and then shown through a movie projector onto a large screen. Contemporary films are often fully digital through the entire process of production, distribution, and exhibition, while films recorded in a photochemical form traditionally included an analog optical soundtrack (a graphic recording of the spoken words, music and other sounds that accompany the images, which runs along a portion of the film exclusively reserved for it and is not projected).
Films are cultural artifacts created by specific cultures. They reflect those cultures, and, in turn, affect them. Film is considered to be an important art form, a source of popular entertainment, and a powerful medium for educating—or indoctrinating—citizens. The visual basis of film gives it a universal power of communication. Some films have become popular worldwide attractions through the use of dubbing or subtitles to translate the dialog into other languages.
The individual images that make up a film are called frames. In the projection of traditional celluloid films, a rotating shutter causes intervals of darkness as each frame, in turn, is moved into position to be projected, but the viewer does not notice the interruptions because of an effect known as persistence of vision, whereby the eye retains a visual image for a fraction of a second after its source disappears. The perception of motion is partly due to a psychological effect called the phi phenomenon.
The name "film" originates from the fact that photographic film (also called film stock) has historically been the medium for recording and displaying motion pictures. Many other terms exist for an individual motion-picture, including picture, picture show, moving picture, photoplay, and flick. The most common term in the United States is movie, while in Europe film is preferred. Common terms for the field in general include the big screen, the silver screen, the movies, and cinema; the last of these is commonly used, as an overarching term, in scholarly texts and critical essays. In early years, the word sheet was sometimes used instead of screen.
The art of film has drawn on several earlier traditions in fields such as oral storytelling, literature, theatre and visual arts. Forms of art and entertainment that had already featured moving and/or projected images include:
The stroboscopic animation principle was introduced in 1833 with the phénakisticope and also applied in the zoetrope since 1866, the flip book since 1868, and the praxinoscope since 1877, before it became the basic principle for cinematography.
Experiments with early phenakisticope-based animation projectors were made at least as early as 1843. Jules Duboscq marketed phénakisticope projection systems in France between 1853 and the 1890s.
Photography was introduced in 1839, but at first photographic emulsions needed such long exposures that the recording of moving subjects seemed impossible. At least as early as 1844, photographic series of subjects posed in different positions have been created to either suggest a motion sequence or to document a range of different viewing angles. The advent of stereoscopic photography, with early experiments in the 1840s and commercial success since the early 1850s, raised interest in completing the photographic medium with the addition of means to capture colour and motion. In 1849, Joseph Plateau published about the idea to combine his invention of the phénakisticope with the stereoscope, as suggested to him by stereoscope inventor Charles Wheatstone, and use photographs of plaster sculptures in different positions to be animated in the combined device. In 1852, Jules Duboscq patented such an instrument as the "Stéréoscope-fantascope, ou Bïoscope". He marginally advertised it for a short period. It was a commercial failure and no complete instrument has yet been located, but one bioscope disc has been preserved in the Plateau collection of the Ghent University. It has stereoscopic photographs of a machine.
By the late 1850s the first examples of instantaneous photography came about and provided hope that motion photography would soon be possible, but it took a few decades before it was successfully combined with a method to record series of sequential images in real-time. In 1878, Eadweard Muybridge eventually managed to take a series of photographs of a running horse with a battery of cameras in a line along the track and published the results as The Horse in Motion on cabinet cards. Muybridge, as well as Étienne-Jules Marey, Ottomar Anschütz and many others would create many more chronophotography studies. Muybridge had the contours of dozens of his chronophotographic series traced onto glass discs and projected them with his zoopraxiscope in his lectures from 1880 to 1895. Anschütz developed his own Electrotachyscope in 1887 to project 24 diapositive photographic images on glass disks as moving images, looped as long as deemed interesting for the audience.
Émile Reynaud already mentioned the possibility of projecting the images in his 1877 patent application for the praxinoscope. He presented a praxinoscope projection device at the Société française de photographie on 4 June 1880, but did not market his praxinoscope as a projection device before 1882. He then further developed the device into the Théâtre Optique, which could project longer sequences with separate backgrounds, patented in 1888. He created several movies for the machine by painting images on hundreds of gelatin plates that were mounted into cardboard frames and attached to a cloth band. From 28 October 1892 to March 1900 Reynaud gave over 12,800 shows to a total of over 500,000 visitors at the Musée Grévin in Paris.
By the end of the 1880s, the introduction of lengths of celluloid photographic film and the invention of motion picture cameras, which could photograph an indefinitely long rapid sequence of images using only one lens, allowed several minutes of action to be captured and stored on a single compact reel of film. Some early films were made to be viewed by one person at a time through a "peep show" device such as the Kinetoscope and the mutoscope. Others were intended for a projector, mechanically similar to the camera and sometimes actually the same machine, which was used to shine an intense light through the processed and printed film and into a projection lens so that these "moving pictures" could be shown tremendously enlarged on a screen for viewing by an entire audience. The first kinetoscope film shown in public exhibition was Blacksmith Scene, produced by Edison Manufacturing Company in 1893. The following year the company would begin Edison Studios, which became an early leader in the film industry with notable early shorts including The Kiss, and would go on to produce close to 1,200 films.
The first public screenings of films at which admission was charged were made in 1895 by the American Woodville Latham and his sons, using films produced by their Eidoloscope company,[2] and by the – arguably better known – French brothers Auguste and Louis Lumière with ten of their own productions.[citation needed] Private screenings had preceded these by several months, with Latham's slightly predating the Lumière brothers'.[citation needed]
The earliest films were simply one static shot that showed an event or action with no editing or other cinematic techniques. Around the turn of the 20th century, films started stringing several scenes together to tell a story. The scenes were later broken up into multiple shots photographed from different distances and angles. Other techniques such as camera movement were developed as effective ways to tell a story with film. Until sound film became commercially practical in the late 1920s, motion pictures were a purely visual art, but these innovative silent films had gained a hold on the public imagination. Rather than leave audiences with only the noise of the projector as an accompaniment, theater owners hired a pianist or organist or, in large urban theaters, a full orchestra to play music that fit the mood of the film at any given moment. By the early 1920s, most films came with a prepared list of sheet music to be used for this purpose, and complete film scores were composed for major productions.
The rise of European cinema was interrupted by the outbreak of World War I, while the film industry in the United States flourished with the rise of Hollywood, typified most prominently by the innovative work of D. W. Griffith in The Birth of a Nation (1915) and Intolerance (1916). However, in the 1920s, European filmmakers such as Eisenstein, F. W. Murnau and Fritz Lang, in many ways inspired by the meteoric wartime progress of film through Griffith, along with the contributions of Charles Chaplin, Buster Keaton and others, quickly caught up with American film-making and continued to further advance the medium.
In the 1920s, the development of electronic sound recording technologies made it practical to incorporate a soundtrack of speech, music and sound effects synchronized with the action on the screen.[citation needed] The resulting sound films were initially distinguished from the usual silent "moving pictures" or "movies" by calling them "talking pictures" or "talkies."[citation needed] The revolution they wrought was swift. By 1930, silent film was practically extinct in the US and already being referred to as "the old medium."[citation needed]
Another major technological development was the introduction of "natural color," which meant color that was photographically recorded from nature rather than added to black-and-white prints by hand-coloring, stencil-coloring or other arbitrary procedures, although the earliest processes typically yielded colors which were far from "natural" in appearance.[citation needed] While the advent of sound films quickly made silent films and theater musicians obsolete, color replaced black-and-white much more gradually.[citation needed] The pivotal innovation was the introduction of the three-strip version of the Technicolor process, first used for animated cartoons in 1932, then also for live-action short films and isolated sequences in a few feature films, then for an entire feature film, Becky Sharp, in 1935. The expense of the process was daunting, but favorable public response in the form of increased box office receipts usually justified the added cost. The number of films made in color slowly increased year after year.
In the early 1950s, the proliferation of black-and-white television started seriously depressing North American theater attendance.[citation needed] In an attempt to lure audiences back into theaters, bigger screens were installed, widescreen processes, polarized 3D projection, and stereophonic sound were introduced, and more films were made in color, which soon became the rule rather than the exception. Some important mainstream Hollywood films were still being made in black-and-white as late as the mid-1960s, but they marked the end of an era. Color television receivers had been available in the US since the mid-1950s, but at first, they were very expensive and few broadcasts were in color. During the 1960s, prices gradually came down, color broadcasts became common, and sales boomed. The overwhelming public verdict in favor of color was clear. After the final flurry of black-and-white films had been released in mid-decade, all Hollywood studio productions were filmed in color, with the usual exceptions made only at the insistence of "star" filmmakers such as Peter Bogdanovich and Martin Scorsese.[citation needed]
The decades following the decline of the studio system in the 1960s saw changes in the production and style of film. Various New Wave movements (including the French New Wave, Indian New Wave, Japanese New Wave, and New Hollywood) and the rise of film-school-educated independent filmmakers contributed to the changes the medium experienced in the latter half of the 20th century. Digital technology has been the driving force for change throughout the 1990s and into the 2000s. Digital 3D projection largely replaced earlier problem-prone 3D film systems and has become popular in the early 2010s.[citation needed]
"Film theory" seeks to develop concise and systematic concepts that apply to the study of film as art. The concept of film as an art-form began in 1911 with Ricciotto Canudo's The Birth of the Sixth Art. Formalist film theory, led by Rudolf Arnheim, Béla Balázs, and Siegfried Kracauer, emphasized how film differed from reality and thus could be considered a valid fine art. André Bazin reacted against this theory by arguing that film's artistic essence lay in its ability to mechanically reproduce reality, not in its differences from reality, and this gave rise to realist theory. More recent analysis spurred by Jacques Lacan's psychoanalysis and Ferdinand de Saussure's semiotics among other things has given rise to psychoanalytic film theory, structuralist film theory, feminist film theory, and others. On the other hand, critics from the analytical philosophy tradition, influenced by Wittgenstein, try to clarify misconceptions used in theoretical studies and produce analysis of a film's vocabulary and its link to a form of life.
Film is considered to have its own language. James Monaco wrote a classic text on film theory, titled "How to Read a Film," that addresses this. Director Ingmar Bergman famously said, "Andrei Tarkovsky for me is the greatest director, the one who invented a new language, true to the nature of film, as it captures life as a reflection, life as a dream." An example of the language is a sequence of back and forth images of one speaking actor's left profile, followed by another speaking actor's right profile, then a repetition of this, which is a language understood by the audience to indicate a conversation. This describes another theory of film, the 180-degree rule, as a visual story-telling device with an ability to place a viewer in a context of being psychologically present through the use of visual composition and editing. The "Hollywood style" includes this narrative theory, due to the overwhelming practice of the rule by movie studios based in Hollywood, California, during film's classical era. Another example of cinematic language is having a shot that zooms in on the forehead of an actor with an expression of silent reflection that cuts to a shot of a younger actor who vaguely resembles the first actor, indicating that the first person is remembering a past self, an edit of compositions that causes a time transition.
Montage is the technique by which separate pieces of film are selected, edited, and then pieced together to make a new section of film. A scene could show a man going into battle, with flashbacks to his youth and to his home-life and with added special effects, placed into the film after filming is complete. As these were all filmed separately, and perhaps with different actors, the final version is called a montage. Directors developed a theory of montage, beginning with Eisenstein and the complex juxtaposition of images in his film Battleship Potemkin.[3] Incorporation of musical and visual counterpoint, and scene development through mise en scene, editing, and effects has led to more complex techniques comparable to those used in opera and ballet.
— Roger Ebert (1986)[4]
Film criticism is the analysis and evaluation of films. In general, these works can be divided into two categories: academic criticism by film scholars and journalistic film criticism that appears regularly in newspapers and other media. Film critics working for newspapers, magazines, and broadcast media mainly review new releases. Normally they only see any given film once and have only a day or two to formulate their opinions. Despite this, critics have an important impact on the audience response and attendance at films, especially those of certain genres. Mass marketed action, horror, and comedy films tend not to be greatly affected by a critic's overall judgment of a film. The plot summary and description of a film and the assessment of the director's and screenwriters' work that makes up the majority of most film reviews can still have an important impact on whether people decide to see a film. For prestige films such as most dramas and art films, the influence of reviews is important. Poor reviews from leading critics at major papers and magazines will often reduce audience interest and attendance.
The impact of a reviewer on a given film's box office performance is a matter of debate. Some observers claim that movie marketing in the 2000s is so intense, well-coordinated and well financed that reviewers cannot prevent a poorly written or filmed blockbuster from attaining market success. However, the cataclysmic failure of some heavily promoted films which were harshly reviewed, as well as the unexpected success of critically praised independent films indicates that extreme critical reactions can have considerable influence. Other observers note that positive film reviews have been shown to spark interest in little-known films. Conversely, there have been several films in which film companies have so little confidence that they refuse to give reviewers an advanced viewing to avoid widespread panning of the film. However, this usually backfires, as reviewers are wise to the tactic and warn the public that the film may not be worth seeing and the films often do poorly as a result. Journalist film critics are sometimes called film reviewers. Critics who take a more academic approach to films, through publishing in film journals and writing books about films using film theory or film studies approaches, study how film and filming techniques work, and what effect they have on people. Rather than having their reviews published in newspapers or appearing on television, their articles are published in scholarly journals or up-market magazines. They also tend to be affiliated with colleges or universities as professors or instructors.
The making and showing of motion pictures became a source of profit almost as soon as the process was invented. Upon seeing how successful their new invention, and its product, was in their native France, the Lumières quickly set about touring the Continent to exhibit the first films privately to royalty and publicly to the masses. In each country, they would normally add new, local scenes to their catalogue and, quickly enough, found local entrepreneurs in the various countries of Europe to buy their equipment and photograph, export, import, and screen additional product commercially. The Oberammergau Passion Play of 1898[citation needed] was the first commercial motion picture ever produced. Other pictures soon followed, and motion pictures became a separate industry that overshadowed the vaudeville world. Dedicated theaters and companies formed specifically to produce and distribute films, while motion picture actors became major celebrities and commanded huge fees for their performances. By 1917 Charlie Chaplin had a contract that called for an annual salary of one million dollars. From 1931 to 1956, film was also the only image storage and playback system for television programming until the introduction of videotape recorders.
In the United States, much of the film industry is centered around Hollywood, California. Other regional centers exist in many parts of the world, such as Mumbai-centered Bollywood, the Indian film industry's Hindi cinema which produces the largest number of films in the world.[5] Though the expense involved in making films has led cinema production to concentrate under the auspices of movie studios, recent advances in affordable film making equipment have allowed independent film productions to flourish.
Profit is a key force in the industry, due to the costly and risky nature of filmmaking; many films have large cost overruns, an example being Kevin Costner's Waterworld. Yet many filmmakers strive to create works of lasting social significance. The Academy Awards (also known as "the Oscars") are the most prominent film awards in the United States, providing recognition each year to films, based on their artistic merits. There is also a large industry for educational and instructional films made in lieu of or in addition to lectures and texts. Revenue in the industry is sometimes volatile due to the reliance on blockbuster films released in movie theaters. The rise of alternative home entertainment has raised questions about the future of the cinema industry, and Hollywood employment has become less reliable, particularly for medium and low-budget films.[6]
Derivative academic fields of study may both interact with and develop independently of filmmaking, as in film theory and analysis. Fields of academic study have been created that are derivative or dependent on the existence of film, such as film criticism, film history, divisions of film propaganda in authoritarian governments, or psychological studies on subliminal effects (e.g., of a flashing soda can during a screening). These fields may further create derivative fields, such as a movie review section in a newspaper or a television guide. Sub-industries can spin off from film, such as popcorn makers and film-related toys (e.g., Star Wars figures). Sub-industries of pre-existing industries may deal specifically with film, such as product placement and other advertising within films.
The terminology used for describing motion pictures varies considerably between British and American English. In British usage, the name of the medium is "film". The word "movie" is understood but seldom used.[7][8] Additionally, "the pictures" (plural) is used semi-frequently to refer to the place where movies are exhibited, while in American English this may be called "the movies", but it is becoming outdated. In other countries, the place where movies are exhibited may be called a cinema or movie theatre. By contrast, in the United States, "movie" is the predominant form. Although the words "film" and "movie" are sometimes used interchangeably, "film" is more often used when considering artistic, theoretical, or technical aspects. The term "movies" more often refers to entertainment or commercial aspects, as in where to go for a fun evening on a date. For example, a book titled "How to Understand a Film" would probably be about the aesthetics or theory of film, while a book entitled "Let's Go to the Movies" would probably be about the history of entertaining movies and blockbusters.
Further terminology is used to distinguish various forms and media used in the film industry. "Motion pictures" and "moving pictures" are frequently used terms for film and movie productions specifically intended for theatrical exhibition, such as, for instance, Batman. "DVD" and "videotape" are video formats that can reproduce a photochemical film. A reproduction based on such is called a "transfer." After the advent of theatrical film as an industry, the television industry began using videotape as a recording medium. For many decades, tape was solely an analog medium onto which moving images could be either recorded or transferred. "Film" and "filming" refer to the photochemical medium that chemically records a visual image and the act of recording respectively. However, the act of shooting images with other visual media, such as with a digital camera, is still called "filming" and the resulting works often called "films" as interchangeable to "movies," despite not being shot on film. "Silent films" need not be utterly silent, but are films and movies without an audible dialogue, including those that have a musical accompaniment. The word, "Talkies," refers to the earliest sound films created to have audible dialogue recorded for playback along with the film, regardless of a musical accompaniment. "Cinema" either broadly encompasses both films and movies, or it is roughly synonymous with film and theatrical exhibition, and both are capitalized when referring to a category of art. The "silver screen" refers to the projection screen used to exhibit films and, by extension, is also used as a metonym for the entire film industry.
"Widescreen" refers to a larger width to height in the frame, compared to earlier historic aspect ratios.[9] A "feature-length film", or "feature film", is of a conventional full length, usually 60 minutes or more, and can commercially stand by itself without other films in a ticketed screening.[10] A "short" is a film that is not as long as a feature-length film, often screened with other shorts, or preceding a feature-length film. An "independent" is a film made outside the conventional film industry.
In US usage, one talks of a "screening" or "projection" of a movie or video on a screen at a public or private "theater." In British English, a "film showing" happens at a cinema (never a "theatre", which is a different medium and place altogether).[8] A cinema usually refers to an arena designed specifically to exhibit films, where the screen is affixed to a wall, while a theater usually refers to a place where live, non-recorded action or combination thereof occurs from a podium or other type of stage, including the amphitheater. Theaters can still screen movies in them, though the theater would be retrofitted to do so. One might propose "going to the cinema" when referring to the activity, or sometimes "to the pictures" in British English, whereas the US expression is usually "going to the movies." A cinema usually shows a mass-marketed movie using a front-projection screen process with either a film projector or, more recently, with a digital projector. But, cinemas may also show theatrical movies from their home video transfers that include Blu-ray Disc, DVD, and videocassette when they possess sufficient projection quality or based upon need, such as movies that exist only in their transferred state, which may be due to the loss or deterioration of the film master and prints from which the movie originally existed. Due to the advent of digital film production and distribution, physical film might be absent entirely. A "double feature" is a screening of two independently marketed, stand-alone feature films. A "viewing" is a watching of a film. "Sales" and "at the box office" refer to tickets sold at a theater, or more currently, rights sold for individual showings. A "release" is the distribution and often simultaneous screening of a film. A "preview" is a screening in advance of the main release.
Any film may also have a "sequel", which portrays events following those in the film. Bride of Frankenstein is an early example. When more than one film shares the same characters, story arcs, or subject themes, these movies become a "series," such as the James Bond series. Existing outside a specific story timeline usually does not exclude a film from being part of a series. A film that portrays events occurring earlier in a timeline than those in another film, but is released after that film, is sometimes called a "prequel," an example being Butch and Sundance: The Early Days.
The "credits," or "end credits," is a list that gives credit to the people involved in the production of a film. Films from before the 1970s usually start a film with credits, often ending with only a title card, saying "The End" or some equivalent, often an equivalent that depends on the language of the production[citation needed]. From then onward, a film's credits usually appear at the end of most films. However, films with credits that end a film often repeat some credits at or near the start of a film and therefore appear twice, such as that film's acting leads, while less frequently some appearing near or at the beginning only appear there, not at the end, which often happens to the director's credit. The credits appearing at or near the beginning of a film are usually called "titles" or "beginning titles." A post-credits scene is a scene shown after the end of the credits. Ferris Bueller's Day Off has a post-credit scene in which Ferris tells the audience that the film is over and they should go home.
A film's "cast" refers to a collection of the actors and actresses who appear, or "star," in a film. A star is an actor or actress, often a popular one, and in many cases, a celebrity who plays a central character in a film. Occasionally the word can also be used to refer to the fame of other members of the crew, such as a director or other personality, such as Martin Scorsese. A "crew" is usually interpreted as the people involved in a film's physical construction outside cast participation, and it could include directors, film editors, photographers, grips, gaffers, set decorators, prop masters, and costume designers. A person can both be part of a film's cast and crew, such as Woody Allen, who directed and starred in Take the Money and Run.
A "film goer," "movie goer," or "film buff" is a person who likes or often attends films and movies, and any of these, though more often the latter, could also see oneself as a student to films and movies or the filmic process. Intense interest in films, film theory, and film criticism, is known as cinephilia. A film enthusiast is known as a cinephile or cineaste.
A preview performance refers to a showing of a film to a select audience, usually for the purposes of corporate promotions, before the public film premiere itself. Previews are sometimes used to judge audience reaction, which if unexpectedly negative, may result in recutting or even refilming certain sections based on the audience response. One example of a film that was changed after a negative response from the test screening is 1982's First Blood. After the test audience responded very negatively to the death of protagonist John Rambo, a Vietnam veteran, at the end of the film, the company wrote and re-shot a new ending in which the character survives.[11]
Trailers or previews are advertisements for films that will be shown in 1 to 3 months at a cinema. Back in the early days of cinema, with theaters that had only one or two screens, only certain trailers were shown for the films that were going to be shown there. Later, when theaters added more screens or new theaters were built with a lot of screens, all different trailers were shown even if they weren't going to play that film in that theater. Film studios realized that the more trailers that were shown (even if it wasn't going to be shown in that particular theater) the more patrons would go to a different theater to see the film when it came out. The term "trailer" comes from their having originally been shown at the end of a film program. That practice did not last long because patrons tended to leave the theater after the films ended, but the name has stuck. Trailers are now shown before the film (or the "A film" in a double feature program) begins. Film trailers are also common on DVDs and Blu-ray Discs, as well as on the Internet and mobile devices. Trailers are created to be engaging and interesting for viewers. As a result, in the Internet era, viewers often seek out trailers to watch them. Of the ten billion videos watched online annually in 2008, film trailers ranked third, after news and user-created videos.[12] Teasers are a much shorter preview or advertisement that lasts only 10 to 30 seconds. Teasers are used to get patrons excited about a film coming out in the next six to twelve months. Teasers may be produced even before the film production is completed.
Film is used for a range of goals, including education and propaganda. When the purpose is primarily educational, a film is called an "educational film". Examples are recordings of academic lectures and experiments, or a film based on a classic novel. Film may be propaganda, in whole or in part, such as the films made by Leni Riefenstahl in Nazi Germany, US war film trailers during World War II, or artistic films made under Stalin by Sergei Eisenstein. They may also be works of political protest, as in the films of Andrzej Wajda, or more subtly, the films of Andrei Tarkovsky. The same film may be considered educational by some, and propaganda by others as the categorization of a film can be subjective.
At its core, the means to produce a film depend on the content the filmmaker wishes to show, and the apparatus for displaying it: the zoetrope merely requires a series of images on a strip of paper. Film production can, therefore, take as little as one person with a camera (or even without a camera, as in Stan Brakhage's 1963 film Mothlight), or thousands of actors, extras, and crew members for a live-action, feature-length epic.
The necessary steps for almost any film can be boiled down to conception, planning, execution, revision, and distribution. The more involved the production, the more significant each of the steps becomes. In a typical production cycle of a Hollywood-style film, these main stages are defined as development, pre-production, production, post-production and distribution.
This production cycle usually takes three years. The first year is taken up with development. The second year comprises preproduction and production. The third year, post-production and distribution. The bigger the production, the more resources it takes, and the more important financing becomes; most feature films are artistic works from the creators' perspective (e.g., film director, cinematographer, screenwriter) and for-profit business entities for the production companies.
A film crew is a group of people hired by a film company, employed during the "production" or "photography" phase, for the purpose of producing a film or motion picture. Crew is distinguished from cast, who are the actors who appear in front of the camera or provide voices for characters in the film. The crew interacts with but is also distinct from the production staff, consisting of producers, managers, company representatives, their assistants, and those whose primary responsibility falls in pre-production or post-production phases, such as screenwriters and film editors. Communication between production and crew generally passes through the director and his/her staff of assistants. Medium-to-large crews are generally divided into departments with well-defined hierarchies and standards for interaction and cooperation between the departments. Other than acting, the crew handles everything in the photography phase: props and costumes, shooting, sound, electrics (i.e., lights), sets, and production special effects. Caterers (known in the film industry as "craft services") are usually not considered part of the crew.
Film stock consists of transparent celluloid, acetate, or polyester base coated with an emulsion containing light-sensitive chemicals. Cellulose nitrate was the first type of film base used to record motion pictures, but due to its flammability was eventually replaced by safer materials. Stock widths and the film format for images on the reel have had a rich history, though most large commercial films are still shot on 35 mm film and distributed to theaters as 35 mm prints.
Originally, moving picture film was shot and projected at various speeds using hand-cranked cameras and projectors; though 1,000 frames per minute (16⅔ frame/s) is generally cited as a standard silent speed, research indicates most films were shot between 16 frame/s and 23 frame/s and projected from 18 frame/s on up (often reels included instructions on how fast each scene should be shown).[13] When sound film was introduced in the late 1920s, a constant speed was required for the sound head. 24 frames per second was chosen because it was the slowest (and thus cheapest) speed that allowed for sufficient sound quality.[citation needed] Improvements since the late 19th century include the mechanization of cameras, allowing them to record at a consistent speed; quiet camera design, allowing sound recorded on-set to be usable without requiring large "blimps" to encase the camera; the invention of more sophisticated film stocks and lenses, allowing directors to film in increasingly dim conditions; and the development of synchronized sound, allowing sound to be recorded at exactly the same speed as its corresponding action. The soundtrack can be recorded separately from shooting the film, but for live-action pictures, many parts of the soundtrack are usually recorded simultaneously.
As a medium, film is not limited to motion pictures, since the technology developed as the basis for photography. It can be used to present a progressive sequence of still images in the form of a slideshow. Film has also been incorporated into multimedia presentations and often has importance as primary historical documentation. However, historic films have problems in terms of preservation and storage, and the motion picture industry is exploring many alternatives. Most films on cellulose nitrate base have been copied onto modern safety films. Some studios save color films through the use of separation masters: three B&W negatives each exposed through red, green, or blue filters (essentially a reverse of the Technicolor process). Digital methods have also been used to restore films, although their continued obsolescence cycle makes them (as of 2006) a poor choice for long-term preservation. Film preservation of decaying film stock is a matter of concern to both film historians and archivists and to companies interested in preserving their existing products in order to make them available to future generations (and thereby increase revenue). Preservation is generally a higher concern for nitrate and single-strip color films, due to their high decay rates; black-and-white films on safety bases and color films preserved on Technicolor imbibition prints tend to keep up much better, assuming proper handling and storage.
Some films in recent decades have been recorded using analog video technology similar to that used in television production. Modern digital video cameras and digital projectors are gaining ground as well. These approaches are preferred by some film-makers, especially because footage shot with digital cinema can be evaluated and edited with non-linear editing systems (NLE) without waiting for the film stock to be processed. The migration was gradual, and as of 2005, most major motion pictures were still shot on film.[needs update]
Independent filmmaking often takes place outside Hollywood, or other major studio systems. An independent film (or indie film) is a film initially produced without financing or distribution from a major film studio. Creative, business and technological reasons have all contributed to the growth of the indie film scene in the late 20th and early 21st century. On the business side, the costs of big-budget studio films also lead to conservative choices in cast and crew. There is a trend in Hollywood towards co-financing (over two-thirds of the films put out by Warner Bros. in 2000 were joint ventures, up from 10% in 1987).[14] A hopeful director is almost never given the opportunity to get a job on a big-budget studio film unless he or she has significant industry experience in film or television. Also, the studios rarely produce films with unknown actors, particularly in lead roles.
Before the advent of digital alternatives, the cost of professional film equipment and stock was also a hurdle to being able to produce, direct, or star in a traditional studio film. But the advent of consumer camcorders in 1985, and more importantly, the arrival of high-resolution digital video in the early 1990s, have lowered the technology barrier to film production significantly. Both production and post-production costs have been significantly lowered; in the 2000s, the hardware and software for post-production can be installed in a commodity-based personal computer. Technologies such as DVDs, FireWire connections and a wide variety of professional and consumer-grade video editing software make film-making relatively affordable.
Since the introduction of digital video (DV) technology, the means of production have become more democratized. Filmmakers can conceivably shoot a film with a digital video camera and edit the film, create and edit the sound and music, and mix the final cut on a high-end home computer. However, while the means of production may be democratized, financing, distribution, and marketing remain difficult to accomplish outside the traditional system. Most independent filmmakers rely on film festivals to get their films noticed and sold for distribution. The arrival of internet-based video websites such as YouTube and Veoh has further changed the filmmaking landscape, enabling indie filmmakers to make their films available to the public.
An open content film is much like an independent film, but it is produced through open collaborations; its source material is available under a license that is more permissive than traditional copyright, allowing other parties to create fan fiction or derivative works. Like independent filmmaking, open source filmmaking takes place outside Hollywood or other major studio systems.
A fan film is a film or video inspired by a film, television program, comic book or a similar source, created by fans rather than by the source's copyright holders or creators. Fan filmmakers have traditionally been amateurs, but some of the most notable films have actually been produced by professional filmmakers as film school class projects or as demonstration reels. Fan films vary tremendously in length, from short faux-teaser trailers for non-existent motion pictures to rarer full-length motion pictures.
Film distribution is the process through which a film is made available for viewing by an audience. This is normally the task of a professional film distributor, who would determine the marketing strategy of the film, the media by which a film is to be exhibited or made available for viewing, and may set the release date and other matters. The film may be exhibited directly to the public either through a movie theater (historically the main way films were distributed) or television for personal home viewing (including on DVD-Video or Blu-ray Disc, video-on-demand, online downloading, television programs through broadcast syndication etc.). Other ways of distributing a film include rental or personal purchase of the film in a variety of media and formats, such as VHS tape or DVD, or Internet downloading or streaming using a computer.
Animation is a technique in which each frame of a film is produced individually, whether generated as a computer graphic, or by photographing a drawn image, or by repeatedly making small changes to a model unit (see claymation and stop motion), and then photographing the result with a special animation camera. When the frames are strung together and the resulting film is viewed at a speed of 16 or more frames per second, there is an illusion of continuous movement (due to the phi phenomenon). Generating such a film is very labor-intensive and tedious, though the development of computer animation has greatly sped up the process. Because animation is very time-consuming and often very expensive to produce, the majority of animation for TV and films comes from professional animation studios. However, the field of independent animation has existed at least since the 1950s, with animation being produced by independent studios (and sometimes by a single person). Several independent animation producers have gone on to enter the professional animation industry.
Limited animation is a way of increasing production and decreasing costs of animation by using "short cuts" in the animation process. This method was pioneered by UPA and popularized by Hanna-Barbera in the United States, and by Osamu Tezuka in Japan, and adapted by other studios as cartoons moved from movie theaters to television.[15] Although most animation studios are now using digital technologies in their productions, there is a specific style of animation that depends on film. Camera-less animation, made famous by film-makers like Norman McLaren, Len Lye, and Stan Brakhage, is painted and drawn directly onto pieces of film, and then run through a projector.
en/1137.html.txt
ADDED
@@ -0,0 +1,116 @@
Film, also called movie, motion picture or moving picture, is a visual art-form used to simulate experiences that communicate ideas, stories, perceptions, feelings, beauty, or atmosphere through the use of moving images. These images are generally accompanied by sound, and more rarely, other sensory stimulations.[1] The word "cinema", short for cinematography, is often used to refer to filmmaking and the film industry, and to the art form that is the result of it.
The moving images of a film are created by photographing actual scenes with a motion-picture camera, by photographing drawings or miniature models using traditional animation techniques, by means of CGI and computer animation, or by a combination of some or all of these techniques, and other visual effects.
Traditionally, films were recorded onto celluloid film stock through a photochemical process and then shown through a movie projector onto a large screen. Contemporary films are often fully digital through the entire process of production, distribution, and exhibition, while films recorded in a photochemical form traditionally included an analog optical soundtrack (a graphic recording of the spoken words, music and other sounds that accompany the images, which runs along a portion of the film exclusively reserved for it and is not projected).
Films are cultural artifacts created by specific cultures. They reflect those cultures, and, in turn, affect them. Film is considered to be an important art form, a source of popular entertainment, and a powerful medium for educating—or indoctrinating—citizens. The visual basis of film gives it a universal power of communication. Some films have become popular worldwide attractions through the use of dubbing or subtitles to translate the dialog into other languages.
The individual images that make up a film are called frames. In the projection of traditional celluloid films, a rotating shutter causes intervals of darkness as each frame, in turn, is moved into position to be projected, but the viewer does not notice the interruptions because of an effect known as persistence of vision, whereby the eye retains a visual image for a fraction of a second after its source disappears. The perception of motion is partly due to a psychological effect called the phi phenomenon.
The name "film" originates from the fact that photographic film (also called film stock) has historically been the medium for recording and displaying motion pictures. Many other terms exist for an individual motion-picture, including picture, picture show, moving picture, photoplay, and flick. The most common term in the United States is movie, while in Europe film is preferred. Common terms for the field in general include the big screen, the silver screen, the movies, and cinema; the last of these is commonly used, as an overarching term, in scholarly texts and critical essays. In early years, the word sheet was sometimes used instead of screen.
|
16 |
+
|
17 |
+
The art of film has drawn on several earlier traditions in fields such as oral storytelling, literature, theatre and visual arts. Forms of art and entertainment that had already featured moving and/or projected images include:
|
18 |
+
|
19 |
+
The stroboscopic animation principle was introduced in 1833 with the phénakisticope and also applied in the zoetrope since 1866, the flip book since 1868, and the praxinoscope since 1877, before it became the basic principle for cinematography.
|
20 |
+
|
21 |
+
Experiments with early phenakisticope-based animation projectors were made at least as early as 1843. Jules Duboscq marketed phénakisticope projection systems in France between 1853 and the 1890s.
|
22 |
+
|
23 |
+
Photography was introduced in 1839, but at first photographic emulsions needed such long exposures that the recording of moving subjects seemed impossible. At least as early as 1844, photographic series of subjects posed in different positions have been created to either suggest a motion sequence or to document a range of different viewing angles. The advent of stereoscopic photography, with early experiments in the 1840s and commercial success since the early 1850s, raised interest in completing the photographic medium with the addition of means to capture colour and motion. In 1849, Joseph Plateau published about the idea to combine his invention of the phénakisticope with the stereoscope, as suggested to him by stereoscope inventor Charles Wheatstone, and use photographs of plaster sculptures in different positions to be animated in the combined device. In 1852, Jules Duboscq patented such an instrument as the "Stéréoscope-fantascope, ou Bïoscope". He marginally advertised it for a short period. It was a commercial failure and no complete instrument has yet been located, but one bioscope disc has been preserved in the Plateau collection of the Ghent University. It has stereoscopic photographs of a machine.
|
24 |
+
|
25 |
+
By the late 1850s the first examples of instantaneous photography came about and provided hope that motion photography would soon be possible, but it took a few decades before it was successfully combined with a method to record series of sequential images in real-time. In 1878, Eadweard Muybridge eventually managed to take a series of photographs of a running horse with a battery of cameras in a line along the track and published the results as The Horse in Motion on cabinet cards. Muybridge, as well as Étienne-Jules Marey, Ottomar Anschütz and many others would create many more chronophotography studies. Muybridge had the contours of dozens of his chronophotographic series traced onto glass discs and projected them with his zoopraxiscope in his lectures from 1880 to 1895. Anschütz developed his own Electrotachyscope in 1887 to project 24 diapositive photographic images on glass disks as moving images, looped as long as deemed interesting for the audience.
|
26 |
+
|
27 |
+
Émile Reynaud already mentioned the possibility of projecting the images in his 1877 patent application for the praxinoscope. He presented a praxinoscope projection device at the Société française de photographie on 4 June 1880, but did not market his praxinoscope as a projection device before 1882. He then further developed the device into the Théâtre Optique, which could project longer sequences with separate backgrounds, patented in 1888. He created several movies for the machine by painting images on hundreds of gelatin plates that were mounted into cardboard frames and attached to a cloth band. From 28 October 1892 to March 1900 Reynaud gave over 12,800 shows to a total of over 500,000 visitors at the Musée Grévin in Paris.
|
28 |
+
|
29 |
+
By the end of the 1880s, the introduction of lengths of celluloid photographic film and the invention of motion picture cameras, which could photograph an indefinitely long rapid sequence of images using only one lens, allowed several minutes of action to be captured and stored on a single compact reel of film. Some early films were made to be viewed by one person at a time through a "peep show" device such as the Kinetoscope and the mutoscope. Others were intended for a projector, mechanically similar to the camera and sometimes actually the same machine, which was used to shine an intense light through the processed and printed film and into a projection lens so that these "moving pictures" could be shown tremendously enlarged on a screen for viewing by an entire audience. The first kinetoscope film shown in public exhibition was Blacksmith Scene, produced by Edison Manufacturing Company in 1893. The following year the company would begin Edison Studios, which became an early leader in the film industry with notable early shorts including The Kiss, and would go on to produce close to 1,200 films.
|
30 |
+
|
31 |
+
The first public screenings of films at which admission was charged were made in 1895 by the American Woodville Latham and his sons, using films produced by their Eidoloscope company,[2] and by the – arguably better known – French brothers Auguste and Louis Lumière with ten of their own productions.[citation needed] Private screenings had preceded these by several months, with Latham's slightly predating the Lumière brothers'.[citation needed]
|
32 |
+
|
33 |
+
The earliest films were simply one static shot that showed an event or action with no editing or other cinematic techniques. Around the turn of the 20th century, films started stringing several scenes together to tell a story. The scenes were later broken up into multiple shots photographed from different distances and angles. Other techniques such as camera movement were developed as effective ways to tell a story with film. Until sound film became commercially practical in the late 1920s, motion pictures were a purely visual art, but these innovative silent films had gained a hold on the public imagination. Rather than leave audiences with only the noise of the projector as an accompaniment, theater owners hired a pianist or organist or, in large urban theaters, a full orchestra to play music that fit the mood of the film at any given moment. By the early 1920s, most films came with a prepared list of sheet music to be used for this purpose, and complete film scores were composed for major productions.
|
34 |
+
|
35 |
+
The rise of European cinema was interrupted by the outbreak of World War I, while the film industry in the United States flourished with the rise of Hollywood, typified most prominently by the innovative work of D. W. Griffith in The Birth of a Nation (1915) and Intolerance (1916). However, in the 1920s, European filmmakers such as Eisenstein, F. W. Murnau and Fritz Lang, in many ways inspired by the meteoric wartime progress of film through Griffith, along with the contributions of Charles Chaplin, Buster Keaton and others, quickly caught up with American film-making and continued to further advance the medium.
|
36 |
+
|
37 |
+
In the 1920s, the development of electronic sound recording technologies made it practical to incorporate a soundtrack of speech, music and sound effects synchronized with the action on the screen.[citation needed] The resulting sound films were initially distinguished from the usual silent "moving pictures" or "movies" by calling them "talking pictures" or "talkies."[citation needed] The revolution they wrought was swift. By 1930, silent film was practically extinct in the US and already being referred to as "the old medium."[citation needed]
|
38 |
+
|
39 |
+
Another major technological development was the introduction of "natural color," which meant color that was photographically recorded from nature rather than added to black-and-white prints by hand-coloring, stencil-coloring or other arbitrary procedures, although the earliest processes typically yielded colors which were far from "natural" in appearance.[citation needed] While the advent of sound films quickly made silent films and theater musicians obsolete, color replaced black-and-white much more gradually.[citation needed] The pivotal innovation was the introduction of the three-strip version of the Technicolor process, first used for animated cartoons in 1932, then also for live-action short films and isolated sequences in a few feature films, then for an entire feature film, Becky Sharp, in 1935. The expense of the process was daunting, but favorable public response in the form of increased box office receipts usually justified the added cost. The number of films made in color slowly increased year after year.
|
40 |
+
|
41 |
+
In the early 1950s, the proliferation of black-and-white television started seriously depressing North American theater attendance.[citation needed] In an attempt to lure audiences back into theaters, bigger screens were installed, widescreen processes, polarized 3D projection, and stereophonic sound were introduced, and more films were made in color, which soon became the rule rather than the exception. Some important mainstream Hollywood films were still being made in black-and-white as late as the mid-1960s, but they marked the end of an era. Color television receivers had been available in the US since the mid-1950s, but at first, they were very expensive and few broadcasts were in color. During the 1960s, prices gradually came down, color broadcasts became common, and sales boomed. The overwhelming public verdict in favor of color was clear. After the final flurry of black-and-white films had been released in mid-decade, all Hollywood studio productions were filmed in color, with the usual exceptions made only at the insistence of "star" filmmakers such as Peter Bogdanovich and Martin Scorsese.[citation needed]
|
42 |
+
|
43 |
+
The decades following the decline of the studio system in the 1960s saw changes in the production and style of film. Various New Wave movements (including the French New Wave, Indian New Wave, Japanese New Wave, and New Hollywood) and the rise of film-school-educated independent filmmakers contributed to the changes the medium experienced in the latter half of the 20th century. Digital technology has been the driving force for change throughout the 1990s and into the 2000s. Digital 3D projection largely replaced earlier problem-prone 3D film systems and has become popular in the early 2010s.[citation needed]
|
44 |
+
|
45 |
+
"Film theory" seeks to develop concise and systematic concepts that apply to the study of film as art. The concept of film as an art-form began in 1911 with Ricciotto Canudo's The Birth of the Sixth Art. Formalist film theory, led by Rudolf Arnheim, Béla Balázs, and Siegfried Kracauer, emphasized how film differed from reality and thus could be considered a valid fine art. André Bazin reacted against this theory by arguing that film's artistic essence lay in its ability to mechanically reproduce reality, not in its differences from reality, and this gave rise to realist theory. More recent analysis spurred by Jacques Lacan's psychoanalysis and Ferdinand de Saussure's semiotics among other things has given rise to psychoanalytic film theory, structuralist film theory, feminist film theory, and others. On the other hand, critics from the analytical philosophy tradition, influenced by Wittgenstein, try to clarify misconceptions used in theoretical studies and produce analysis of a film's vocabulary and its link to a form of life.
|
46 |
+
|
47 |
+
Film is considered to have its own language. James Monaco wrote a classic text on film theory, titled "How to Read a Film," that addresses this. Director Ingmar Bergman famously said, "Andrei Tarkovsky for me is the greatest director, the one who invented a new language, true to the nature of film, as it captures life as a reflection, life as a dream." An example of the language is a sequence of back and forth images of one speaking actor's left profile, followed by another speaking actor's right profile, then a repetition of this, which is a language understood by the audience to indicate a conversation. This describes another theory of film, the 180-degree rule, as a visual story-telling device with an ability to place a viewer in a context of being psychologically present through the use of visual composition and editing. The "Hollywood style" includes this narrative theory, due to the overwhelming practice of the rule by movie studios based in Hollywood, California, during film's classical era. Another example of cinematic language is having a shot that zooms in on the forehead of an actor with an expression of silent reflection that cuts to a shot of a younger actor who vaguely resembles the first actor, indicating that the first person is remembering a past self, an edit of compositions that causes a time transition.
|
48 |
+
|
49 |
+
Montage is the technique by which separate pieces of film are selected, edited, and then pieced together to make a new section of film. A scene could show a man going into battle, with flashbacks to his youth and to his home-life and with added special effects, placed into the film after filming is complete. As these were all filmed separately, and perhaps with different actors, the final version is called a montage. Directors developed a theory of montage, beginning with Eisenstein and the complex juxtaposition of images in his film Battleship Potemkin.[3] Incorporation of musical and visual counterpoint, and scene development through mise en scene, editing, and effects has led to more complex techniques comparable to those used in opera and ballet.
|
50 |
+
|
51 |
+
— Roger Ebert (1986)[4]
|
52 |
+
|
53 |
+
Film criticism is the analysis and evaluation of films. In general, these works can be divided into two categories: academic criticism by film scholars and journalistic film criticism that appears regularly in newspapers and other media. Film critics working for newspapers, magazines, and broadcast media mainly review new releases. Normally they only see any given film once and have only a day or two to formulate their opinions. Despite this, critics have an important impact on the audience response and attendance at films, especially those of certain genres. Mass marketed action, horror, and comedy films tend not to be greatly affected by a critic's overall judgment of a film. The plot summary and description of a film and the assessment of the director's and screenwriters' work that makes up the majority of most film reviews can still have an important impact on whether people decide to see a film. For prestige films such as most dramas and art films, the influence of reviews is important. Poor reviews from leading critics at major papers and magazines will often reduce audience interest and attendance.
|
54 |
+
|
55 |
+
The impact of a reviewer on a given film's box office performance is a matter of debate. Some observers claim that movie marketing in the 2000s is so intense, well-coordinated and well financed that reviewers cannot prevent a poorly written or filmed blockbuster from attaining market success. However, the cataclysmic failure of some heavily promoted films which were harshly reviewed, as well as the unexpected success of critically praised independent films indicates that extreme critical reactions can have considerable influence. Other observers note that positive film reviews have been shown to spark interest in little-known films. Conversely, there have been several films in which film companies have so little confidence that they refuse to give reviewers an advanced viewing to avoid widespread panning of the film. However, this usually backfires, as reviewers are wise to the tactic and warn the public that the film may not be worth seeing and the films often do poorly as a result. Journalist film critics are sometimes called film reviewers. Critics who take a more academic approach to films, through publishing in film journals and writing books about films using film theory or film studies approaches, study how film and filming techniques work, and what effect they have on people. Rather than having their reviews published in newspapers or appearing on television, their articles are published in scholarly journals or up-market magazines. They also tend to be affiliated with colleges or universities as professors or instructors.
|
56 |
+
|
57 |
+
The making and showing of motion pictures became a source of profit almost as soon as the process was invented. Upon seeing how successful their new invention, and its product, was in their native France, the Lumières quickly set about touring the Continent to exhibit the first films privately to royalty and publicly to the masses. In each country, they would normally add new, local scenes to their catalogue and, quickly enough, found local entrepreneurs in the various countries of Europe to buy their equipment and photograph, export, import, and screen additional product commercially. The Oberammergau Passion Play of 1898[citation needed] was the first commercial motion picture ever produced. Other pictures soon followed, and motion pictures became a separate industry that overshadowed the vaudeville world. Dedicated theaters and companies formed specifically to produce and distribute films, while motion picture actors became major celebrities and commanded huge fees for their performances. By 1917 Charlie Chaplin had a contract that called for an annual salary of one million dollars. From 1931 to 1956, film was also the only image storage and playback system for television programming until the introduction of videotape recorders.
|
58 |
+
|
59 |
+
In the United States, much of the film industry is centered around Hollywood, California. Other regional centers exist in many parts of the world, such as Mumbai-centered Bollywood, the Indian film industry's Hindi cinema which produces the largest number of films in the world.[5] Though the expense involved in making films has led cinema production to concentrate under the auspices of movie studios, recent advances in affordable film making equipment have allowed independent film productions to flourish.
|
60 |
+
|
61 |
+
Profit is a key force in the industry, due to the costly and risky nature of filmmaking; many films have large cost overruns, an example being Kevin Costner's Waterworld. Yet many filmmakers strive to create works of lasting social significance. The Academy Awards (also known as "the Oscars") are the most prominent film awards in the United States, providing recognition each year to films, based on their artistic merits. There is also a large industry for educational and instructional films made in lieu of or in addition to lectures and texts. Revenue in the industry is sometimes volatile due to the reliance on blockbuster films released in movie theaters. The rise of alternative home entertainment has raised questions about the future of the cinema industry, and Hollywood employment has become less reliable, particularly for medium and low-budget films.[6]
|
62 |
+
|
63 |
+
Derivative academic fields of study may both interact with and develop independently of filmmaking, as in film theory and analysis. Fields of academic study have been created that are derivative or dependent on the existence of film, such as film criticism, film history, divisions of film propaganda in authoritarian governments, or psychological studies of subliminal effects (e.g., of a flashing soda can during a screening). These fields may further create derivative fields, such as a movie review section in a newspaper or a television guide. Sub-industries can spin off from film, such as popcorn makers and film-related toys (e.g., Star Wars figures). Sub-industries of pre-existing industries may deal specifically with film, such as product placement and other advertising within films.
|
64 |
+
|
65 |
+
The terminology used for describing motion pictures varies considerably between British and American English. In British usage, the name of the medium is "film". The word "movie" is understood but seldom used.[7][8] Additionally, "the pictures" (plural) is used semi-frequently to refer to the place where movies are exhibited, while in American English this may be called "the movies", but it is becoming outdated. In other countries, the place where movies are exhibited may be called a cinema or movie theatre. By contrast, in the United States, "movie" is the predominant form. Although the words "film" and "movie" are sometimes used interchangeably, "film" is more often used when considering artistic, theoretical, or technical aspects. The term "movies" more often refers to entertainment or commercial aspects, as in where to go for a fun evening on a date. For example, a book titled "How to Understand a Film" would probably be about the aesthetics or theory of film, while a book entitled "Let's Go to the Movies" would probably be about the history of entertaining movies and blockbusters.
|
66 |
+
|
67 |
+
Further terminology is used to distinguish various forms and media used in the film industry. "Motion pictures" and "moving pictures" are frequently used terms for film and movie productions specifically intended for theatrical exhibition, such as, for instance, Batman. "DVD" and "videotape" are video formats that can reproduce a photochemical film. A reproduction based on such is called a "transfer." After the advent of theatrical film as an industry, the television industry began using videotape as a recording medium. For many decades, tape was solely an analog medium onto which moving images could be either recorded or transferred. "Film" and "filming" refer to the photochemical medium that chemically records a visual image and the act of recording respectively. However, the act of shooting images with other visual media, such as with a digital camera, is still called "filming" and the resulting works often called "films" as interchangeable to "movies," despite not being shot on film. "Silent films" need not be utterly silent, but are films and movies without an audible dialogue, including those that have a musical accompaniment. The word, "Talkies," refers to the earliest sound films created to have audible dialogue recorded for playback along with the film, regardless of a musical accompaniment. "Cinema" either broadly encompasses both films and movies, or it is roughly synonymous with film and theatrical exhibition, and both are capitalized when referring to a category of art. The "silver screen" refers to the projection screen used to exhibit films and, by extension, is also used as a metonym for the entire film industry.
|
68 |
+
|
69 |
+
"Widescreen" refers to a larger width to height in the frame, compared to earlier historic aspect ratios.[9] A "feature-length film", or "feature film", is of a conventional full length, usually 60 minutes or more, and can commercially stand by itself without other films in a ticketed screening.[10] A "short" is a film that is not as long as a feature-length film, often screened with other shorts, or preceding a feature-length film. An "independent" is a film made outside the conventional film industry.
|
70 |
+
|
71 |
+
In US usage, one talks of a "screening" or "projection" of a movie or video on a screen at a public or private "theater." In British English, a "film showing" happens at a cinema (never a "theatre", which is a different medium and place altogether).[8] A cinema usually refers to an arena designed specifically to exhibit films, where the screen is affixed to a wall, while a theater usually refers to a place where live, non-recorded action or combination thereof occurs from a podium or other type of stage, including the amphitheater. Theaters can still screen movies in them, though the theater would be retrofitted to do so. One might propose "going to the cinema" when referring to the activity, or sometimes "to the pictures" in British English, whereas the US expression is usually "going to the movies." A cinema usually shows a mass-marketed movie using a front-projection screen process with either a film projector or, more recently, with a digital projector. But, cinemas may also show theatrical movies from their home video transfers that include Blu-ray Disc, DVD, and videocassette when they possess sufficient projection quality or based upon need, such as movies that exist only in their transferred state, which may be due to the loss or deterioration of the film master and prints from which the movie originally existed. Due to the advent of digital film production and distribution, physical film might be absent entirely. A "double feature" is a screening of two independently marketed, stand-alone feature films. A "viewing" is a watching of a film. "Sales" and "at the box office" refer to tickets sold at a theater, or more currently, rights sold for individual showings. A "release" is the distribution and often simultaneous screening of a film. A "preview" is a screening in advance of the main release.
|
72 |
+
|
73 |
+
Any film may also have a "sequel", which portrays events following those in the film. Bride of Frankenstein is an early example. When there is more than one film with the same characters, story arcs, or subject themes, these movies become a "series," such as the James Bond series. Existing outside a specific story timeline does not usually exclude a film from being part of a series. A film that portrays events occurring earlier in a timeline with those in another film, but is released after that film, is sometimes called a "prequel," an example being Butch and Sundance: The Early Days.
|
74 |
+
|
75 |
+
The "credits," or "end credits," is a list that gives credit to the people involved in the production of a film. Films from before the 1970s usually start a film with credits, often ending with only a title card, saying "The End" or some equivalent, often an equivalent that depends on the language of the production[citation needed]. From then onward, a film's credits usually appear at the end of most films. However, films with credits that end a film often repeat some credits at or near the start of a film and therefore appear twice, such as that film's acting leads, while less frequently some appearing near or at the beginning only appear there, not at the end, which often happens to the director's credit. The credits appearing at or near the beginning of a film are usually called "titles" or "beginning titles." A post-credits scene is a scene shown after the end of the credits. Ferris Bueller's Day Off has a post-credit scene in which Ferris tells the audience that the film is over and they should go home.
|
76 |
+
|
77 |
+
A film's "cast" refers to a collection of the actors and actresses who appear, or "star," in a film. A star is an actor or actress, often a popular one, and in many cases, a celebrity who plays a central character in a film. Occasionally the word can also be used to refer to the fame of other members of the crew, such as a director or other personality, such as Martin Scorsese. A "crew" is usually interpreted as the people involved in a film's physical construction outside cast participation, and it could include directors, film editors, photographers, grips, gaffers, set decorators, prop masters, and costume designers. A person can both be part of a film's cast and crew, such as Woody Allen, who directed and starred in Take the Money and Run.
|
78 |
+
|
79 |
+
A "film goer," "movie goer," or "film buff" is a person who likes or often attends films and movies, and any of these, though more often the latter, could also see oneself as a student to films and movies or the filmic process. Intense interest in films, film theory, and film criticism, is known as cinephilia. A film enthusiast is known as a cinephile or cineaste.
|
80 |
+
|
81 |
+
A preview performance refers to a showing of a film to a select audience, usually for the purposes of corporate promotions, before the public film premiere itself. Previews are sometimes used to judge audience reaction, which if unexpectedly negative, may result in recutting or even refilming certain sections based on the audience response. One example of a film that was changed after a negative response from the test screening is 1982's First Blood. After the test audience responded very negatively to the death of protagonist John Rambo, a Vietnam veteran, at the end of the film, the company wrote and re-shot a new ending in which the character survives.[11]
|
82 |
+
|
83 |
+
Trailers or previews are advertisements for films that will be shown in 1 to 3 months at a cinema. Back in the early days of cinema, with theaters that had only one or two screens, only certain trailers were shown for the films that were going to be shown there. Later, when theaters added more screens or new theaters were built with a lot of screens, all different trailers were shown even if they weren't going to play that film in that theater. Film studios realized that the more trailers that were shown (even if it wasn't going to be shown in that particular theater) the more patrons would go to a different theater to see the film when it came out. The term "trailer" comes from their having originally been shown at the end of a film program. That practice did not last long because patrons tended to leave the theater after the films ended, but the name has stuck. Trailers are now shown before the film (or the "A film" in a double feature program) begins. Film trailers are also common on DVDs and Blu-ray Discs, as well as on the Internet and mobile devices. Trailers are created to be engaging and interesting for viewers. As a result, in the Internet era, viewers often seek out trailers to watch them. Of the ten billion videos watched online annually in 2008, film trailers ranked third, after news and user-created videos.[12] Teasers are a much shorter preview or advertisement that lasts only 10 to 30 seconds. Teasers are used to get patrons excited about a film coming out in the next six to twelve months. Teasers may be produced even before the film production is completed.
|
84 |
+
|
85 |
+
Film is used for a range of goals, including education and propaganda. When the purpose is primarily educational, a film is called an "educational film". Examples are recordings of academic lectures and experiments, or a film based on a classic novel. Film may be propaganda, in whole or in part, such as the films made by Leni Riefenstahl in Nazi Germany, US war film trailers during World War II, or artistic films made under Stalin by Sergei Eisenstein. They may also be works of political protest, as in the films of Andrzej Wajda, or more subtly, the films of Andrei Tarkovsky. The same film may be considered educational by some, and propaganda by others as the categorization of a film can be subjective.
|
86 |
+
|
87 |
+
At its core, the means to produce a film depend on the content the filmmaker wishes to show, and the apparatus for displaying it: the zoetrope merely requires a series of images on a strip of paper. Film production can, therefore, take as little as one person with a camera (or even without a camera, as in Stan Brakhage's 1963 film Mothlight), or thousands of actors, extras, and crew members for a live-action, feature-length epic.
|
88 |
+
|
89 |
+
The necessary steps for almost any film can be boiled down to conception, planning, execution, revision, and distribution. The more involved the production, the more significant each of the steps becomes. In a typical production cycle of a Hollywood-style film, these main stages are defined as development, pre-production, production, post-production and distribution.
|
90 |
+
|
91 |
+
This production cycle usually takes three years. The first year is taken up with development. The second year comprises preproduction and production. The third year, post-production and distribution. The bigger the production, the more resources it takes, and the more important financing becomes; most feature films are artistic works from the creators' perspective (e.g., film director, cinematographer, screenwriter) and for-profit business entities for the production companies.
|
92 |
+
|
93 |
+
A film crew is a group of people hired by a film company, employed during the "production" or "photography" phase, for the purpose of producing a film or motion picture. Crew is distinguished from cast, who are the actors who appear in front of the camera or provide voices for characters in the film. The crew interacts with but is also distinct from the production staff, consisting of producers, managers, company representatives, their assistants, and those whose primary responsibility falls in pre-production or post-production phases, such as screenwriters and film editors. Communication between production and crew generally passes through the director and his/her staff of assistants. Medium-to-large crews are generally divided into departments with well-defined hierarchies and standards for interaction and cooperation between the departments. Other than acting, the crew handles everything in the photography phase: props and costumes, shooting, sound, electrics (i.e., lights), sets, and production special effects. Caterers (known in the film industry as "craft services") are usually not considered part of the crew.
|
94 |
+
|
95 |
+
Film stock consists of a transparent celluloid, acetate, or polyester base coated with an emulsion containing light-sensitive chemicals. Cellulose nitrate was the first type of film base used to record motion pictures, but due to its flammability was eventually replaced by safer materials. Stock widths and the film format for images on the reel have had a rich history, though most large commercial films are still shot on 35 mm film and distributed to theaters as 35 mm prints.
|
96 |
+
Originally moving picture film was shot and projected at various speeds using hand-cranked cameras and projectors; though 1000 frames per minute (16⅔ frame/s) is generally cited as a standard silent speed, research indicates most films were shot between 16 frame/s and 23 frame/s and projected from 18 frame/s on up (often reels included instructions on how fast each scene should be shown).[13] When sound film was introduced in the late 1920s, a constant speed was required for the sound head. 24 frames per second was chosen because it was the slowest (and thus cheapest) speed which allowed for sufficient sound quality.[citation needed] Improvements since the late 19th century include the mechanization of cameras – allowing them to record at a consistent speed, quiet camera design – allowing sound recorded on-set to be usable without requiring large "blimps" to encase the camera, the invention of more sophisticated filmstocks and lenses, allowing directors to film in increasingly dim conditions, and the development of synchronized sound, allowing sound to be recorded at exactly the same speed as its corresponding action. The soundtrack can be recorded separately from shooting the film, but for live-action pictures, many parts of the soundtrack are usually recorded simultaneously.
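The relationship between these speeds, frame counts, and running times is simple arithmetic; a minimal sketch, assuming the standard figures of 16 frames per foot of 35 mm film and a 1,000-foot reel (neither figure is stated in the paragraph above):

```python
# Frame-rate and running-time arithmetic for silent and sound speeds (illustrative).

def frames_per_second(frames_per_minute: float) -> float:
    """Convert a cranking rate in frames per minute to frames per second."""
    return frames_per_minute / 60.0

def running_time_minutes(total_frames: int, fps: float) -> float:
    """Running time, in minutes, of `total_frames` projected at `fps`."""
    return total_frames / fps / 60.0

silent_fps = frames_per_second(1000)      # the cited 1000 frames per minute
print(round(silent_fps, 2))               # 16.67 frame/s

reel_frames = 1000 * 16                   # assumed: 1,000 ft reel, 16 frames per foot
print(round(running_time_minutes(reel_frames, silent_fps), 1))  # ~16.0 min at silent speed
print(round(running_time_minutes(reel_frames, 24), 1))          # ~11.1 min at sound speed
```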
|
97 |
+
|
98 |
+
As a medium, film is not limited to motion pictures, since the technology developed as the basis for photography. It can be used to present a progressive sequence of still images in the form of a slideshow. Film has also been incorporated into multimedia presentations and often has importance as primary historical documentation. However, historic films have problems in terms of preservation and storage, and the motion picture industry is exploring many alternatives. Most films on cellulose nitrate base have been copied onto modern safety films. Some studios save color films through the use of separation masters: three B&W negatives each exposed through red, green, or blue filters (essentially a reverse of the Technicolor process). Digital methods have also been used to restore films, although their continued obsolescence cycle makes them (as of 2006) a poor choice for long-term preservation. Film preservation of decaying film stock is a matter of concern to both film historians and archivists and to companies interested in preserving their existing products in order to make them available to future generations (and thereby increase revenue). Preservation is generally a higher concern for nitrate and single-strip color films, due to their high decay rates; black-and-white films on safety bases and color films preserved on Technicolor imbibition prints tend to keep up much better, assuming proper handling and storage.
|
99 |
+
|
100 |
+
Some films in recent decades have been recorded using analog video technology similar to that used in television production. Modern digital video cameras and digital projectors are gaining ground as well. These approaches are preferred by some film-makers, especially because footage shot with digital cinema can be evaluated and edited with non-linear editing systems (NLE) without waiting for the film stock to be processed. The migration was gradual, and as of 2005, most major motion pictures were still shot on film.[needs update]
|
101 |
+
|
102 |
+
Independent filmmaking often takes place outside Hollywood, or other major studio systems. An independent film (or indie film) is a film initially produced without financing or distribution from a major film studio. Creative, business and technological reasons have all contributed to the growth of the indie film scene in the late 20th and early 21st century. On the business side, the costs of big-budget studio films also lead to conservative choices in cast and crew. There is a trend in Hollywood towards co-financing (over two-thirds of the films put out by Warner Bros. in 2000 were joint ventures, up from 10% in 1987).[14] A hopeful director is almost never given the opportunity to get a job on a big-budget studio film unless he or she has significant industry experience in film or television. Also, the studios rarely produce films with unknown actors, particularly in lead roles.
|
103 |
+
|
104 |
+
Before the advent of digital alternatives, the cost of professional film equipment and stock was also a hurdle to being able to produce, direct, or star in a traditional studio film. But the advent of consumer camcorders in 1985, and more importantly, the arrival of high-resolution digital video in the early 1990s, have lowered the technology barrier to film production significantly. Both production and post-production costs have been significantly lowered; in the 2000s, the hardware and software for post-production can be installed in a commodity-based personal computer. Technologies such as DVDs, FireWire connections and a wide variety of professional and consumer-grade video editing software make film-making relatively affordable.
|
105 |
+
|
106 |
+
Since the introduction of digital video DV technology, the means of production have become more democratized. Filmmakers can conceivably shoot a film with a digital video camera and edit the film, create and edit the sound and music, and mix the final cut on a high-end home computer. However, while the means of production may be democratized, financing, distribution, and marketing remain difficult to accomplish outside the traditional system. Most independent filmmakers rely on film festivals to get their films noticed and sold for distribution. The arrival of internet-based video websites such as YouTube and Veoh has further changed the filmmaking landscape, enabling indie filmmakers to make their films available to the public.
|
107 |
+
|
108 |
+
An open content film is much like an independent film, but it is produced through open collaborations; its source material is available under a license which is permissive enough to allow other parties to create fan fiction or derivative works, rather than under a traditional copyright. Like independent filmmaking, open source filmmaking takes place outside Hollywood or other major studio systems.
|
109 |
+
|
110 |
+
A fan film is a film or video inspired by a film, television program, comic book or a similar source, created by fans rather than by the source's copyright holders or creators. Fan filmmakers have traditionally been amateurs, but some of the most notable films have actually been produced by professional filmmakers as film school class projects or as demonstration reels. Fan films vary tremendously in length, from short faux-teaser trailers for non-existent motion pictures to rarer full-length motion pictures.
|
111 |
+
|
112 |
+
Film distribution is the process through which a film is made available for viewing by an audience. This is normally the task of a professional film distributor, who would determine the marketing strategy of the film, the media by which a film is to be exhibited or made available for viewing, and may set the release date and other matters. The film may be exhibited directly to the public either through a movie theater (historically the main way films were distributed) or television for personal home viewing (including on DVD-Video or Blu-ray Disc, video-on-demand, online downloading, television programs through broadcast syndication etc.). Other ways of distributing a film include rental or personal purchase of the film in a variety of media and formats, such as VHS tape or DVD, or Internet downloading or streaming using a computer.
|
113 |
+
|
114 |
+
Animation is a technique in which each frame of a film is produced individually, whether generated as a computer graphic, or by photographing a drawn image, or by repeatedly making small changes to a model unit (see claymation and stop motion), and then photographing the result with a special animation camera. When the frames are strung together and the resulting film is viewed at a speed of 16 or more frames per second, there is an illusion of continuous movement (due to the phi phenomenon). Generating such a film is very labor-intensive and tedious, though the development of computer animation has greatly sped up the process. Because animation is very time-consuming and often very expensive to produce, the majority of animation for TV and films comes from professional animation studios. However, the field of independent animation has existed at least since the 1950s, with animation being produced by independent studios (and sometimes by a single person). Several independent animation producers have gone on to enter the professional animation industry.
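The labor involved scales directly with frame rate and running time, which a rough sketch makes concrete (the 90-second running time and the "on twos" economy are illustrative assumptions, not figures from the text; holding each drawing for two frames is one of the shortcuts limited animation, discussed below, relies on):

```python
# Rough count of individually produced images needed for a piece of animation.
import math

def images_needed(run_time_seconds: float, fps: float, held_frames: int = 1) -> int:
    """Distinct images required when each image is held for `held_frames` frames.

    held_frames=1 is full animation ("on ones"); held_frames=2 ("on twos")
    halves the drawing count, a common economy in limited animation.
    """
    total_frames = round(run_time_seconds * fps)
    return math.ceil(total_frames / held_frames)

print(images_needed(90, 24))                 # 2160 drawings for 90 s on ones
print(images_needed(90, 24, held_frames=2))  # 1080 drawings on twos
```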
|
115 |
+
|
116 |
+
Limited animation is a way of increasing production and decreasing costs of animation by using "short cuts" in the animation process. This method was pioneered by UPA and popularized by Hanna-Barbera in the United States, and by Osamu Tezuka in Japan, and adapted by other studios as cartoons moved from movie theaters to television.[15] Although most animation studios are now using digital technologies in their productions, there is a specific style of animation that depends on film. Camera-less animation, made famous by film-makers like Norman McLaren, Len Lye, and Stan Brakhage, is painted and drawn directly onto pieces of film, and then run through a projector.
|
en/1138.html.txt
ADDED
@@ -0,0 +1,116 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Film, also called movie, motion picture or moving picture, is a visual art-form used to simulate experiences that communicate ideas, stories, perceptions, feelings, beauty, or atmosphere through the use of moving images. These images are generally accompanied by sound, and more rarely, other sensory stimulations.[1] The word "cinema", short for cinematography, is often used to refer to filmmaking and the film industry, and to the art form that is the result of it.
|
6 |
+
|
7 |
+
The moving images of a film are created by photographing actual scenes with a motion-picture camera, by photographing drawings or miniature models using traditional animation techniques, by means of CGI and computer animation, or by a combination of some or all of these techniques, and other visual effects.
|
8 |
+
|
9 |
+
Traditionally, films were recorded onto celluloid film stock through a photochemical process and then shown through a movie projector onto a large screen. Contemporary films are often fully digital through the entire process of production, distribution, and exhibition, while films recorded in a photochemical form traditionally included an analogous optical soundtrack (a graphic recording of the spoken words, music and other sounds that accompany the images which runs along a portion of the film exclusively reserved for it, and is not projected).
|
10 |
+
|
11 |
+
Films are cultural artifacts created by specific cultures. They reflect those cultures, and, in turn, affect them. Film is considered to be an important art form, a source of popular entertainment, and a powerful medium for educating—or indoctrinating—citizens. The visual basis of film gives it a universal power of communication. Some films have become popular worldwide attractions through the use of dubbing or subtitles to translate the dialog into other languages.
|
12 |
+
|
13 |
+
The individual images that make up a film are called frames. In the projection of traditional celluloid films, a rotating shutter causes intervals of darkness as each frame, in turn, is moved into position to be projected, but the viewer does not notice the interruptions because of an effect known as persistence of vision, whereby the eye retains a visual image for a fraction of a second after its source disappears. The perception of motion is partly due to a psychological effect called the phi phenomenon.
|
14 |
+
|
15 |
+
The name "film" originates from the fact that photographic film (also called film stock) has historically been the medium for recording and displaying motion pictures. Many other terms exist for an individual motion-picture, including picture, picture show, moving picture, photoplay, and flick. The most common term in the United States is movie, while in Europe film is preferred. Common terms for the field in general include the big screen, the silver screen, the movies, and cinema; the last of these is commonly used, as an overarching term, in scholarly texts and critical essays. In early years, the word sheet was sometimes used instead of screen.
|
16 |
+
|
17 |
+
The art of film has drawn on several earlier traditions in fields such as oral storytelling, literature, theatre and visual arts. Forms of art and entertainment that had already featured moving and/or projected images include:
|
18 |
+
|
19 |
+
The stroboscopic animation principle was introduced in 1833 with the phénakisticope and also applied in the zoetrope since 1866, the flip book since 1868, and the praxinoscope since 1877, before it became the basic principle for cinematography.
|
20 |
+
|
21 |
+
Experiments with early phenakisticope-based animation projectors were made at least as early as 1843. Jules Duboscq marketed phénakisticope projection systems in France between 1853 and the 1890s.
|
22 |
+
|
23 |
+
Photography was introduced in 1839, but at first photographic emulsions needed such long exposures that the recording of moving subjects seemed impossible. At least as early as 1844, photographic series of subjects posed in different positions have been created to either suggest a motion sequence or to document a range of different viewing angles. The advent of stereoscopic photography, with early experiments in the 1840s and commercial success since the early 1850s, raised interest in completing the photographic medium with the addition of means to capture colour and motion. In 1849, Joseph Plateau published about the idea to combine his invention of the phénakisticope with the stereoscope, as suggested to him by stereoscope inventor Charles Wheatstone, and use photographs of plaster sculptures in different positions to be animated in the combined device. In 1852, Jules Duboscq patented such an instrument as the "Stéréoscope-fantascope, ou Bïoscope". He marginally advertised it for a short period. It was a commercial failure and no complete instrument has yet been located, but one bioscope disc has been preserved in the Plateau collection of the Ghent University. It has stereoscopic photographs of a machine.
|
24 |
+
|
25 |
+
By the late 1850s the first examples of instantaneous photography came about and provided hope that motion photography would soon be possible, but it took a few decades before it was successfully combined with a method to record series of sequential images in real-time. In 1878, Eadweard Muybridge eventually managed to take a series of photographs of a running horse with a battery of cameras in a line along the track and published the results as The Horse in Motion on cabinet cards. Muybridge, as well as Étienne-Jules Marey, Ottomar Anschütz and many others would create many more chronophotography studies. Muybridge had the contours of dozens of his chronophotographic series traced onto glass discs and projected them with his zoopraxiscope in his lectures from 1880 to 1895. Anschütz developed his own Electrotachyscope in 1887 to project 24 diapositive photographic images on glass disks as moving images, looped as long as deemed interesting for the audience.
|
26 |
+
|
27 |
+
Émile Reynaud already mentioned the possibility of projecting the images in his 1877 patent application for the praxinoscope. He presented a praxinoscope projection device at the Société française de photographie on 4 June 1880, but did not market his praxinoscope as a projection device before 1882. He then further developed the device into the Théâtre Optique, which could project longer sequences with separate backgrounds, patented in 1888. He created several movies for the machine by painting images on hundreds of gelatin plates that were mounted into cardboard frames and attached to a cloth band. From 28 October 1892 to March 1900 Reynaud gave over 12,800 shows to a total of over 500,000 visitors at the Musée Grévin in Paris.
|
28 |
+
|
29 |
+
By the end of the 1880s, the introduction of lengths of celluloid photographic film and the invention of motion picture cameras, which could photograph an indefinitely long rapid sequence of images using only one lens, allowed several minutes of action to be captured and stored on a single compact reel of film. Some early films were made to be viewed by one person at a time through a "peep show" device such as the Kinetoscope and the mutoscope. Others were intended for a projector, mechanically similar to the camera and sometimes actually the same machine, which was used to shine an intense light through the processed and printed film and into a projection lens so that these "moving pictures" could be shown tremendously enlarged on a screen for viewing by an entire audience. The first kinetoscope film shown in public exhibition was Blacksmith Scene, produced by Edison Manufacturing Company in 1893. The following year the company would begin Edison Studios, which became an early leader in the film industry with notable early shorts including The Kiss, and would go on to produce close to 1,200 films.
|
30 |
+
|
31 |
+
The first public screenings of films at which admission was charged were made in 1895 by the American Woodville Latham and his sons, using films produced by their Eidoloscope company,[2] and by the – arguably better known – French brothers Auguste and Louis Lumière with ten of their own productions.[citation needed] Private screenings had preceded these by several months, with Latham's slightly predating the Lumière brothers'.[citation needed]
|
32 |
+
|
33 |
+
The earliest films were simply one static shot that showed an event or action with no editing or other cinematic techniques. Around the turn of the 20th century, films started stringing several scenes together to tell a story. The scenes were later broken up into multiple shots photographed from different distances and angles. Other techniques such as camera movement were developed as effective ways to tell a story with film. Until sound film became commercially practical in the late 1920s, motion pictures were a purely visual art, but these innovative silent films had gained a hold on the public imagination. Rather than leave audiences with only the noise of the projector as an accompaniment, theater owners hired a pianist or organist or, in large urban theaters, a full orchestra to play music that fit the mood of the film at any given moment. By the early 1920s, most films came with a prepared list of sheet music to be used for this purpose, and complete film scores were composed for major productions.
|
34 |
+
|
35 |
+
The rise of European cinema was interrupted by the outbreak of World War I, while the film industry in the United States flourished with the rise of Hollywood, typified most prominently by the innovative work of D. W. Griffith in The Birth of a Nation (1915) and Intolerance (1916). However, in the 1920s, European filmmakers such as Eisenstein, F. W. Murnau and Fritz Lang, in many ways inspired by the meteoric wartime progress of film through Griffith, along with the contributions of Charles Chaplin, Buster Keaton and others, quickly caught up with American film-making and continued to further advance the medium.
|
36 |
+
|
37 |
+
In the 1920s, the development of electronic sound recording technologies made it practical to incorporate a soundtrack of speech, music and sound effects synchronized with the action on the screen.[citation needed] The resulting sound films were initially distinguished from the usual silent "moving pictures" or "movies" by calling them "talking pictures" or "talkies."[citation needed] The revolution they wrought was swift. By 1930, silent film was practically extinct in the US and already being referred to as "the old medium."[citation needed]
|
38 |
+
|
39 |
+
Another major technological development was the introduction of "natural color," which meant color that was photographically recorded from nature rather than added to black-and-white prints by hand-coloring, stencil-coloring or other arbitrary procedures, although the earliest processes typically yielded colors which were far from "natural" in appearance.[citation needed] While the advent of sound films quickly made silent films and theater musicians obsolete, color replaced black-and-white much more gradually.[citation needed] The pivotal innovation was the introduction of the three-strip version of the Technicolor process, first used for animated cartoons in 1932, then also for live-action short films and isolated sequences in a few feature films, then for an entire feature film, Becky Sharp, in 1935. The expense of the process was daunting, but favorable public response in the form of increased box office receipts usually justified the added cost. The number of films made in color slowly increased year after year.
|
40 |
+
|
41 |
+
In the early 1950s, the proliferation of black-and-white television started seriously depressing North American theater attendance.[citation needed] In an attempt to lure audiences back into theaters, bigger screens were installed, widescreen processes, polarized 3D projection, and stereophonic sound were introduced, and more films were made in color, which soon became the rule rather than the exception. Some important mainstream Hollywood films were still being made in black-and-white as late as the mid-1960s, but they marked the end of an era. Color television receivers had been available in the US since the mid-1950s, but at first, they were very expensive and few broadcasts were in color. During the 1960s, prices gradually came down, color broadcasts became common, and sales boomed. The overwhelming public verdict in favor of color was clear. After the final flurry of black-and-white films had been released in mid-decade, all Hollywood studio productions were filmed in color, with the usual exceptions made only at the insistence of "star" filmmakers such as Peter Bogdanovich and Martin Scorsese.[citation needed]
|
42 |
+
|
43 |
+
The decades following the decline of the studio system in the 1960s saw changes in the production and style of film. Various New Wave movements (including the French New Wave, Indian New Wave, Japanese New Wave, and New Hollywood) and the rise of film-school-educated independent filmmakers contributed to the changes the medium experienced in the latter half of the 20th century. Digital technology has been the driving force for change throughout the 1990s and into the 2000s. Digital 3D projection largely replaced earlier problem-prone 3D film systems and has become popular in the early 2010s.[citation needed]
"Film theory" seeks to develop concise and systematic concepts that apply to the study of film as art. The concept of film as an art-form began in 1911 with Ricciotto Canudo's The Birth of the Sixth Art. Formalist film theory, led by Rudolf Arnheim, Béla Balázs, and Siegfried Kracauer, emphasized how film differed from reality and thus could be considered a valid fine art. André Bazin reacted against this theory by arguing that film's artistic essence lay in its ability to mechanically reproduce reality, not in its differences from reality, and this gave rise to realist theory. More recent analysis spurred by Jacques Lacan's psychoanalysis and Ferdinand de Saussure's semiotics among other things has given rise to psychoanalytic film theory, structuralist film theory, feminist film theory, and others. On the other hand, critics from the analytical philosophy tradition, influenced by Wittgenstein, try to clarify misconceptions used in theoretical studies and produce analysis of a film's vocabulary and its link to a form of life.
Film is considered to have its own language. James Monaco wrote a classic text on film theory, titled "How to Read a Film," that addresses this. Director Ingmar Bergman famously said, "Andrei Tarkovsky for me is the greatest director, the one who invented a new language, true to the nature of film, as it captures life as a reflection, life as a dream." An example of the language is a sequence of back and forth images of one speaking actor's left profile, followed by another speaking actor's right profile, then a repetition of this, which is a language understood by the audience to indicate a conversation. This describes another theory of film, the 180-degree rule, as a visual story-telling device with an ability to place a viewer in a context of being psychologically present through the use of visual composition and editing. The "Hollywood style" includes this narrative theory, due to the overwhelming practice of the rule by movie studios based in Hollywood, California, during film's classical era. Another example of cinematic language is having a shot that zooms in on the forehead of an actor with an expression of silent reflection that cuts to a shot of a younger actor who vaguely resembles the first actor, indicating that the first person is remembering a past self, an edit of compositions that causes a time transition.
Montage is the technique by which separate pieces of film are selected, edited, and then pieced together to make a new section of film. A scene could show a man going into battle, with flashbacks to his youth and to his home-life and with added special effects, placed into the film after filming is complete. As these were all filmed separately, and perhaps with different actors, the final version is called a montage. Directors developed a theory of montage, beginning with Eisenstein and the complex juxtaposition of images in his film Battleship Potemkin.[3] Incorporation of musical and visual counterpoint, and scene development through mise en scene, editing, and effects has led to more complex techniques comparable to those used in opera and ballet.
— Roger Ebert (1986)[4]
Film criticism is the analysis and evaluation of films. In general, these works can be divided into two categories: academic criticism by film scholars and journalistic film criticism that appears regularly in newspapers and other media. Film critics working for newspapers, magazines, and broadcast media mainly review new releases. Normally they only see any given film once and have only a day or two to formulate their opinions. Despite this, critics have an important impact on the audience response and attendance at films, especially those of certain genres. Mass marketed action, horror, and comedy films tend not to be greatly affected by a critic's overall judgment of a film. The plot summary and description of a film and the assessment of the director's and screenwriters' work that makes up the majority of most film reviews can still have an important impact on whether people decide to see a film. For prestige films such as most dramas and art films, the influence of reviews is important. Poor reviews from leading critics at major papers and magazines will often reduce audience interest and attendance.
The impact of a reviewer on a given film's box office performance is a matter of debate. Some observers claim that movie marketing in the 2000s is so intense, well-coordinated and well financed that reviewers cannot prevent a poorly written or filmed blockbuster from attaining market success. However, the cataclysmic failure of some heavily promoted films which were harshly reviewed, as well as the unexpected success of critically praised independent films indicates that extreme critical reactions can have considerable influence. Other observers note that positive film reviews have been shown to spark interest in little-known films. Conversely, there have been several films in which film companies have so little confidence that they refuse to give reviewers an advanced viewing to avoid widespread panning of the film. However, this usually backfires, as reviewers are wise to the tactic and warn the public that the film may not be worth seeing and the films often do poorly as a result. Journalist film critics are sometimes called film reviewers. Critics who take a more academic approach to films, through publishing in film journals and writing books about films using film theory or film studies approaches, study how film and filming techniques work, and what effect they have on people. Rather than having their reviews published in newspapers or appearing on television, their articles are published in scholarly journals or up-market magazines. They also tend to be affiliated with colleges or universities as professors or instructors.
The making and showing of motion pictures became a source of profit almost as soon as the process was invented. Upon seeing how successful their new invention, and its product, was in their native France, the Lumières quickly set about touring the Continent to exhibit the first films privately to royalty and publicly to the masses. In each country, they would normally add new, local scenes to their catalogue and, quickly enough, found local entrepreneurs in the various countries of Europe to buy their equipment and photograph, export, import, and screen additional product commercially. The Oberammergau Passion Play of 1898[citation needed] was the first commercial motion picture ever produced. Other pictures soon followed, and motion pictures became a separate industry that overshadowed the vaudeville world. Dedicated theaters and companies formed specifically to produce and distribute films, while motion picture actors became major celebrities and commanded huge fees for their performances. By 1917 Charlie Chaplin had a contract that called for an annual salary of one million dollars. From 1931 to 1956, film was also the only image storage and playback system for television programming until the introduction of videotape recorders.
In the United States, much of the film industry is centered around Hollywood, California. Other regional centers exist in many parts of the world, such as Mumbai-centered Bollywood, the Indian film industry's Hindi cinema which produces the largest number of films in the world.[5] Though the expense involved in making films has led cinema production to concentrate under the auspices of movie studios, recent advances in affordable film making equipment have allowed independent film productions to flourish.
Profit is a key force in the industry, due to the costly and risky nature of filmmaking; many films have large cost overruns, an example being Kevin Costner's Waterworld. Yet many filmmakers strive to create works of lasting social significance. The Academy Awards (also known as "the Oscars") are the most prominent film awards in the United States, providing recognition each year to films, based on their artistic merits. There is also a large industry for educational and instructional films made in lieu of or in addition to lectures and texts. Revenue in the industry is sometimes volatile due to the reliance on blockbuster films released in movie theaters. The rise of alternative home entertainment has raised questions about the future of the cinema industry, and Hollywood employment has become less reliable, particularly for medium and low-budget films.[6]
Derivative academic fields of study may both interact with and develop independently of filmmaking, as in film theory and analysis. Fields of academic study have been created that are derivative or dependent on the existence of film, such as film criticism, film history, divisions of film propaganda in authoritarian governments, or psychological research on subliminal effects (e.g., of a flashing soda can during a screening). These fields may further create derivative fields, such as a movie review section in a newspaper or a television guide. Sub-industries can spin off from film, such as popcorn makers and film-related toys (e.g., Star Wars figures). Sub-industries of pre-existing industries may deal specifically with film, such as product placement and other advertising within films.
The terminology used for describing motion pictures varies considerably between British and American English. In British usage, the name of the medium is "film". The word "movie" is understood but seldom used.[7][8] Additionally, "the pictures" (plural) is used semi-frequently to refer to the place where movies are exhibited, while in American English this may be called "the movies", but it is becoming outdated. In other countries, the place where movies are exhibited may be called a cinema or movie theatre. By contrast, in the United States, "movie" is the predominant form. Although the words "film" and "movie" are sometimes used interchangeably, "film" is more often used when considering artistic, theoretical, or technical aspects. The term "movies" more often refers to entertainment or commercial aspects, such as where to go for a fun evening on a date. For example, a book titled "How to Understand a Film" would probably be about the aesthetics or theory of film, while a book entitled "Let's Go to the Movies" would probably be about the history of entertaining movies and blockbusters.
Further terminology is used to distinguish various forms and media used in the film industry. "Motion pictures" and "moving pictures" are frequently used terms for film and movie productions specifically intended for theatrical exhibition, such as, for instance, Batman. "DVD" and "videotape" are video formats that can reproduce a photochemical film. A reproduction based on such is called a "transfer." After the advent of theatrical film as an industry, the television industry began using videotape as a recording medium. For many decades, tape was solely an analog medium onto which moving images could be either recorded or transferred. "Film" and "filming" refer to the photochemical medium that chemically records a visual image and the act of recording respectively. However, the act of shooting images with other visual media, such as with a digital camera, is still called "filming" and the resulting works often called "films" as interchangeable to "movies," despite not being shot on film. "Silent films" need not be utterly silent, but are films and movies without an audible dialogue, including those that have a musical accompaniment. The word, "Talkies," refers to the earliest sound films created to have audible dialogue recorded for playback along with the film, regardless of a musical accompaniment. "Cinema" either broadly encompasses both films and movies, or it is roughly synonymous with film and theatrical exhibition, and both are capitalized when referring to a category of art. The "silver screen" refers to the projection screen used to exhibit films and, by extension, is also used as a metonym for the entire film industry.
"Widescreen" refers to a larger width to height in the frame, compared to earlier historic aspect ratios.[9] A "feature-length film", or "feature film", is of a conventional full length, usually 60 minutes or more, and can commercially stand by itself without other films in a ticketed screening.[10] A "short" is a film that is not as long as a feature-length film, often screened with other shorts, or preceding a feature-length film. An "independent" is a film made outside the conventional film industry.
In US usage, one talks of a "screening" or "projection" of a movie or video on a screen at a public or private "theater." In British English, a "film showing" happens at a cinema (never a "theatre", which is a different medium and place altogether).[8] A cinema usually refers to an arena designed specifically to exhibit films, where the screen is affixed to a wall, while a theater usually refers to a place where live, non-recorded action or combination thereof occurs from a podium or other type of stage, including the amphitheater. Theaters can still screen movies in them, though the theater would be retrofitted to do so. One might propose "going to the cinema" when referring to the activity, or sometimes "to the pictures" in British English, whereas the US expression is usually "going to the movies." A cinema usually shows a mass-marketed movie using a front-projection screen process with either a film projector or, more recently, with a digital projector. But, cinemas may also show theatrical movies from their home video transfers that include Blu-ray Disc, DVD, and videocassette when they possess sufficient projection quality or based upon need, such as movies that exist only in their transferred state, which may be due to the loss or deterioration of the film master and prints from which the movie originally existed. Due to the advent of digital film production and distribution, physical film might be absent entirely. A "double feature" is a screening of two independently marketed, stand-alone feature films. A "viewing" is a watching of a film. "Sales" and "at the box office" refer to tickets sold at a theater, or more currently, rights sold for individual showings. A "release" is the distribution and often simultaneous screening of a film. A "preview" is a screening in advance of the main release.
Any film may also have a "sequel", which portrays events following those in the film. Bride of Frankenstein is an early example. When there is more than one film with the same characters, story arcs, or subject themes, these movies become a "series," such as the James Bond series. Existing outside a specific story timeline does not usually exclude a film from being part of a series. A film that portrays events occurring earlier in a timeline with those in another film, but is released after that film, is sometimes called a "prequel," an example being Butch and Sundance: The Early Days.
The "credits," or "end credits," is a list that gives credit to the people involved in the production of a film. Films from before the 1970s usually start a film with credits, often ending with only a title card, saying "The End" or some equivalent, often an equivalent that depends on the language of the production[citation needed]. From then onward, a film's credits usually appear at the end of most films. However, films with credits that end a film often repeat some credits at or near the start of a film and therefore appear twice, such as that film's acting leads, while less frequently some appearing near or at the beginning only appear there, not at the end, which often happens to the director's credit. The credits appearing at or near the beginning of a film are usually called "titles" or "beginning titles." A post-credits scene is a scene shown after the end of the credits. Ferris Bueller's Day Off has a post-credit scene in which Ferris tells the audience that the film is over and they should go home.
A film's "cast" refers to a collection of the actors and actresses who appear, or "star," in a film. A star is an actor or actress, often a popular one, and in many cases, a celebrity who plays a central character in a film. Occasionally the word can also be used to refer to the fame of other members of the crew, such as a director or other personality, such as Martin Scorsese. A "crew" is usually interpreted as the people involved in a film's physical construction outside cast participation, and it could include directors, film editors, photographers, grips, gaffers, set decorators, prop masters, and costume designers. A person can both be part of a film's cast and crew, such as Woody Allen, who directed and starred in Take the Money and Run.
A "film goer," "movie goer," or "film buff" is a person who likes or often attends films and movies, and any of these, though more often the latter, could also see oneself as a student to films and movies or the filmic process. Intense interest in films, film theory, and film criticism, is known as cinephilia. A film enthusiast is known as a cinephile or cineaste.
A preview performance refers to a showing of a film to a select audience, usually for the purposes of corporate promotions, before the public film premiere itself. Previews are sometimes used to judge audience reaction, which if unexpectedly negative, may result in recutting or even refilming certain sections based on the audience response. One example of a film that was changed after a negative response from the test screening is 1982's First Blood. After the test audience responded very negatively to the death of protagonist John Rambo, a Vietnam veteran, at the end of the film, the company wrote and re-shot a new ending in which the character survives.[11]
Trailers or previews are advertisements for films that will be shown in 1 to 3 months at a cinema. Back in the early days of cinema, with theaters that had only one or two screens, only certain trailers were shown for the films that were going to be shown there. Later, when theaters added more screens or new theaters were built with a lot of screens, all different trailers were shown even if they weren't going to play that film in that theater. Film studios realized that the more trailers that were shown (even if it wasn't going to be shown in that particular theater) the more patrons would go to a different theater to see the film when it came out. The term "trailer" comes from their having originally been shown at the end of a film program. That practice did not last long because patrons tended to leave the theater after the films ended, but the name has stuck. Trailers are now shown before the film (or the "A film" in a double feature program) begins. Film trailers are also common on DVDs and Blu-ray Discs, as well as on the Internet and mobile devices. Trailers are created to be engaging and interesting for viewers. As a result, in the Internet era, viewers often seek out trailers to watch them. Of the ten billion videos watched online annually in 2008, film trailers ranked third, after news and user-created videos.[12] Teasers are a much shorter preview or advertisement that lasts only 10 to 30 seconds. Teasers are used to get patrons excited about a film coming out in the next six to twelve months. Teasers may be produced even before the film production is completed.
Film is used for a range of goals, including education and propaganda. When the purpose is primarily educational, a film is called an "educational film". Examples are recordings of academic lectures and experiments, or a film based on a classic novel. Film may be propaganda, in whole or in part, such as the films made by Leni Riefenstahl in Nazi Germany, US war film trailers during World War II, or artistic films made under Stalin by Sergei Eisenstein. They may also be works of political protest, as in the films of Andrzej Wajda, or more subtly, the films of Andrei Tarkovsky. The same film may be considered educational by some, and propaganda by others as the categorization of a film can be subjective.
At its core, the means to produce a film depend on the content the filmmaker wishes to show, and the apparatus for displaying it: the zoetrope merely requires a series of images on a strip of paper. Film production can, therefore, take as little as one person with a camera (or even without a camera, as in Stan Brakhage's 1963 film Mothlight), or thousands of actors, extras, and crew members for a live-action, feature-length epic.
The necessary steps for almost any film can be boiled down to conception, planning, execution, revision, and distribution. The more involved the production, the more significant each of the steps becomes. In a typical production cycle of a Hollywood-style film, these main stages are defined as development, pre-production, production, post-production and distribution.
This production cycle usually takes three years. The first year is taken up with development. The second year comprises preproduction and production. The third year, post-production and distribution. The bigger the production, the more resources it takes, and the more important financing becomes; most feature films are artistic works from the creators' perspective (e.g., film director, cinematographer, screenwriter) and for-profit business entities for the production companies.
A film crew is a group of people hired by a film company, employed during the "production" or "photography" phase, for the purpose of producing a film or motion picture. Crew is distinguished from cast, who are the actors who appear in front of the camera or provide voices for characters in the film. The crew interacts with but is also distinct from the production staff, consisting of producers, managers, company representatives, their assistants, and those whose primary responsibility falls in pre-production or post-production phases, such as screenwriters and film editors. Communication between production and crew generally passes through the director and his/her staff of assistants. Medium-to-large crews are generally divided into departments with well-defined hierarchies and standards for interaction and cooperation between the departments. Other than acting, the crew handles everything in the photography phase: props and costumes, shooting, sound, electrics (i.e., lights), sets, and production special effects. Caterers (known in the film industry as "craft services") are usually not considered part of the crew.
Film stock consists of transparent celluloid, acetate, or polyester base coated with an emulsion containing light-sensitive chemicals. Cellulose nitrate was the first type of film base used to record motion pictures, but due to its flammability was eventually replaced by safer materials. Stock widths and the film format for images on the reel have had a rich history, though most large commercial films are still shot on (and distributed to theaters) as 35 mm prints.
Originally moving picture film was shot and projected at various speeds using hand-cranked cameras and projectors; though 1000 frames per minute (16 2/3 frame/s) is generally cited as a standard silent speed, research indicates most films were shot between 16 frame/s and 23 frame/s and projected from 18 frame/s on up (often reels included instructions on how fast each scene should be shown).[13] When sound film was introduced in the late 1920s, a constant speed was required for the sound head. 24 frames per second was chosen because it was the slowest (and thus cheapest) speed which allowed for sufficient sound quality.[citation needed] Improvements since the late 19th century include the mechanization of cameras – allowing them to record at a consistent speed, quiet camera design – allowing sound recorded on-set to be usable without requiring large "blimps" to encase the camera, the invention of more sophisticated filmstocks and lenses, allowing directors to film in increasingly dim conditions, and the development of synchronized sound, allowing sound to be recorded at exactly the same speed as its corresponding action. The soundtrack can be recorded separately from shooting the film, but for live-action pictures, many parts of the soundtrack are usually recorded simultaneously.
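As a rough illustration of the arithmetic behind these figures, the sketch below converts a cranking rate given in frames per minute to frames per second and shows how projection speed changes the running time of a reel; the frame counts used are hypothetical examples, not historical data.

    # Frame-rate arithmetic for silent and sound projection speeds (illustrative values).
    def frames_per_second(frames_per_minute: float) -> float:
        return frames_per_minute / 60.0

    silent_standard = frames_per_second(1000)   # 1000 frames/min = 16.67 frame/s
    sound_standard = 24.0                        # constant speed adopted for sound film

    total_frames = 16_000                        # hypothetical reel length in frames
    for fps in (16.0, silent_standard, 18.0, sound_standard):
        minutes = total_frames / fps / 60.0
        print(f"{fps:5.2f} frame/s -> {minutes:4.1f} minutes of screen time")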
As a medium, film is not limited to motion pictures, since the technology developed as the basis for photography. It can be used to present a progressive sequence of still images in the form of a slideshow. Film has also been incorporated into multimedia presentations and often has importance as primary historical documentation. However, historic films have problems in terms of preservation and storage, and the motion picture industry is exploring many alternatives. Most films on cellulose nitrate base have been copied onto modern safety films. Some studios save color films through the use of separation masters: three B&W negatives each exposed through red, green, or blue filters (essentially a reverse of the Technicolor process). Digital methods have also been used to restore films, although their continued obsolescence cycle makes them (as of 2006) a poor choice for long-term preservation. Film preservation of decaying film stock is a matter of concern to both film historians and archivists and to companies interested in preserving their existing products in order to make them available to future generations (and thereby increase revenue). Preservation is generally a higher concern for nitrate and single-strip color films, due to their high decay rates; black-and-white films on safety bases and color films preserved on Technicolor imbibition prints tend to keep up much better, assuming proper handling and storage.
Some films in recent decades have been recorded using analog video technology similar to that used in television production. Modern digital video cameras and digital projectors are gaining ground as well. These approaches are preferred by some film-makers, especially because footage shot with digital cinema can be evaluated and edited with non-linear editing systems (NLE) without waiting for the film stock to be processed. The migration was gradual, and as of 2005, most major motion pictures were still shot on film.[needs update]
Independent filmmaking often takes place outside Hollywood, or other major studio systems. An independent film (or indie film) is a film initially produced without financing or distribution from a major film studio. Creative, business and technological reasons have all contributed to the growth of the indie film scene in the late 20th and early 21st century. On the business side, the costs of big-budget studio films also lead to conservative choices in cast and crew. There is a trend in Hollywood towards co-financing (over two-thirds of the films put out by Warner Bros. in 2000 were joint ventures, up from 10% in 1987).[14] A hopeful director is almost never given the opportunity to get a job on a big-budget studio film unless he or she has significant industry experience in film or television. Also, the studios rarely produce films with unknown actors, particularly in lead roles.
Before the advent of digital alternatives, the cost of professional film equipment and stock was also a hurdle to being able to produce, direct, or star in a traditional studio film. But the advent of consumer camcorders in 1985, and more importantly, the arrival of high-resolution digital video in the early 1990s, have lowered the technology barrier to film production significantly. Both production and post-production costs have been significantly lowered; in the 2000s, the hardware and software for post-production can be installed in a commodity-based personal computer. Technologies such as DVDs, FireWire connections and a wide variety of professional and consumer-grade video editing software make film-making relatively affordable.
Since the introduction of digital video DV technology, the means of production have become more democratized. Filmmakers can conceivably shoot a film with a digital video camera and edit the film, create and edit the sound and music, and mix the final cut on a high-end home computer. However, while the means of production may be democratized, financing, distribution, and marketing remain difficult to accomplish outside the traditional system. Most independent filmmakers rely on film festivals to get their films noticed and sold for distribution. The arrival of internet-based video websites such as YouTube and Veoh has further changed the filmmaking landscape, enabling indie filmmakers to make their films available to the public.
An open content film is much like an independent film, but it is produced through open collaborations; its source material is available under a license which is permissive enough to allow other parties to create fan fiction or derivative works, rather than under a traditional copyright. Like independent filmmaking, open source filmmaking takes place outside Hollywood or other major studio systems.
A fan film is a film or video inspired by a film, television program, comic book or a similar source, created by fans rather than by the source's copyright holders or creators. Fan filmmakers have traditionally been amateurs, but some of the most notable films have actually been produced by professional filmmakers as film school class projects or as demonstration reels. Fan films vary tremendously in length, from short faux-teaser trailers for non-existent motion pictures to rarer full-length motion pictures.
Film distribution is the process through which a film is made available for viewing by an audience. This is normally the task of a professional film distributor, who would determine the marketing strategy of the film, the media by which a film is to be exhibited or made available for viewing, and may set the release date and other matters. The film may be exhibited directly to the public either through a movie theater (historically the main way films were distributed) or television for personal home viewing (including on DVD-Video or Blu-ray Disc, video-on-demand, online downloading, television programs through broadcast syndication etc.). Other ways of distributing a film include rental or personal purchase of the film in a variety of media and formats, such as VHS tape or DVD, or Internet downloading or streaming using a computer.
Animation is a technique in which each frame of a film is produced individually, whether generated as a computer graphic, or by photographing a drawn image, or by repeatedly making small changes to a model unit (see claymation and stop motion), and then photographing the result with a special animation camera. When the frames are strung together and the resulting film is viewed at a speed of 16 or more frames per second, there is an illusion of continuous movement (due to the phi phenomenon). Generating such a film is very labor-intensive and tedious, though the development of computer animation has greatly sped up the process. Because animation is very time-consuming and often very expensive to produce, the majority of animation for TV and films comes from professional animation studios. However, the field of independent animation has existed at least since the 1950s, with animation being produced by independent studios (and sometimes by a single person). Several independent animation producers have gone on to enter the professional animation industry.
Limited animation is a way of increasing production and decreasing costs of animation by using "short cuts" in the animation process. This method was pioneered by UPA and popularized by Hanna-Barbera in the United States, and by Osamu Tezuka in Japan, and adapted by other studios as cartoons moved from movie theaters to television.[15] Although most animation studios are now using digital technologies in their productions, there is a specific style of animation that depends on film. Camera-less animation, made famous by film-makers like Norman McLaren, Len Lye, and Stan Brakhage, is painted and drawn directly onto pieces of film, and then run through a projector.
en/1139.html.txt
ADDED
@@ -0,0 +1,3 @@
5 is a number, numeral, and glyph.
5, five or number 5 may also refer to:
en/114.html.txt
ADDED
@@ -0,0 +1,18 @@
Alcides Edgardo Ghiggia Pereyra (pronounced [ˈɡiddʒa]; 22 December 1926 – 16 July 2015) was a Uruguayan-Italian football player, who played as a right winger. He achieved lasting fame for his decisive role in the final match of the 1950 World Cup, and at the time of his death exactly 65 years later, he was also the last surviving player from that game.
He played for the national sides of both Uruguay and Italy during his career. He also played for the club sides Peñarol and Danubio in Uruguay, and A.S. Roma and A.C. Milan in Italy.
In 1950, Ghiggia, then playing for Uruguay, scored the winning goal against Brazil in the final match of that year's World Cup (the Maracanazo). Roberto Muylaert compares the black and white film of the goal with Abraham Zapruder's chance images of the Kennedy assassination in Dallas: he says that the goal and the shot that killed the US President have "the same dramatic pattern ... the same movement ... the same precision of an unstoppable trajectory. They even have the dust in common that was stirred up, here by a rifle and there by Ghiggia's left foot."[1] The match is considered one of the biggest upsets in football history; Ghiggia would later remark that "only three people managed to silence the Maracanã: Frank Sinatra, the Pope, and me."[2]
He managed C.A. Peñarol in 1980.[3]
On 29 December 2009, Brazil honoured Ghiggia by celebrating his decisive goal in the 1950 World Cup. Ghiggia returned to Maracanã Stadium almost 60 years later for this honour and planted his feet in a mould to take his place alongside greats including Brazil's Pelé, Portugal's Eusébio and Germany's Franz Beckenbauer on the Maracanã Stadium walk of fame. Ghiggia was very emotional and thanked Brazil for the warm reception and recognition he received even though the game is considered the most disappointing match in Brazilian football history.[4]
Ghiggia's family was of Ticinese descent originally from Sonvico.[5]
Ghiggia lived out his last years at his home in Las Piedras, Uruguay. He died on 16 July 2015 in a private hospital in Montevideo at the age of 88. Coincidentally, it was the 65th anniversary of the Maracanazo.[6] At the time of his death, Ghiggia was the oldest living World Cup champion.[7]
Ghiggia was the last surviving member from either the Brazilian or Uruguayan squads involved in the historic 1950 World Cup game.[8]
en/1140.html.txt
ADDED
@@ -0,0 +1,195 @@
Sensation is the physical process during which sensory systems respond to stimuli and provide data for perception.[1] A sense is any of the systems involved in sensation. During sensation, sense organs engage in stimulus collection and transduction.[2] Sensation is often differentiated from the related and dependent concept of perception, which processes and integrates sensory information in order to give meaning to and understand detected stimuli, giving rise to subjective perceptual experience, or qualia.[3] Sensation and perception are central to and precede almost all aspects of cognition, behavior and thought.[1]
In organisms, a sensory organ consists of a group of related sensory cells that respond to a specific type of physical stimulus. Via cranial and spinal nerves, the different types of sensory receptor cells (mechanoreceptors, photoreceptors, chemoreceptors, thermoreceptors) in sensory organs transduce sensory information from sensory organs towards the central nervous system, to the sensory cortices in the brain, where sensory signals are further processed and interpreted (perceived).[1][4][5] Sensory systems, or senses, are often divided into external (exteroception) and internal (interoception) sensory systems.[6][7] Sensory modalities or submodalities refer to the way sensory information is encoded or transduced.[4] Multimodality integrates different senses into one unified perceptual experience. For example, information from one sense has the potential to influence how information from another is perceived.[2] Sensation and perception are studied by a variety of related fields, most notably psychophysics, neurobiology, cognitive psychology, and cognitive science.[1]
Humans have a multitude of sensory systems. Human external sensation is based on the sensory organs of the eyes, ears, skin, inner ear, nose, and mouth. The corresponding sensory systems of the visual system (sense of vision), auditory system (sense of hearing), somatosensory system (sense of touch), vestibular system (sense of balance), olfactory system (sense of smell), and gustatory system (sense of taste) contribute, respectively, to the perceptions of vision, hearing, touch, spatial orientation, smell, and taste (flavor).[2][1] Internal sensation, or interoception, detects stimuli from internal organs and tissues. Many internal sensory and perceptual systems exist in humans, including proprioception (body position) and nociception (pain). Further internal chemoreception and osmoreception based sensory systems lead to various perceptions, such as hunger, thirst, suffocation, and nausea, or different involuntary behaviors, such as vomiting.[6][7][8]
Nonhuman animals experience sensation and perception, with varying levels of similarity to and difference from humans and other animal species. For example, mammals, in general, have a stronger sense of smell than humans. Some animal species lack one or more human sensory system analogues, some have sensory systems that are not found in humans, while others process and interpret the same sensory information in very different ways. For example, some animals are able to detect electrical[9] and magnetic fields,[10] air moisture,[11] or polarized light,[12] while others sense and perceive through alternative systems, such as echolocation.[13][14] Recently, it has been suggested that plants and artificial agents may be able to detect and interpret environmental information in an analogous manner to animals.[15][16][17]
Sensory modality refers to the way that information is encoded, which is similar to the idea of transduction. The main sensory modalities can be described on the basis of how each is transduced. Listing all the different sensory modalities, which can number as many as 17, involves separating the major senses into more specific categories, or submodalities, of the larger sense. An individual sensory modality represents the sensation of a specific type of stimulus. For example, the general sensation and perception of touch, which is known as somatosensation, can be separated into light pressure, deep pressure, vibration, itch, pain, temperature, or hair movement, while the general sensation and perception of taste can be separated into submodalities of sweet, salty, sour, bitter, spicy, and umami, all of which are based on different chemicals binding to sensory neurons.[4]
Sensory receptors are the cells or structures that detect sensations. Stimuli in the environment activate specialized receptor cells in the peripheral nervous system. During transduction, physical stimulus is converted into action potential by receptors and transmitted towards the central nervous system for processing.[5] Different types of stimuli are sensed by different types of receptor cells. Receptor cells can be classified into types on the basis of three different criteria: cell type, position, and function. Receptors can be classified structurally on the basis of cell type and their position in relation to stimuli they sense. Receptors can further be classified functionally on the basis of the transduction of stimuli, or how the mechanical stimulus, light, or chemical changed the cell membrane potential.[4]
One way to classify receptors is based on their location relative to the stimuli. An exteroceptor is a receptor that is located near a stimulus of the external environment, such as the somatosensory receptors that are located in the skin. An interoceptor is one that interprets stimuli from internal organs and tissues, such as the receptors that sense the increase in blood pressure in the aorta or carotid sinus.[4]
The cells that interpret information about the environment can be either (1) a neuron that has a free nerve ending, with dendrites embedded in tissue that would receive a sensation; (2) a neuron that has an encapsulated ending in which the sensory nerve endings are encapsulated in connective tissue that enhances their sensitivity; or (3) a specialized receptor cell, which has distinct structural components that interpret a specific type of stimulus. The pain and temperature receptors in the dermis of the skin are examples of neurons that have free nerve endings (1). Also located in the dermis of the skin are lamellated corpuscles, neurons with encapsulated nerve endings that respond to pressure and touch (2). The cells in the retina that respond to light stimuli are an example of a specialized receptor (3), a photoreceptor.[4]
A transmembrane protein receptor is a protein in the cell membrane that mediates a physiological change in a neuron, most often through the opening of ion channels or changes in the cell signaling processes. Transmembrane receptors are activated by chemicals called ligands. For example, a molecule in food can serve as a ligand for taste receptors. Other transmembrane proteins, which are not accurately called receptors, are sensitive to mechanical or thermal changes. Physical changes in these proteins increase ion flow across the membrane, and can generate an action potential or a graded potential in the sensory neurons.[4]
A third classification of receptors is by how the receptor transduces stimuli into membrane potential changes. Stimuli are of three general types. Some stimuli are ions and macromolecules that affect transmembrane receptor proteins when these chemicals diffuse across the cell membrane. Some stimuli are physical variations in the environment that affect receptor cell membrane potentials. Other stimuli include the electromagnetic radiation from visible light. For humans, the only electromagnetic energy that is perceived by our eyes is visible light. Some other organisms have receptors that humans lack, such as the heat sensors of snakes, the ultraviolet light sensors of bees, or magnetic receptors in migratory birds.[4]
Receptor cells can be further categorized on the basis of the type of stimuli they transduce. The different functional receptor cell types are mechanoreceptors, photoreceptors, chemoreceptors (osmoreceptor), thermoreceptors, and nociceptors. Physical stimuli, such as pressure and vibration, as well as the sensation of sound and body position (balance), are interpreted through a mechanoreceptor. Photoreceptors convert light (visible electromagnetic radiation) into signals. Chemical stimuli can be interpreted by a chemoreceptor that interprets chemical stimuli, such as an object's taste or smell, while osmoreceptors respond to the chemical solute concentrations of body fluids. Nociception (pain) interprets the presence of tissue damage, from sensory information from mechano-, chemo-, and thermoreceptors.[18] Another physical stimulus that has its own type of receptor is temperature, which is sensed through a thermoreceptor that is either sensitive to temperatures above (heat) or below (cold) normal body temperature.[4]
Each sense organ (eyes or nose, for instance) requires a minimal amount of stimulation in order to detect a stimulus. This minimum amount of stimulus is called the absolute threshold.[2] The absolute threshold is defined as the minimum amount of stimulation necessary for the detection of a stimulus 50% of the time.[1] Absolute threshold is measured by using a method called signal detection. This process involves presenting stimuli of varying intensities to a subject in order to determine the level at which the subject can reliably detect stimulation in a given sense.[2]
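A minimal sketch of how such a 50% threshold could be estimated, assuming hypothetical detection data rather than anything reported here: detection proportions are collected at several intensities and the threshold is found by linear interpolation.

    # Estimating an absolute threshold as the intensity detected 50% of the time.
    # The intensities and detection proportions are hypothetical example data.
    intensities = [1, 2, 3, 4, 5, 6]                      # arbitrary stimulus units
    proportion_detected = [0.05, 0.15, 0.40, 0.65, 0.90, 0.98]

    def absolute_threshold(xs, ps, criterion=0.5):
        """Interpolate the intensity at which detection first reaches the criterion."""
        for (x0, p0), (x1, p1) in zip(zip(xs, ps), zip(xs[1:], ps[1:])):
            if p0 <= criterion <= p1:
                return x0 + (criterion - p0) * (x1 - x0) / (p1 - p0)
        raise ValueError("criterion not bracketed by the data")

    print(absolute_threshold(intensities, proportion_detected))  # about 3.4 units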
Differential threshold or just noticeable difference (JND) is the smallest detectable difference between two stimuli, or the smallest difference in stimuli that can be judged to be different from each other.[1] Weber's Law is an empirical law that states that the difference threshold is a constant fraction of the comparison stimulus.[1] According to Weber's Law, bigger stimuli require larger differences to be noticed.[2]
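Stated as a formula (a standard formulation, with k as the Weber fraction and I the intensity of the comparison stimulus):

    \Delta I = k \cdot I

For illustration only: if k for lifted weights were about 0.02, an observer would need roughly a 2 g increment on a 100 g standard, but roughly a 20 g increment on a 1000 g standard, before the difference is noticed.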
Magnitude estimation is a psychophysical method in which subjects assign perceived values to given stimuli. The relationship between stimulus intensity and perceptive intensity is described by Stevens's power law.[1]
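In its standard form the law relates perceived magnitude ψ to physical intensity φ through a power function (symbols as conventionally used, not taken from this text):

    \psi = k \cdot \varphi^{a}

where a is an exponent that depends on the sensory modality (greater than 1 for some modalities, less than 1 for others) and k is a scaling constant.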
Signal detection theory quantifies the experience of the subject to the presentation of a stimulus in the presence of noise. There is internal noise and there is external noise when it comes to signal detection. The internal noise originates from static in the nervous system. For example, an individual with closed eyes in a dark room still sees something - a blotchy pattern of grey with intermittent brighter flashes - and this is internal noise. External noise is the result of noise in the environment that can interfere with the detection of the stimulus of interest. Noise is only a problem if the magnitude of the noise is large enough to interfere with signal collection. The nervous system calculates a criterion, or an internal threshold, for the detection of a signal in the presence of noise. If a signal is judged to be above the criterion, and thus differentiated from the noise, the signal is sensed and perceived. Errors in signal detection can potentially lead to false positives and false negatives. The sensory criterion might be shifted based on the importance of detecting the signal. Shifting of the criterion may influence the likelihood of false positives and false negatives.[1]
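The criterion, false positives, and false negatives discussed above are commonly summarized with the sensitivity index d′ and the criterion measure c. The sketch below computes both from hypothetical trial counts; the numbers and variable names are illustrative, not taken from this text.

    # Sensitivity (d') and criterion (c) from hypothetical signal-detection counts.
    from statistics import NormalDist

    z = NormalDist().inv_cdf  # inverse of the standard normal CDF

    hits, misses = 40, 10                          # signal-present trials
    false_alarms, correct_rejections = 5, 45       # noise-only trials

    hit_rate = hits / (hits + misses)                              # 0.80
    fa_rate = false_alarms / (false_alarms + correct_rejections)   # 0.10

    d_prime = z(hit_rate) - z(fa_rate)             # separation of signal from noise
    criterion = -(z(hit_rate) + z(fa_rate)) / 2    # response bias toward "yes" or "no"

    print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")

A higher d′ means the signal is easier to separate from internal and external noise, while shifting the criterion trades false positives against false negatives without changing d′.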
Subjective visual and auditory experiences appear to be similar across human subjects. The same cannot be said about taste. For example, there is a molecule called propylthiouracil (PROP) that some humans experience as bitter, some as almost tasteless, while others experience it as somewhere between tasteless and bitter. There is a genetic basis for this difference in perception given the same sensory stimulus. This subjective difference in taste perception has implications for individuals' food preferences, and consequently, health.[1]
When a stimulus is constant and unchanging, perceptual sensory adaptation occurs. During this process, the subject becomes less sensitive to the stimulus.[2]
Biological auditory (hearing), vestibular and spatial, and visual systems (vision) appear to break down real-world complex stimuli into sine wave components, through the mathematical process called Fourier analysis. Many neurons have a strong preference for certain sine frequency components in contrast to others. The way that simpler sounds and images are encoded during sensation can provide insight into how perception of real-world objects happens.[1]
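As an illustration of the kind of decomposition described here, the sketch below builds a compound tone from two sine waves and recovers their frequencies with a discrete Fourier transform; the sample rate and frequencies are arbitrary example values.

    # Decomposing a compound waveform into its sine components (Fourier analysis).
    import numpy as np

    sample_rate = 8000                        # samples per second
    t = np.arange(0, 1.0, 1 / sample_rate)    # one second of time points

    # Example stimulus: a 440 Hz tone plus a quieter 1000 Hz tone.
    signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

    # The two largest spectral peaks recover the original components.
    peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
    print(peaks)  # [440.0, 1000.0]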
Perception occurs when nerves that lead from the sensory organs (e.g. eye) to the brain are stimulated, even if that stimulation is unrelated to the target signal of the sensory organ. For example, in the case of the eye, it does not matter whether light or something else stimulates the optic nerve; that stimulation will result in visual perception, even if there was no visual stimulus to begin with. (To prove this point to yourself (and if you are a human), close your eyes (preferably in a dark room) and press gently on the outside corner of one eye through the eyelid. You will see a visual spot toward the inside of your visual field, near your nose.)[1]
All stimuli received by the receptors are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area (cortex) of the brain. Just as different nerves are dedicated to sensory and motor tasks, different areas of the brain (cortices) are similarly dedicated to different sensory and perceptual tasks. More complex processing is accomplished across primary cortical regions that spread beyond the primary cortices. Every nerve, sensory or motor, has its own signal transmission speed. For example, nerves in the frog's legs have a 90 ft/s (99 km/h) signal transmission speed, while sensory nerves in humans transmit sensory information at speeds between 165 ft/s (181 km/h) and 330 ft/s (362 km/h).[1]
Perceptual experience is often multimodal. Multimodality integrates different senses into one unified perceptual experience. Information from one sense has the potential to influence how information from another is perceived.[2] Multimodal perception is qualitatively different from unimodal perception. There has been a growing body of evidence since the mid-1990s on the neural correlates of multimodal perception.[20]
Historical inquiries into the underlying mechanisms of sensation and perception have led early researchers to subscribe to various philosophical interpretations of perception and the mind, including panpsychism, dualism, and materialism. The majority of modern scientists who study sensation and perception take on a materialistic view of the mind.[1]
Some examples of human absolute thresholds for the 9-21 external senses.[21]
Humans respond more strongly to multimodal stimuli compared to the sum of each single modality together, an effect called the superadditive effect of multisensory integration.[2] Neurons that respond to both visual and auditory stimuli have been identified in the superior temporal sulcus.[20] Additionally, multimodal “what” and “where” pathways have been proposed for auditory and tactile stimuli.[22]
External receptors that respond to stimuli from outside the body are called exteroceptors.[23] Human external sensation is based on the sensory organs of the eyes, ears, skin, vestibular system, nose, and mouth, which contribute, respectively, to the sensory perceptions of vision, hearing, touch, spatial orientation, smell, and taste. Smell and taste are both responsible for identifying molecules and thus both are types of chemoreceptors. Both olfaction (smell) and gustation (taste) require the transduction of chemical stimuli into electrical potentials.[2][1]
The visual system, or sense of sight, is based on the transduction of light stimuli received through the eyes and contributes to visual perception. The visual system detects light on photoreceptors in the retina of each eye that generates electrical nerve impulses for the perception of varying colors and brightness. There are two types of photoreceptors: rods and cones. Rods are very sensitive to light but do not distinguish colors. Cones distinguish colors but are less sensitive to dim light.[4]
At the molecular level, visual stimuli cause changes in the photopigment molecule that lead to changes in membrane potential of the photoreceptor cell. A single unit of light is called a photon, which is described in physics as a packet of energy with properties of both a particle and a wave. The energy of a photon is represented by its wavelength, with each wavelength of visible light corresponding to a particular color. Visible light is electromagnetic radiation with a wavelength between 380 and 720 nm. Wavelengths of electromagnetic radiation longer than 720 nm fall into the infrared range, whereas wavelengths shorter than 380 nm fall into the ultraviolet range. Light with a wavelength of 380 nm is blue whereas light with a wavelength of 720 nm is dark red. All other colors fall between red and blue at various points along the wavelength scale.[4]
The three types of cone opsins, being sensitive to different wavelengths of light, provide us with color vision. By comparing the activity of the three different cones, the brain can extract color information from visual stimuli. For example, a bright blue light that has a wavelength of approximately 450 nm would activate the “red” cones minimally, the “green” cones marginally, and the “blue” cones predominantly. The relative activation of the three different cones is calculated by the brain, which perceives the color as blue. However, cones cannot react to low-intensity light, and rods do not sense the color of light. Therefore, our low-light vision is—in essence—in grayscale. In other words, in a dark room, everything appears as a shade of gray. If you think that you can see colors in the dark, it is most likely because your brain knows what color something is and is relying on that memory.[4]
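A toy model of that comparison, with each cone's sensitivity approximated by a Gaussian curve; the peak wavelengths (~440, ~530, ~560 nm) and the shared bandwidth are rough textbook-style approximations used only for illustration, not measured data.

    # Toy sketch: relative activation of S, M, and L cones at one wavelength.
    import math

    CONE_PEAKS_NM = {"S (blue)": 440, "M (green)": 530, "L (red)": 560}
    BANDWIDTH_NM = 50  # assumed width of each sensitivity curve

    def cone_response(wavelength_nm, peak_nm):
        """Gaussian approximation of a cone's sensitivity at one wavelength."""
        return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * BANDWIDTH_NM ** 2))

    stimulus = 450  # nm, the bright blue light from the example above
    for cone, peak in CONE_PEAKS_NM.items():
        print(f"{cone}: {cone_response(stimulus, peak):.2f}")
    # S responds most strongly, M marginally, L minimally -> perceived as blue.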
There is some disagreement as to whether the visual system consists of one, two, or three submodalities. Neuroanatomists generally regard it as two submodalities, given that different receptors are responsible for the perception of color and brightness. Some argue[citation needed] that stereopsis, the perception of depth using both eyes, also constitutes a sense, but it is generally regarded as a cognitive (that is, post-sensory) function of the visual cortex of the brain where patterns and objects in images are recognized and interpreted based on previously learned information. This is called visual memory.
The inability to see is called blindness. Blindness may result from damage to the eyeball, especially to the retina, damage to the optic nerve that connects each eye to the brain, and/or from stroke (infarcts in the brain). Temporary or permanent blindness can be caused by poisons or medications. People who are blind from degradation or damage to the visual cortex, but still have functional eyes, are actually capable of some level of vision and reaction to visual stimuli but not a conscious perception; this is known as blindsight. People with blindsight are usually not aware that they are reacting to visual sources, and instead just unconsciously adapt their behavior to the stimulus.
On February 14, 2013 researchers developed a neural implant that gives rats the ability to sense infrared light which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities.[24]
Visual Perception in Psychology
According to Gestalt Psychology, people perceive the whole of something even if it is not there. The Gestalt’s Law of Organization states that people have seven factors that help to group what is seen into patterns or groups: Common Fate, Similarity, Proximity, Closure, Symmetry, Continuity, and Past Experience.[25]
The Law of Common Fate says that objects are led along the smoothest path; people follow the trend of motion as the lines or dots flow.[26]
The Law of Similarity refers to the grouping of images or objects that are similar to each other in some aspect. This could be due to shade, color, size, shape, or other distinguishable qualities.[27]
The Law of Proximity states that our minds group objects based on how close they are to each other. We may see 42 objects as a single group, but we can also perceive three groups of two lines with seven objects in each line.[26]
The Law of Closure is the idea that we as humans still see a full picture even if there are gaps within that picture. There could be gaps or parts missing from a section of a shape, but we would still perceive the shape as whole.[27]
The Law of Symmetry refers to a person's preference to see symmetry around a central point. An example would be when we use parentheses in writing. We tend to perceive all of the words in the parentheses as one section instead of individual words within the parentheses.[27]
The Law of Continuity tells us that objects are grouped together by their elements and then perceived as a whole. This usually happens when we see overlapping objects; we see the overlapping objects with no interruptions.[27]
The Law of Past Experience refers to the tendency humans have to categorize objects according to past experience under certain circumstances. If two objects are usually perceived together or within close proximity of each other, the Law of Past Experience applies.[26]
Hearing, or audition, is the transduction of sound waves into a neural signal that is made possible by the structures of the ear. The large, fleshy structure on the lateral aspect of the head is known as the auricle. At the end of the auditory canal is the tympanic membrane, or ear drum, which vibrates after it is struck by sound waves. The auricle, ear canal, and tympanic membrane are often referred to as the external ear. The middle ear consists of a space spanned by three small bones called the ossicles. The three ossicles are the malleus, incus, and stapes, which are Latin names that roughly translate to hammer, anvil, and stirrup. The malleus is attached to the tympanic membrane and articulates with the incus. The incus, in turn, articulates with the stapes. The stapes is then attached to the inner ear, where the sound waves will be transduced into a neural signal. The middle ear is connected to the pharynx through the Eustachian tube, which helps equilibrate air pressure across the tympanic membrane. The tube is normally closed but will pop open when the muscles of the pharynx contract during swallowing or yawning.[4]
Mechanoreceptors in the inner ear turn motion into electrical nerve impulses. Because sound consists of vibrations propagating through a medium such as air, hearing is a mechanical sense: the vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion of the fibers within a range of about 20 to 20,000 hertz,[28] with substantial variation between individuals. Hearing at high frequencies declines with age. Inability to hear is called deafness or hearing impairment. Sound can also be detected as vibrations conducted through the body by tactition; lower audible frequencies are detected this way. Some deaf people are able to determine the direction and location of vibrations picked up through the feet.[29]
Studies pertaining to audition increased in number toward the end of the nineteenth century. During this time, many laboratories in the United States began to create new models, diagrams, and instruments pertaining to the ear.[30]
There is a branch of cognitive psychology dedicated strictly to audition, called auditory cognitive psychology. Its main aim is to understand why humans are able to use sound in thinking without actually vocalizing it.[31]
Related to auditory cognitive psychology is psychoacoustics, which is oriented more toward people interested in music.[32] Haptics, a word used to refer to both taction and kinesthesia, has many parallels with psychoacoustics.[32] Most research in these two fields focuses on the instrument, the listener, and the player of the instrument.[32]
Somatosensation is considered a general sense, as opposed to the special senses discussed in this section. Somatosensation is the group of sensory modalities that are associated with touch and interoception. The modalities of somatosensation include pressure, vibration, light touch, tickle, itch, temperature, pain, and kinesthesia.[4] Somatosensation, also called tactition (adjectival form: tactile), is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.). The touch sense of itching caused by insect bites or allergies involves special itch-specific neurons in the skin and spinal cord.[33] The loss or impairment of the ability to feel anything touched is called tactile anesthesia. Paresthesia is a sensation of tingling, pricking, or numbness of the skin that may result from nerve damage and may be permanent or temporary.
Two types of somatosensory signals that are transduced by free nerve endings are pain and temperature. These two modalities use thermoreceptors and nociceptors to transduce temperature and pain stimuli, respectively. Temperature receptors are stimulated when local temperatures differ from body temperature. Some thermoreceptors are sensitive to just cold and others to just heat. Nociception is the sensation of potentially damaging stimuli. Mechanical, chemical, or thermal stimuli beyond a set threshold will elicit painful sensations. Stressed or damaged tissues release chemicals that activate receptor proteins in the nociceptors. For example, the sensation of heat associated with spicy foods involves capsaicin, the active molecule in hot peppers.[4]
Low frequency vibrations are sensed by mechanoreceptors called Merkel cells, also known as type I cutaneous mechanoreceptors. Merkel cells are located in the stratum basale of the epidermis. Deep pressure and vibration is transduced by lamellated (Pacinian) corpuscles, which are receptors with encapsulated endings found deep in the dermis, or subcutaneous tissue. Light touch is transduced by the encapsulated endings known as tactile (Meissner) corpuscles. Follicles are also wrapped in a plexus of nerve endings known as the hair follicle plexus. These nerve endings detect the movement of hair at the surface of the skin, such as when an insect may be walking along the skin. Stretching of the skin is transduced by stretch receptors known as bulbous corpuscles. Bulbous corpuscles are also known as Ruffini corpuscles, or type II cutaneous mechanoreceptors.[4]
The heat receptors are sensitive to infrared radiation and can occur in specialized organs, for instance in pit vipers. The thermoceptors in the skin are quite different from the homeostatic thermoceptors in the brain (hypothalamus), which provide feedback on internal body temperature.
The vestibular sense, or sense of balance (equilibrium), is the sense that contributes to the perception of balance (equilibrium), spatial orientation, direction, or acceleration (equilibrioception). Along with audition, the inner ear is responsible for encoding information about equilibrium. A similar mechanoreceptor—a hair cell with stereocilia—senses head position, head movement, and whether our bodies are in motion. These cells are located within the vestibule of the inner ear. Head position is sensed by the utricle and saccule, whereas head movement is sensed by the semicircular canals. The neural signals generated in the vestibular ganglion are transmitted through the vestibulocochlear nerve to the brain stem and cerebellum.[4]
The semicircular canals are three ring-like extensions of the vestibule. One is oriented in the horizontal plane, whereas the other two are oriented in the vertical plane. The anterior and posterior vertical canals are oriented at approximately 45 degrees relative to the sagittal plane. The base of each semicircular canal, where it meets with the vestibule, connects to an enlarged region known as the ampulla. The ampulla contains the hair cells that respond to rotational movement, such as turning the head while saying “no.” The stereocilia of these hair cells extend into the cupula, a membrane that attaches to the top of the ampulla. As the head rotates in a plane parallel to the semicircular canal, the fluid lags, deflecting the cupula in the direction opposite to the head movement. The semicircular canals contain several ampullae, with some oriented horizontally and others oriented vertically. By comparing the relative movements of both the horizontal and vertical ampullae, the vestibular system can detect the direction of most head movements within three-dimensional (3D) space.[4]
The vestibular nerve conducts information from sensory receptors in three ampullae that sense motion of fluid in three semicircular canals caused by three-dimensional rotation of the head. The vestibular nerve also conducts information from the utricle and the saccule, which contain hair-like sensory receptors that bend under the weight of otoliths (small crystals of calcium carbonate), providing the inertia needed to detect head rotation, linear acceleration, and the direction of gravitational force.
The gustatory system, or the sense of taste, is the sensory system that is partially responsible for the perception of taste (flavor).[34] A few recognized submodalities exist within taste: sweet, salty, sour, bitter, and umami. Very recent research has suggested that there may also be a sixth taste submodality for fats, or lipids.[4] The sense of taste is often confused with the perception of flavor, which is the result of the multimodal integration of gustatory (taste) and olfactory (smell) sensations.[35]
Within the structure of the lingual papillae are taste buds that contain specialized gustatory receptor cells for the transduction of taste stimuli. These receptor cells are sensitive to the chemicals contained within foods that are ingested, and they release neurotransmitters based on the amount of the chemical in the food. Neurotransmitters from the gustatory cells can activate sensory neurons in the facial, glossopharyngeal, and vagus cranial nerves.[4]
Salty and sour taste submodalities are triggered by the cations Na+ and H+, respectively. The other taste modalities result from food molecules binding to a G protein–coupled receptor. A G protein signal transduction system ultimately leads to depolarization of the gustatory cell. The sweet taste is the sensitivity of gustatory cells to the presence of glucose (or sugar substitutes) dissolved in the saliva. Bitter taste is similar to sweet in that food molecules bind to G protein–coupled receptors. The taste known as umami is often referred to as the savory taste. Like sweet and bitter, it is based on the activation of G protein–coupled receptors by a specific molecule.[4]
Once the gustatory cells are activated by the taste molecules, they release neurotransmitters onto the dendrites of sensory neurons. These neurons are part of the facial and glossopharyngeal cranial nerves, as well as a component within the vagus nerve dedicated to the gag reflex. The facial nerve connects to taste buds in the anterior third of the tongue. The glossopharyngeal nerve connects to taste buds in the posterior two thirds of the tongue. The vagus nerve connects to taste buds in the extreme posterior of the tongue, verging on the pharynx, which are more sensitive to noxious stimuli such as bitterness.[4]
Flavor depends on odor, texture, and temperature as well as on taste. Humans receive tastes through sensory organs called taste buds, or gustatory calyculi, concentrated on the upper surface of the tongue. Other tastes such as calcium[36][37] and free fatty acids[38] may also be basic tastes but have yet to receive widespread acceptance. The inability to taste is called ageusia.
A rare phenomenon involving the gustatory sense is lexical-gustatory synesthesia, in which people can “taste” words.[39] People with this condition report flavor sensations from foods they are not actually eating when they read, hear, or even imagine words; they report not only simple flavors, but textures, complex flavors, and temperatures as well.[40]
Like the sense of taste, the sense of smell, or the olfactory system, is also responsive to chemical stimuli.[4] Unlike taste, there are hundreds of olfactory receptors (388 according to one source), each binding to a particular molecular feature. Odor molecules possess a variety of features and, thus, excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what humans perceive as the molecule's smell.[41]
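As a loose illustration of this combinatorial coding, the Python sketch below represents each odorant as a pattern of receptor activations and compares two patterns; the receptor names and values are invented for illustration only.

    # Hypothetical activation patterns across a few olfactory receptor types.
    odor_a = {"OR1": 0.8, "OR2": 0.1, "OR3": 0.5}
    odor_b = {"OR1": 0.7, "OR2": 0.2, "OR3": 0.4}

    def pattern_distance(a: dict, b: dict) -> float:
        """Smaller distance means more similar activation patterns
        (and, loosely, more similar perceived smells)."""
        return sum((a[r] - b[r]) ** 2 for r in a) ** 0.5

    print(pattern_distance(odor_a, odor_b))  # small value -> similar "smell signatures"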
The olfactory receptor neurons are located in a small region within the superior nasal cavity. This region is referred to as the olfactory epithelium and contains bipolar sensory neurons. Each olfactory sensory neuron has dendrites that extend from the apical surface of the epithelium into the mucus lining the cavity. As airborne molecules are inhaled through the nose, they pass over the olfactory epithelial region and dissolve into the mucus. These odorant molecules bind to proteins that keep them dissolved in the mucus and help transport them to the olfactory dendrites. The odorant–protein complex binds to a receptor protein within the cell membrane of an olfactory dendrite. These receptors are G protein–coupled, and will produce a graded membrane potential in the olfactory neurons.[4]
In the brain, olfaction is processed by the olfactory cortex. Olfactory receptor neurons in the nose differ from most other neurons in that they die and regenerate on a regular basis. The inability to smell is called anosmia. Some neurons in the nose are specialized to detect pheromones.[42] Loss of the sense of smell can result in food tasting bland. A person with an impaired sense of smell may require additional spice and seasoning levels for food to be tasted. Anosmia may also be related to some presentations of mild depression, because the loss of enjoyment of food may lead to a general sense of despair. The ability of olfactory neurons to replace themselves decreases with age, leading to age-related anosmia. This explains why some elderly people salt their food more than younger people do.[4]
Olfactory dysfunction can be caused by age, exposure to toxic chemicals, viral infections, epilepsy, neurodegenerative disease, head trauma, or another underlying disorder.[5]
As studies of olfaction have continued, a positive correlation has been found between its dysfunction or degeneration and early signs of Alzheimer's disease and sporadic Parkinson's disease. Many patients do not notice the decline in smell before being tested. In Parkinson's disease and Alzheimer's disease, an olfactory deficit is present in 85 to 90% of early-onset cases.[5] There is evidence that the decline of this sense can precede Alzheimer's or Parkinson's disease by a couple of years. Although the deficit is present in these two diseases, as well as others, its severity or magnitude varies with each disease. This has prompted suggestions that olfactory testing could be used in some cases to aid in differentiating among neurodegenerative diseases.[5]
Those who were born without a sense of smell, or whose sense of smell is damaged, usually complain about one or more of three things. The olfactory sense serves as a warning against spoiled food, so a damaged or absent sense of smell can lead to more frequent food poisoning. It can also strain relationships, or create insecurities within them, because of the inability to smell one's own body odor. Lastly, smell influences how food and drink taste; when the olfactory sense is damaged, the satisfaction from eating and drinking is diminished.
Proprioception, the kinesthetic sense, provides the parietal cortex of the brain with information on the movement and relative positions of the parts of the body. Neurologists test this sense by telling patients to close their eyes and touch their own nose with the tip of a finger. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses. Proprioception and touch are related in subtle ways, and their impairment results in surprising and deep deficits in perception and action.[43]
Nociception (physiological pain) signals nerve-damage or damage to tissue. The three types of pain receptors are cutaneous (skin), somatic (joints and bones), and visceral (body organs). It was previously believed that pain was simply the overloading of pressure receptors, but research in the first half of the 20th century indicated that pain is a distinct phenomenon that intertwines with all of the other senses, including touch. Pain was once considered an entirely subjective experience, but recent studies show that pain is registered in the anterior cingulate gyrus of the brain.[44] The main function of pain is to attract our attention to dangers and motivate us to avoid them. For example, humans avoid touching a sharp needle, or hot object, or extending an arm beyond a safe limit because it is dangerous, and thus hurts. Without pain, people could do many dangerous things without being aware of the dangers.
An internal sensation and perception also known as interoception[45] is "any sense that is normally stimulated from within the body".[46] These involve numerous sensory receptors in internal organs. Interoception is thought to be atypical in clinical conditions such as alexithymia.[47]
Some examples of specific receptors are:
Other living organisms have receptors to sense the world around them, including many of the senses listed above for humans. However, the mechanisms and capabilities vary widely.
An example of smell in non-mammals is that of sharks, which combine their keen sense of smell with timing to determine the direction of a smell; they follow the nostril that first detected the smell.[54] Insects have olfactory receptors on their antennae. The degree to which non-human animals can smell better than humans, however, is not precisely known.[55]
Many animals (salamanders, reptiles, mammals) have a vomeronasal organ[56] that is connected with the mouth cavity. In mammals it is mainly used to detect pheromones of marked territory, trails, and sexual state. Reptiles like snakes and monitor lizards make extensive use of it as a smelling organ by transferring scent molecules to the vomeronasal organ with the tips of the forked tongue. In reptiles the vomeronasal organ is commonly referred to as Jacobson's organ. In mammals, it is often associated with a special behavior called flehmen, characterized by uplifting of the lips. The organ is vestigial in humans, because associated neurons that provide sensory input have not been found.[57]
Flies and butterflies have taste organs on their feet, allowing them to taste anything they land on. Catfish have taste organs across their entire bodies, and can taste anything they touch, including chemicals in the water.[58]
Cats have the ability to see in low light, which is due to muscles surrounding their irides–which contract and expand their pupils–as well as to the tapetum lucidum, a reflective membrane that optimizes the image.
Pit vipers, pythons and some boas have organs that allow them to detect infrared light, such that these snakes are able to sense the body heat of their prey. The common vampire bat may also have an infrared sensor on its nose.[59] It has been found that birds and some other animals are tetrachromats and have the ability to see in the ultraviolet down to 300 nanometers. Bees and dragonflies[60] are also able to see in the ultraviolet. Mantis shrimps can perceive both polarized light and multispectral images and have twelve distinct kinds of color receptors, unlike humans which have three kinds and most mammals which have two kinds.[61]
Cephalopods have the ability to change color using chromatophores in their skin. Researchers believe that opsins in the skin can sense different wavelengths of light and help the creatures choose a coloration that camouflages them, in addition to light input from the eyes.[62] Other researchers hypothesize that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into color vision,[63] explaining pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colorful mating displays.[64] Some cephalopods can distinguish the polarization of light.
Many invertebrates have a statocyst, which is a sensor for acceleration and orientation that works very differently from the mammalian semicircular canals.
In addition, some animals have senses that humans do not, including the following:
Magnetoception (or magnetoreception) is the ability to detect the direction one is facing based on the Earth's magnetic field. Directional awareness is most commonly observed in birds, which rely on their magnetic sense to navigate during migration.[65][66][67][68] It has also been observed in insects such as bees. Cattle make use of magnetoception to align themselves in a north–south direction.[69] Magnetotactic bacteria build miniature magnets inside themselves and use them to determine their orientation relative to the Earth's magnetic field.[70][71] There has been some recent (tentative) research suggesting that rhodopsin in the human eye, which responds particularly well to blue light, can facilitate magnetoception in humans.[72]
Certain animals, including bats and cetaceans, have the ability to determine orientation to other objects through interpretation of reflected sound (like sonar). They most often use this to navigate through poor lighting conditions or to identify and track prey. It is currently uncertain whether this is simply an extremely developed post-sensory interpretation of auditory perceptions or whether it actually constitutes a separate sense. Resolution of the issue will require brain scans of animals while they actually perform echolocation, a task that has proven difficult in practice.
Blind people report they are able to navigate and in some cases identify an object by interpreting reflected sounds (especially their own footsteps), a phenomenon known as human echolocation.
Electroreception (or electroception) is the ability to detect electric fields. Several species of fish, sharks, and rays have the capacity to sense changes in electric fields in their immediate vicinity. For cartilaginous fish this occurs through a specialized organ called the Ampullae of Lorenzini. Some fish passively sense changing nearby electric fields; some generate their own weak electric fields, and sense the pattern of field potentials over their body surface; and some use these electric field generating and sensing capacities for social communication. The mechanisms by which electroceptive fish construct a spatial representation from very small differences in field potentials involve comparisons of spike latencies from different parts of the fish's body.
The only orders of mammals that are known to demonstrate electroception are the dolphin and monotreme orders. Among these mammals, the platypus[73] has the most acute sense of electroception.
A dolphin can detect electric fields in water using electroreceptors in vibrissal crypts arrayed in pairs on its snout and which evolved from whisker motion sensors.[74] These electroreceptors can detect electric fields as weak as 4.6 microvolts per centimeter, such as those generated by contracting muscles and pumping gills of potential prey. This permits the dolphin to locate prey from the seafloor where sediment limits visibility and echolocation.
Spiders have been shown to detect electric fields to determine a suitable time to extend web for 'ballooning'.[75]
Body modification enthusiasts have experimented with magnetic implants to attempt to replicate this sense.[76] However, in general humans (and it is presumed other mammals) can detect electric fields only indirectly by detecting the effect they have on hairs. An electrically charged balloon, for instance, will exert a force on human arm hairs, which can be felt through tactition and identified as coming from a static charge (and not from wind or the like). This is not electroreception, as it is a post-sensory cognitive action.
Hygroreception is the ability to detect changes in the moisture content of the environment.[11][77]
The ability to sense infrared thermal radiation evolved independently in various families of snakes. Essentially, it allows these reptiles to "see" radiant heat at wavelengths between 5 and 30 μm to a degree of accuracy such that a blind rattlesnake can target vulnerable body parts of the prey at which it strikes.[78] It was previously thought that the organs evolved primarily as prey detectors, but it is now believed that it may also be used in thermoregulatory decision making.[79] The facial pit underwent parallel evolution in pitvipers and some boas and pythons, having evolved once in pitvipers and multiple times in boas and pythons.[80] The electrophysiology of the structure is similar between the two lineages, but they differ in gross structural anatomy. Most superficially, pitvipers possess one large pit organ on either side of the head, between the eye and the nostril (Loreal pit), while boas and pythons have three or more comparatively smaller pits lining the upper and sometimes the lower lip, in or between the scales. Those of the pitvipers are the more advanced, having a suspended sensory membrane as opposed to a simple pit structure. Within the family Viperidae, the pit organ is seen only in the subfamily Crotalinae: the pitvipers. The organ is used extensively to detect and target endothermic prey such as rodents and birds, and it was previously assumed that the organ evolved specifically for that purpose. However, recent evidence shows that the pit organ may also be used for thermoregulation. According to Krochmal et al., pitvipers can use their pits for thermoregulatory decision-making while true vipers (vipers who do not contain heat-sensing pits) cannot.
In spite of its detection of IR light, the pits' IR detection mechanism is not similar to photoreceptors – while photoreceptors detect light via photochemical reactions, the protein in the pits of snakes is in fact a temperature-sensitive ion channel. It senses infrared signals through a mechanism involving warming of the pit organ, rather than a chemical reaction to light.[81] This is consistent with the thin pit membrane, which allows incoming IR radiation to quickly and precisely warm a given ion channel and trigger a nerve impulse, as well as vascularize the pit membrane in order to rapidly cool the ion channel back to its original "resting" or "inactive" temperature.[81]
Pressure detection uses the organ of Weber, a system consisting of three appendages of vertebrae transferring changes in shape of the gas bladder to the middle ear. It can be used to regulate the buoyancy of the fish. Fish like the weather fish and other loaches are also known to respond to low pressure areas but they lack a swim bladder.
Current detection is a detection system of water currents, consisting mostly of vortices, found in the lateral line of fish and aquatic forms of amphibians. The lateral line is also sensitive to low-frequency vibrations. The mechanoreceptors are hair cells, the same mechanoreceptors for vestibular sense and hearing. It is used primarily for navigation, hunting, and schooling. The receptors of the electrical sense are modified hair cells of the lateral line system.
Polarized light direction/detection is used by bees to orient themselves, especially on cloudy days. Cuttlefish, some beetles, and mantis shrimp can also perceive the polarization of light. Most sighted humans can in fact learn to roughly detect large areas of polarization by an effect called Haidinger's brush, however this is considered an entoptic phenomenon rather than a separate sense.
Slit sensillae of spiders detect mechanical strain in the exoskeleton, providing information on force and vibrations.
By using a variety of sense receptors, plants sense light, temperature, humidity, chemical substances, chemical gradients, reorientation, magnetic fields, infections, tissue damage and mechanical pressure. The absence of a nervous system notwithstanding, plants interpret and respond to these stimuli by a variety of hormonal and cell-to-cell communication pathways that result in movement, morphological changes and physiological state alterations at the organism level, that is, result in plant behavior. Such physiological and cognitive functions are generally not believed to give rise to mental phenomena or qualia, however, as these are typically considered the product of nervous system activity. The emergence of mental phenomena from the activity of systems functionally or computationally analogous to that of nervous systems is, however, a hypothetical possibility explored by some schools of thought in the philosophy of mind field, such as functionalism and computationalism.
However, plants can perceive the world around them,[15] and might be able to emit airborne sounds similar to "screaming" when stressed. These noises are not detectable by human ears, but organisms with a hearing range that extends into ultrasonic frequencies, such as mice, bats, or perhaps other plants, could hear the plants' cries from as far as 15 feet (4.6 m) away.[82]
Machine perception is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them.[16][17][83] Computers take in and respond to their environment through attached hardware. Until recently, input was limited to a keyboard, joystick or a mouse, but advances in technology, both in hardware and software, have allowed computers to take in sensory input in a way similar to humans.[16][17]
In the time of William Shakespeare, there were commonly reckoned to be five wits or five senses.[85] At that time, the words "sense" and "wit" were synonyms,[85] so the senses were known as the five outward wits.[86][87] This traditional concept of five senses is common today.
The traditional five senses are enumerated as the "five material faculties" (pañcannaṃ indriyānaṃ avakanti) in Hindu literature. They appear in allegorical representation as early as in the Katha Upanishad (roughly 6th century BC), as five horses drawing the "chariot" of the body, guided by the mind as "chariot driver".
Depictions of the five traditional senses as allegory became a popular subject for seventeenth-century artists, especially among Dutch and Flemish Baroque painters. A typical example is Gérard de Lairesse's Allegory of the Five Senses (1668), in which each of the figures in the main group alludes to a sense: Sight is the reclining boy with a convex mirror, hearing is the cupid-like boy with a triangle, smell is represented by the girl with flowers, taste is represented by the woman with the fruit, and touch is represented by the woman holding the bird.
In Buddhist philosophy, Ayatana or "sense-base" includes the mind as a sense organ, in addition to the traditional five. This addition to the commonly acknowledged senses may arise from the psychological orientation involved in Buddhist thought and practice. The mind considered by itself is seen as the principal gateway to a different spectrum of phenomena that differ from the physical sense data. This way of viewing the human sense system indicates the importance of internal sources of sensation and perception that complements our experience of the external world.[citation needed]
en/1141.html.txt
ADDED
@@ -0,0 +1,143 @@
Circumcision is the removal of the foreskin from the human penis.[1][2] In the most common procedure, the foreskin is opened, adhesions are removed, and the foreskin is separated from the glans. After that, a circumcision device may be placed, and then the foreskin is cut off. Topical or locally injected anesthesia is used to reduce pain and physiologic stress.[3] The procedure is most often an elective surgery performed on babies and children for religious or cultural reasons.[4] Medically, circumcision is a treatment option for problematic cases of phimosis and balanoposthitis that do not resolve with other treatments, and for chronic urinary tract infections (UTIs).[5][6] It is contraindicated in cases of certain genital structure abnormalities or poor general health.[1][6]
The positions of the world's major medical organizations range from a belief that elective circumcision of babies and children carries significant risks and offers no medical benefits to a belief that the procedure has a modest health benefit that outweighs small risks.[7] No major medical organization recommends circumcising all males, and no major medical organization recommends banning the procedure.[7] Ethical and legal questions regarding informed consent and human rights have been raised over the circumcision of babies and children for non-medical reasons; for these reasons, the procedure is controversial.[8][9]
Male circumcision reduces the risk of HIV infection among heterosexual men in sub-Saharan Africa.[10][11] Consequently, the World Health Organization (WHO) recommends consideration of circumcision as part of a comprehensive HIV prevention program in areas with high rates of HIV.[12] The effectiveness of using circumcision to prevent HIV in the developed world is unclear;[13] however, there is some evidence that circumcision reduces HIV infection risk for men who have sex with men.[14] Circumcision is also associated with reduced rates of cancer-causing forms of human papillomavirus (HPV),[15][16] and UTIs.[3] It also decreases the risk of cancer of the penis via effectively curing phimosis.[3] Prevention of these conditions is not seen as a justification for routine circumcision of infants in the Western world.[5] Studies of other sexually transmitted infections also suggest that circumcision is protective, including for men who have sex with men.[17] A 2010 review found circumcisions performed by medical providers to have a typical complication rate of 1.5% for babies and 6% for older children, with few cases of severe complications.[18] Bleeding, infection, and the removal of either too much or too little foreskin are the most common acute complications. Meatal stenosis is the most common long term complication.[19] Complication rates are higher when the procedure is performed by an inexperienced operator, in unsterile conditions, or in older children.[18] Circumcision does not appear to have a negative impact on sexual function.[20][21]
An estimated one-third of males worldwide are circumcised.[4][18][22] Circumcision is most common among Muslims and Jews (among whom it is near-universal for religious reasons), and in the United States, parts of Southeast Asia, and Africa.[4][23] It is relatively rare for non-religious reasons in Europe, Latin America, parts of Southern Africa, and most of Asia.[4] The origin of circumcision is not known with certainty; the oldest documented evidence for it comes from ancient Egypt.[4][24] Various theories have been proposed as to its origin including as a religious sacrifice and as a rite of passage marking a boy's entrance into adulthood.[25] It is part of religious law in Judaism[26] and is an established practice in Islam, Coptic Christianity, and the Ethiopian Orthodox Church.[4][27][28] The word circumcision is from Latin circumcidere, meaning "to cut around".[4]
Neonatal circumcision is usually elected by the parents for non-medical reasons, such as religious beliefs or personal preferences, possibly driven by societal norms.[6] Outside the parts of Africa with high prevalence of HIV/AIDS, the positions of the world's major medical organizations on non-therapeutic neonatal circumcision range from considering it as having a modest net health benefit that outweighs small risks, to viewing it as having no benefit with significant risks for harm.[7] No major medical organization recommends universal neonatal circumcision, and no major medical organization calls for banning it either.[7] The Royal Dutch Medical Association, which expresses some of the strongest opposition to routine neonatal circumcision, argues that while there are valid reasons for banning it, doing so could lead parents who insist on the procedure to turn to poorly trained practitioners instead of medical professionals.[7][29] This argument to keep the procedure within the purview of medical professionals is found across all major medical organizations.[7] In addition, the organizations advise medical professionals to yield to some degree to parental preferences, which are commonly based upon cultural or religious views, in their decision to agree to circumcise.[7] The Danish College of General Practitioners states that circumcision should "only [be done] when medically needed, otherwise it is a case of mutilation."[30]
Circumcision may be used to treat pathological phimosis, refractory balanoposthitis and chronic or recurrent urinary tract infections (UTIs).[5][6] The WHO promotes circumcision to prevent female-to-male HIV transmission in countries with high rates of HIV.[12] The International AIDS Society-USA also suggests circumcision be discussed with men who have insertive anal sex with men, especially in regions where HIV is common.[31]
The finding that circumcision significantly reduces female-to-male HIV transmission has prompted medical organizations serving communities affected by endemic HIV/AIDS to promote circumcision as an additional method of controlling the spread of HIV.[7] In 2007 the WHO and the Joint United Nations Programme on HIV/AIDS (UNAIDS) recommended circumcision as part of a comprehensive program for prevention of HIV transmission in areas with high endemic rates of HIV, as long as the program includes "informed consent, confidentiality, and absence of coercion".[12]
Circumcision is contraindicated in infants with certain genital structure abnormalities, such as a misplaced urethral opening (as in hypospadias and epispadias), curvature of the head of the penis (chordee), or ambiguous genitalia, because the foreskin may be needed for reconstructive surgery. Circumcision is contraindicated in premature infants and those who are not clinically stable and in good health.[1][6][32] If an individual, child or adult, is known to have or has a family history of serious bleeding disorders (hemophilia), it is recommended that the blood be checked for normal coagulation properties before the procedure is attempted.[6][32]
The foreskin extends out from the base of the glans and covers the glans when the penis is flaccid. Proposed theories for the purpose of the foreskin are that it serves to protect the penis as the fetus develops in the mother's womb, that it helps to preserve moisture in the glans, and that it improves sexual pleasure. The foreskin may also be a pathway of infection for certain diseases. Circumcision removes the foreskin at its attachment to the base of the glans.[4]
For infant circumcision, devices such as the Gomco clamp, Plastibell and Mogen clamp are commonly used in the USA.[3] These follow the same basic procedure. First, the amount of foreskin to be removed is estimated. The practitioner opens the foreskin via the preputial orifice to reveal the glans underneath and ensures it is normal before bluntly separating the inner lining of the foreskin (preputial epithelium) from its attachment to the glans. The practitioner then places the circumcision device (this sometimes requires a dorsal slit), which remains until blood flow has stopped. Finally, the foreskin is amputated.[3] For older babies and adults, circumcision is often performed surgically without specialized instruments,[32] and alternatives such as Unicirc, Prepex or the Shang ring are available.[33]
The circumcision procedure causes pain, and for neonates this pain may interfere with mother-infant interaction or cause other behavioral changes,[34] so the use of analgesia is advocated.[3][35] Ordinary procedural pain may be managed in pharmacological and non-pharmacological ways. Pharmacological methods, such as localized or regional pain-blocking injections and topical analgesic creams, are safe and effective.[3][36][37] The ring block and dorsal penile nerve block (DPNB) are the most effective at reducing pain, and the ring block may be more effective than the DPNB. They are more effective than EMLA (eutectic mixture of local anesthetics) cream, which is more effective than a placebo.[36][37] Topical creams have been found to irritate the skin of low birth weight infants, so penile nerve block techniques are recommended in this group.[3]
For infants, non-pharmacological methods such as the use of a comfortable, padded chair and a sucrose or non-sucrose pacifier are more effective at reducing pain than a placebo,[37] but the American Academy of Pediatrics (AAP) states that such methods are insufficient alone and should be used to supplement more effective techniques.[3] A quicker procedure reduces duration of pain; use of the Mogen clamp was found to result in a shorter procedure time and less pain-induced stress than the use of the Gomco clamp or the Plastibell.[37] The available evidence does not indicate that post-procedure pain management is needed.[3] For adults, topical anesthesia, ring block, dorsal penile nerve block (DPNB) and general anesthesia are all options,[38] and the procedure requires four to six weeks of abstinence from masturbation or intercourse to allow the wound to heal.[32]
There is strong evidence that circumcision reduces the risk of men acquiring HIV infection in areas of the world with high rates of HIV.[10][11] Evidence among heterosexual men in sub-Saharan Africa shows an absolute decrease in risk of 1.8% which is a relative decrease of between 38% and 66% over two years,[11] and in this population studies rate it cost effective.[39] Whether it is of benefit in developed countries is undetermined.[13]
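To make the relationship between the absolute and relative figures above concrete, here is a small worked example in Python; the 2.8% baseline two-year risk is an assumed illustrative value chosen so that a 1.8 percentage-point absolute reduction corresponds to a relative reduction of roughly 64%, within the reported 38–66% range.

    baseline_risk = 0.028        # assumed control-group risk over two years (illustrative)
    absolute_reduction = 0.018   # 1.8 percentage points, as reported above

    circumcised_risk = baseline_risk - absolute_reduction      # 0.010, i.e. 1.0%
    relative_reduction = absolute_reduction / baseline_risk    # ~0.64, i.e. ~64%

    print(f"risk with circumcision: {circumcised_risk:.1%}")
    print(f"relative risk reduction: {relative_reduction:.0%}")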
There are plausible explanations based on human biology for how circumcision can decrease the likelihood of female-to-male HIV transmission. The superficial skin layers of the penis contain Langerhans cells, which are targeted by HIV; removing the foreskin reduces the number of these cells. When an uncircumcised penis is erect during intercourse, any small tears on the inner surface of the foreskin come into direct contact with the vaginal walls, providing a pathway for transmission. When an uncircumcised penis is flaccid, the pocket between the inside of the foreskin and the head of the penis provides an environment conducive to pathogen survival; circumcision eliminates this pocket. Some experimental evidence has been provided to support these theories.[40]
The WHO and the UNAIDS state that male circumcision is an efficacious intervention for HIV prevention, but should be carried out by well-trained medical professionals and under conditions of informed consent (parents' consent for their infant boys).[4][12][41] The WHO has judged circumcision to be a cost-effective public health intervention against the spread of HIV in Africa, although not necessarily more cost-effective than condoms.[4] The joint WHO/UNAIDS recommendation also notes that circumcision only provides partial protection from HIV and should not replace known methods of HIV prevention.[12]
Male circumcision provides only indirect HIV protection for heterosexual women.[3][42][43] It is unknown whether or not circumcision reduces transmission when men engage in anal sex with a female partner.[41][44] Some evidence supports its effectiveness at reducing HIV risk in men who have sex with men.[14]
Human papillomavirus (HPV) is the most common sexually transmitted infection, affecting both men and women. While most infections are asymptomatic and are cleared by the immune system, some types of the virus cause genital warts, and other types, if untreated, cause various forms of cancer, including cervical cancer and penile cancer. Genital warts and cervical cancer are the two most common problems resulting from HPV.[45]
Circumcision is associated with a reduced prevalence of oncogenic types of HPV infection, meaning that a randomly selected circumcised man is less likely to be found infected with cancer-causing types of HPV than an uncircumcised man.[46][47] It also decreases the likelihood of multiple infections.[16] As of 2012, there was no strong evidence that it reduces the rate of new HPV infection,[15][16][48] but the procedure is associated with increased clearance of the virus by the body,[15][16] which can account for the finding of reduced prevalence.[16]
Although genital warts are caused by a type of HPV, there is no statistically significant relationship between being circumcised and the presence of genital warts.[15][47][48]
Studies evaluating the effect of circumcision on the rates of other sexually transmitted infections have generally found it to be protective. A 2006 meta-analysis found that circumcision was associated with lower rates of syphilis, chancroid and possibly genital herpes.[49] A 2010 review found that circumcision reduced the incidence of HSV-2 (herpes simplex virus, type 2) infections by 28%.[50] The researchers found mixed results for protection against trichomonas vaginalis and chlamydia trachomatis, and no evidence of protection against gonorrhea or syphilis.[50] It may also possibly protect against syphilis in men who have sex with men.[51]
Phimosis is the inability to retract the foreskin over the glans penis.[52] At birth, the foreskin cannot be retracted due to adhesions between the foreskin and glans, and this is considered normal (physiological phimosis).[52] Over time the foreskin naturally separates from the glans, and a majority of boys are able to retract the foreskin by age three.[52] Less than one percent are still having problems at age 18.[52] If the inability to do so becomes problematic (pathological phimosis) circumcision is a treatment option.[5][53] This pathological phimosis may be due to scarring from the skin disease balanitis xerotica obliterans (BXO), repeated episodes of balanoposthitis or forced retraction of the foreskin.[54] Steroid creams are also a reasonable option and may prevent the need for surgery including in those with mild BXO.[54][55] The procedure may also be used to prevent the development of phimosis.[6] Phimosis is also a complication that can result from circumcision.[56]
An inflammation of the glans penis and foreskin is called balanoposthitis, and the condition affecting the glans alone is called balanitis.[57][58] Most cases of these conditions occur in uncircumcised males,[59] affecting 4–11% of that group.[60] The moist, warm space underneath the foreskin is thought to facilitate the growth of pathogens, particularly when hygiene is poor. Yeasts, especially Candida albicans, are the most common penile infection and are rarely identified in samples taken from circumcised males.[59] Both conditions are usually treated with topical antibiotics (metronidazole cream) and antifungals (clotrimazole cream) or low-potency steroid creams.[57][58] Circumcision is a treatment option for refractory or recurrent balanoposthitis, but in the twenty-first century the availability of the other treatments has made it less necessary.[57][58]
A UTI affects parts of the urinary system including the urethra, bladder, and kidneys. There is about a one percent risk of UTIs in boys under two years of age, and the majority of incidents occur in the first year of life. There is good but not ideal evidence that circumcision of babies reduces the incidence of UTIs in boys under two years of age, and there is fair evidence that the reduction in incidence is by a factor of 3–10 times (roughly 100 circumcisions prevent one UTI).[3][61][62] Circumcision is most likely to benefit boys who have a high risk of UTIs due to anatomical defects,[3] and may be used to treat recurrent UTIs.[5]
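A back-of-the-envelope calculation in Python, assuming the roughly 1% baseline risk and the 3- to 10-fold reduction quoted above, shows how a figure on the order of 100 circumcisions per UTI prevented arises.

    baseline_risk = 0.01              # ~1% UTI risk in boys under two years of age

    for factor in (3, 10):            # 3- to 10-fold reduction in incidence
        reduced_risk = baseline_risk / factor
        absolute_risk_reduction = baseline_risk - reduced_risk
        number_needed_to_treat = 1 / absolute_risk_reduction
        print(f"{factor}-fold reduction -> about {number_needed_to_treat:.0f} circumcisions per UTI prevented")
    # prints roughly 150 and 111, i.e. on the order of 100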
There is a plausible biological explanation for the reduction in UTI risk after circumcision. The orifice through which urine passes at the tip of the penis (the urinary meatus) hosts more urinary system disease-causing bacteria in uncircumcised boys than in circumcised boys, especially in those under six months of age. As these bacteria are a risk factor for UTIs, circumcision may reduce the risk of UTIs through a decrease in the bacterial population.[3][62]
Circumcision has a protective effect against the risks of penile cancer in men, and cervical cancer in the female sexual partners of heterosexual men. Penile cancer is rare, with about 1 new case per 100,000 people per year in developed countries, and higher incidence rates per 100,000 in sub-Saharan Africa (for example: 1.6 in Zimbabwe, 2.7 in Uganda and 3.2 in Eswatini).[63] The number of new cases is also high in some South American countries including Paraguay and Uruguay, at about 4.3 per 100,000.[64] It is least common in Israeli Jews—0.1 per 100,000—related in part to the very high rate of circumcision of babies.[65]
Penile cancer development can be detected in the carcinoma in situ (CIS) cancerous precursor stage and at the more advanced invasive squamous cell carcinoma stage.[3] Childhood or adolescent circumcision is associated with a reduced risk of invasive squamous cell carcinoma in particular.[3][63] There is an association between adult circumcision and an increased risk of invasive penile cancer; this is believed to be from men being circumcised as a treatment for penile cancer or a condition that is a precursor to cancer rather than a consequence of circumcision itself.[63] Penile cancer has been observed to be nearly eliminated in populations of males circumcised neonatally.[60]
Important risk factors for penile cancer include phimosis and HPV infection, both of which are mitigated by circumcision.[63] The mitigating effect circumcision has on the risk factor introduced by the possibility of phimosis is secondary, in that the removal of the foreskin eliminates the possibility of phimosis. This can be inferred from study results that show uncircumcised men with no history of phimosis are equally likely to have penile cancer as circumcised men.[3][63] Circumcision is also associated with a reduced prevalence of cancer-causing types of HPV in men[16] and a reduced risk of cervical cancer (which is caused by a type of HPV) in female partners of men.[6] As penile cancer is rare (and may become increasingly rare as HPV vaccination rates rise), and circumcision has risks, the practice is not considered to be valuable solely as a prophylactic measure against penile cancer in the United States.[3][60][66]
There is some evidence that circumcision is associated with lower risk of prostate cancer. A 2015 meta-analysis found a reduced risk of prostate cancer associated with circumcision in black men.[67] A 2016 meta-analysis found that men with prostate cancer were less likely to be circumcised.[68]
A 2017 systematic review found consistent evidence that male circumcision prior to heterosexual contact was associated with a decreased risk of cervical cancer, cervical dysplasia, HSV-2, chlamydia, and syphilis among women. The evidence was less consistent in regards to the potential association of circumcision with women's risk of HPV and HIV.[69]
Neonatal circumcision is generally safe when done by an experienced practitioner.[70][71] The most common acute complications are bleeding, infection and the removal of either too much or too little foreskin.[3][72] These complications occur in approximately 0.13% of procedures, with bleeding being the most common acute complication in the United States.[72] Minor complications are reported to occur in three percent of procedures.[70] Severe complications are rare.[73] A specific complication rate is difficult to determine due to scant data on complications and inconsistencies in their classification.[3] Complication rates are greater when the procedure is performed by an inexperienced operator, in unsterile conditions, or when the child is at an older age.[18] Significant acute complications happen rarely,[3][18] occurring in about 1 in 500 newborn procedures in the United States.[3] Severe to catastrophic complications, including death, are so rare that they are reported only as individual case reports.[3][71] Where a Plastibell device is used, the most common complication is retention of the device, occurring in around 3.5% of procedures.[19] Other possible complications include buried penis, chordee, phimosis, skin bridges, urethral fistulas, and meatal stenosis.[71][74] These complications may be partly avoided with proper technique, and are often treatable without requiring surgical revision.[71] The most common long-term complication is meatal stenosis, which is almost exclusively seen in circumcised children and is thought to be caused by ammonia-producing bacteria coming into contact with the meatus in circumcised infants.[19] It can be treated by meatotomy.[19]
Effective pain management should be used.[3] Inadequate pain relief may carry the risks of heightened pain response for newborns.[34] Newborns that experience pain due to being circumcised have different responses to vaccines given afterwards, with higher pain scores observed.[75] For adult men who have been circumcised, there is a risk that the circumcision scar may be tender.[76]
The question of how circumcision affects penile sensitivity and sexual satisfaction is controversial; some research has found a loss of sensation while other research has found enhanced sensation.[77] The highest quality evidence indicates that circumcision does not decrease the sensitivity of the penis, harm sexual function or reduce sexual satisfaction.[20][78][79] A 2013 systematic review found that circumcision did not appear to adversely affect sexual desire, pain with intercourse, premature ejaculation, time until ejaculation, erectile dysfunction or difficulties with orgasm.[80] However, the study found that the existing evidence is not very good.[80] A 2017 review found that circumcision did not affect premature ejaculation.[81] When it comes to sexual partners' experiences, circumcision has an unclear effect as it has not been well studied.[82]
Reduced sexual sensation is a possible complication of male circumcision.[76]
In general, there is controversy over whether non-therapeutic circumcision can confer psychological benefits, or whether it causes psychological harms.[83]
Overall, as of 2019[update] it is unclear what the psychological outcomes of circumcision are, with some studies showing negative effects, and others showing that the effects are negligible.[84] There is no good evidence that circumcision adversely affects cognitive abilities or that it induces post-traumatic stress disorder.[84] There is debate in the literature over whether the pain of circumcision has lasting psychological impact, with only weak underlying data available.[84]
Circumcision is one of the world's most widely performed medical procedures.[24] Approximately 37% to 39% of males worldwide are circumcised, about half for religious or cultural reasons.[85] It is most often practiced between infancy and the early twenties.[4] The WHO estimated in 2007 that 664,500,000 males aged 15 and over were circumcised (30–33% global prevalence), almost 70% of whom were Muslim.[4] Circumcision is most common in the Muslim world, Israel, South Korea, the United States and parts of Southeast Asia and Africa. It is relatively rare in Europe, Latin America, parts of Southern Africa and Oceania and most of non-Muslim Asia. Prevalence is near-universal in the Middle East and Central Asia.[4][86] Non-religious circumcision in Asia, outside of the Republic of Korea and the Philippines, is fairly rare,[4] and prevalence is generally low (less than 20%) across Europe.[4][87] Estimates for individual countries include Taiwan at 9%[88] and Australia 58.7%.[89] Prevalence in the United States and Canada is estimated at 75% and 30% respectively.[4] Prevalence in Africa varies from less than 20% in some southern African countries to near universal in North and West Africa.[86]
The rates of routine neonatal circumcision over time have varied significantly by country. In the United States, hospital discharge surveys estimated rates at 64.7% in the year 1980, 59.0% in the year 1990, 62.4% in the year 2000, and 58.3% in the year 2010.[90] These estimates are lower than the overall circumcision rates, as they do not account for non-hospital circumcisions,[90] or for procedures performed for medical or cosmetic reasons later in life;[4][90] community surveys have reported higher neonatal circumcision.[4] Canada has seen a slow decline since the early 1970s, possibly influenced by statements from the AAP and the Canadian Pediatric Society issued in the 1970s saying that the procedure was not medically indicated.[4] In Australia, the rate declined in the 1970s and 80s, but has been increasing slowly as of 2004.[4] In the United Kingdom, rates are likely to have been 20–30% in the 1940s but declined at the end of that decade. One possible reason may have been a 1949 British Medical Journal article which stated that there was no medical reason for the general circumcision of babies.[4] The overall prevalence of circumcision in South Korea has increased markedly in the second half of the 20th century, rising from near zero around 1950 to about 60% in 2000, with the most significant jumps in the last two decades of that time period.[4] This is probably due to the influence of the United States, which established a trusteeship for the country following World War II.[4]
Medical organizations can affect the neonatal circumcision rate of a country by influencing whether the costs of the procedure are borne by the parents or are covered by insurance or a national health care system.[7] Policies that require the costs to be paid by the parents yield lower neonatal circumcision rates.[7] The decline in the rates in the UK is one example; another is that in the United States, the individual states where insurance or Medicaid covers the costs have higher rates.[7] Changes to policy are driven by the results of new research, and moderated by the politics, demographics, and culture of the communities.[7]
Circumcision is the world's oldest planned surgical procedure, suggested by anatomist and hyperdiffusionist historian Grafton Elliot Smith to be over 15,000 years old, pre-dating recorded history. There is no firm consensus as to how it came to be practiced worldwide. One theory is that it began in one geographic area and spread from there; another is that several different cultural groups began its practice independently. In his 1891 work History of Circumcision, physician Peter Charles Remondino suggested that it began as a less severe form of emasculating a captured enemy: penectomy or castration would likely have been fatal, while some form of circumcision would permanently mark the defeated yet leave him alive to serve as a slave.[25][91]
The history of the migration and evolution of the practice of circumcision is followed mainly through the cultures and peoples in two separate regions. In the lands south and east of the Mediterranean, starting with Sudan and Ethiopia, the procedure was practiced by the ancient Egyptians and the Semites, and then by the Jews and Muslims, with whom the practice travelled to and was adopted by the Bantu Africans. In Oceania, circumcision is practiced by the Australian Aboriginals and Polynesians.[91] There is also evidence that circumcision was practiced among the Aztec and Mayan civilizations in the Americas,[4] but little detail is available about its history.[24][25]
Evidence suggests that circumcision was practiced in the Middle East by the 4th millennium BCE, when the Sumerians and the Semites moved into the area that is modern-day Iraq from the North and West.[24] The earliest historical record of circumcision comes from Egypt, in the form of an image of the circumcision of an adult carved into the tomb of Ankh-Mahor at Saqqara, dating to about 2400–2300 BCE. Circumcision was done by the Egyptians possibly for hygienic reasons, but also was part of their obsession with purity and was associated with spiritual and intellectual development. No well-accepted theory explains the significance of circumcision to the Egyptians, but it appears to have been endowed with great honor and importance as a rite of passage into adulthood, performed in a public ceremony emphasizing the continuation of family generations and fertility. It may have been a mark of distinction for the elite: the Egyptian Book of the Dead describes the sun god Ra as having circumcised himself.[25][91]
Though secular scholars consider the story to be literary and not historical,[92] circumcision features prominently in the Hebrew Bible. The narrative in Genesis chapter 17 describes the circumcision of Abraham and his relatives and slaves. In the same chapter, Abraham's descendants are commanded to circumcise their sons on the eighth day of life as part of a covenant with God.
In addition to proposing that circumcision was taken up by the Israelites purely as a religious mandate, scholars have suggested that Judaism's patriarchs and their followers adopted circumcision to make penile hygiene easier in hot, sandy climates; as a rite of passage into adulthood; or as a form of blood sacrifice.[24][91][93]
Alexander the Great conquered the Middle East in the 4th century BCE, and in the following centuries ancient Greek cultures and values came to the Middle East. The Greeks abhorred circumcision, making life for circumcised Jews living among the Greeks (and later the Romans) very difficult. Antiochus Epiphanes outlawed circumcision, as did Hadrian, which helped cause the Bar Kokhba revolt. During this period in history, Jewish circumcision called for the removal of only a part of the prepuce, and some Hellenized Jews attempted to look uncircumcised by stretching the extant parts of their foreskins. This was considered by the Jewish leaders to be a serious problem, and during the 2nd century CE they changed the requirements of Jewish circumcision to call for the complete removal of the foreskin,[94] emphasizing the Jewish view of circumcision as intended to be not just the fulfillment of a Biblical commandment but also an essential and permanent mark of membership in a people.[91][93]
A narrative in the Christian Gospel of Luke makes a brief mention of the circumcision of Jesus, but the subject of physical circumcision itself is not part of the received teachings of Jesus. Paul the Apostle reinterpreted circumcision as a spiritual concept, arguing the physical one to be unnecessary for Gentile converts to Christianity. The teaching that physical circumcision was unnecessary for membership in a divine covenant was instrumental in the separation of Christianity from Judaism. Although it is not explicitly mentioned in the Quran (early 7th century CE), circumcision is considered essential to Islam, and it is nearly universally performed among Muslims. The practice of circumcision spread across the Middle East, North Africa, and Southern Europe with Islam.[95]
Genghis Khan and the following Yuan Emperors in China forbade Islamic practices such as halal butchering and circumcision.[96][97] This led Chinese Muslims to eventually take an active part in rebelling against the Mongols and installing the Ming Dynasty.
The practice of circumcision is thought to have been brought to the Bantu-speaking tribes of Africa by either the Jews after one of their many expulsions from European countries, or by Muslim Moors escaping after the 1492 reconquest of Spain. In the second half of the 1st millennium CE, inhabitants from the North East of Africa moved south and encountered groups from Arabia, the Middle East, and West Africa. These people moved south and formed what is known today as the Bantu. Bantu tribes were observed to be upholding what was described as Jewish law, including circumcision, in the 16th century. Circumcision and elements of Jewish dietary restrictions are still found among Bantu tribes.[24]
Circumcision is practiced by some groups amongst Australian Aboriginal peoples, Polynesians, and Native Americans. Little information is available about the origins and history of circumcision among these peoples, compared to circumcision in the Middle East.
For Aboriginal Australians and Polynesians, circumcision likely started as a blood sacrifice and a test of bravery and became an initiation rite with attendant instruction in manhood in more recent centuries. Often seashells were used to remove the foreskin, and the bleeding was stopped with eucalyptus smoke.[24][98]
Christopher Columbus reported circumcision being practiced by Native Americans.[25] It was also practiced by the Incas, Aztecs, and Mayans. It probably started among South American tribes as a blood sacrifice or ritual mutilation to test bravery and endurance, and its use later evolved into a rite of initiation.[24]
Circumcision did not become a common medical procedure in the Anglophone world until the late 19th century.[99] At that time, British and American doctors began recommending it primarily as a deterrent to masturbation.[99][100] Prior to the 20th century, masturbation was believed to be the cause of a wide range of physical and mental illnesses including epilepsy, paralysis, impotence, gonorrhea, tuberculosis, feeblemindedness, and insanity.[101][102] In 1855, motivated in part by an interest in promoting circumcision to reduce masturbation, English physician Jonathan Hutchinson published his findings that Jews had a lower prevalence of certain venereal diseases.[103] While pursuing a successful career as a general practitioner, Hutchinson went on to advocate circumcision for health reasons for the next fifty years,[103] and eventually earned a knighthood for his overall contributions to medicine.[104] In America, one of the first modern physicians to advocate the procedure was Lewis Sayre, a founder of the American Medical Association. In 1870, Sayre began using circumcision as a purported cure for several cases of young boys diagnosed with paralysis or significant motor problems. He thought the procedure ameliorated such problems based on a "reflex neurosis" theory of disease, which held that excessive stimulation of the genitals was a disturbance to the equilibrium of the nervous system and a cause of systemic problems.[99] The use of circumcision to promote good health also fit in with the germ theory of disease during that time, which saw the foreskin as being filled with infection-causing smegma (a mixture of shed skin cells and oils). Sayre published works on the subject and promoted it energetically in speeches. Contemporary physicians picked up on Sayre's new treatment, which they believed could prevent or cure a wide-ranging array of medical problems and social ills. Its popularity spread with publications such as Peter Charles Remondino's History of Circumcision. By the turn of the century infant circumcision was near universally recommended in America and Great Britain.[25][100] David Gollaher proposes that "Americans found circumcision appealing not merely on medical grounds, but also for its connotations of science, health, and cleanliness—newly important class distinctions" in a country where 17 million immigrants arrived between 1890 and 1914.[105]
After the end of World War II, Britain implemented a National Health Service and sought to ensure that each medical procedure covered by the new system was cost-effective; circumcision performed for non-medical reasons was not covered by the national healthcare system. Douglas Gairdner's 1949 article "The Fate of the Foreskin" argued that the evidence available at that time showed that the risks outweighed the known benefits.[106] Circumcision rates dropped in Britain and in the rest of Europe. In the 1970s, national medical associations in Australia and Canada issued recommendations against routine infant circumcision, leading to drops in the rates in both of those countries. American medical associations made similar statements in the 1970s, but stopped short of recommending against the procedure, simply stating that it has no medical benefit. Since then they have amended their policy statements several times, with the current recommendation being that the benefits outweigh the risks, but they do not recommend it routinely.[25][100]
An association between circumcision and reduced heterosexual HIV infection rates was suggested in 1986.[25] Experimental evidence was needed to establish a causal relationship, so three randomized controlled trials were commissioned as a means to reduce the effect of any confounding factors.[11] Trials took place in South Africa, Kenya and Uganda.[11] All three trials were stopped early by their monitoring boards because those in the circumcised group had a lower rate of HIV contraction than the control group.[11] Subsequently, the World Health Organization promoted circumcision in high-risk populations as part of an overall program to reduce the spread of HIV,[12] although some have challenged the validity of the African randomized controlled trials, prompting a number of researchers to question the effectiveness of circumcision as an HIV prevention strategy.[107][108][109][110] The Male Circumcision Clearinghouse website was formed in 2009 by WHO, UNAIDS, FHI and AVAC to provide current evidence-based guidance, information, and resources to support the delivery of safe male circumcision services in countries that choose to scale up the procedure as one component of comprehensive HIV prevention services.[111][112]
In some cultures, males are generally required to be circumcised shortly after birth, during childhood or around puberty as part of a rite of passage. Circumcision is commonly practiced in the Jewish and Islamic faiths and in Coptic Christianity and the Ethiopian Orthodox Church and the Eritrean Orthodox Tewahedo Church.[7][26][27][28][113][114][115]
Circumcision is very important to most branches of Judaism, with over 90% of male adherents having the procedure performed as a religious obligation. The basis for its observance is found in the Torah of the Hebrew Bible, in Genesis chapter 17, in which a covenant of circumcision is made with Abraham and his descendants. Jewish circumcision is part of the brit milah ritual, to be performed by a specialist ritual circumciser, a mohel, on the eighth day of a newborn son's life, with certain exceptions for poor health. Jewish law requires that the circumcision leaves the glans bare when the penis is flaccid. Converts to Conservative and Orthodox Judaism must also be circumcised; those who are already circumcised undergo a symbolic circumcision ritual. Circumcision is not required by Judaism for one to be considered Jewish, but some adherents foresee serious negative spiritual consequences if it is neglected.[26][116]
According to traditional Jewish law, in the absence of an adult free Jewish male expert, a woman, a slave, or a child who has the required skills is also authorized to perform the circumcision, provided that they are Jewish.[117] However, most streams of non-Orthodox Judaism allow female mohels, called mohalot (Hebrew: מוֹהֲלוֹת, the plural of מוֹהֶלֶת mohelet, feminine of mohel), without restriction. In 1984 Deborah Cohen became the first certified Reform mohelet; she was certified by the Berit Mila program of Reform Judaism.[118]
Some contemporary Jews in the United States choose not to circumcise their sons.[119] They are assisted by a small number of Reform and Reconstructionist rabbis, and have developed a welcoming ceremony that they call the brit shalom ("Covenant [of] Peace") for such children, also accepted by Humanistic Judaism.[120][121]
This ceremony of brit shalom is not officially approved of by the Reform or Reconstructionist rabbinical organizations, who make the recommendation that male infants should be circumcised, though the issue of converts remains controversial[122][123] and circumcision of converts is not mandatory in either movement.[124]
Although there is some debate within Islam over whether it is a religious requirement, circumcision (called khitan) is practiced nearly universally by Muslim males. Islam bases its practice of circumcision on the Genesis 17 narrative, the same Biblical chapter referred to by Jews. The procedure is not explicitly mentioned in the Quran, however, it is a tradition established by Islam's prophet Muhammad directly (following Abraham), and so its practice is considered a sunnah (prophet's tradition) and is very important in Islam. For Muslims, circumcision is also a matter of cleanliness, purification and control over one's baser self (nafs). There is no agreement across the many Islamic communities about the age at which circumcision should be performed. It may be done from soon after birth up to about age 15; most often it is performed at around six to seven years of age. The timing can correspond with the boy's completion of his recitation of the whole Quran, with a coming-of-age event such as taking on the responsibility of daily prayer or betrothal. Circumcision may be celebrated with an associated family or community event. Circumcision is recommended for, but is not required of, converts to Islam.[28][113][125]
The New Testament chapter Acts 15 records that Christianity did not require circumcision. In 1442 the Catholic Church banned the practice of religious circumcision in the 11th Council of Florence [126] and currently maintains a neutral position on the practice of non-religious circumcision.[127] Coptic Christians practice circumcision as a rite of passage.[4][27][115][128] The Ethiopian Orthodox Church calls for circumcision, with near-universal prevalence among Orthodox men in Ethiopia.[4] Some Christian churches in South Africa disapprove of the practice, while others require it of their members.[4]
Certain African cultural groups, such as the Yoruba and the Igbo of Nigeria, customarily circumcise their infant sons. The procedure is also practiced by some cultural groups or individual family lines in Sudan, Democratic Republic of the Congo, Uganda and in southern Africa. For some of these groups, circumcision appears to be purely cultural, done with no particular religious significance or intention to distinguish members of a group. For others, circumcision might be done for purification, or it may be interpreted as a mark of subjugation. Among these groups, even when circumcision is done for reasons of tradition, it is often done in hospitals.[114] The Maasai people, who live predominantly in Kenya and Tanzania, use circumcision as a rite of passage and as a marker of distinct age groups. This usually takes place about every fifteen years, when a new "age set" is formed and its members undergo initiation at the same time. Whenever a new age group is initiated, its members become novice warriors and replace the previous group. The new initiates are given a unique name that becomes an important marker of Maasai history. No anesthesia is used, and initiates have to endure the pain or be called flinchers.[129] The Xhosa community practices circumcision as a sacrifice: young boys announce to their family members, by singing, when they are ready for circumcision, and the sacrifice is the blood spilt during the initiation procedure. Young boys are considered "outsiders" unless they undergo circumcision.[130] It is not clear how many deaths and injuries result from non-clinical circumcisions.[131]
Some Australian Aborigines use circumcision as a test of bravery and self-control as a part of a rite of passage into manhood, which results in full societal and ceremonial membership. It may be accompanied by body scarification and the removal of teeth, and may be followed later by penile subincision. Circumcision is one of many trials and ceremonies required before a youth is considered to have become knowledgeable enough to maintain and pass on the cultural traditions. During these trials, the maturing youth bonds in solidarity with the men. Circumcision is also strongly associated with a man's family, and it is part of the process required to prepare a man to take a wife and produce his own family.[114]
In the Philippines, circumcision, known as "tuli", is sometimes viewed as a rite of passage.[132] About 93% of Filipino men are circumcised.[132] It often takes place in April and May, when Filipino boys are taken by their parents for the procedure. The practice dates back to the arrival of Islam in 1450. Pressure to be circumcised is embedded even in the language: one Tagalog word for 'uncircumcised' is supot, which literally means 'coward'. A circumcised eight- or ten-year-old is no longer considered a boy and is given more adult roles in the family and society.[133]
There is a long-running and vigorous debate over ethical concerns regarding circumcision, particularly neonatal circumcision for reasons other than intended direct medical benefit. There are three parties involved in the decision to circumcise a minor: the minor as the patient, the parents (or other guardians) and the physician. The physician is bound under the ethical principles of beneficence (promoting well-being) and non-maleficence ("first, do no harm"), and so is charged with the responsibility to promote the best interests of the patient while minimizing unnecessary harms. Those involved must weigh the factors of what is in the best interest of the minor against the potential harms of the procedure.[9]
With a newborn involved, the decision is made more complex due to the principles of respect for autonomy and consent, as a newborn cannot understand or engage in a logical discussion of his own values and best interests.[8][9] A mentally more mature child can understand the issues involved to some degree, and the physician and parents may elicit input from the child and weigh it appropriately in the decision-making process, although the law may not treat such input as legally informative. Ethicists and legal theorists also state that it is questionable for parents to make a decision for the child that precludes the child from making a different decision for himself later. Such a question can be raised for the decision by the parents either to circumcise or not to circumcise the child.[9]
Generally, circumcision on a minor is not ethically controversial or legally questionable when there is a clear and pressing medical indication for which it is the accepted best practice to resolve. Where circumcision is the chosen intervention, the physician has an ethical responsibility to ensure the procedure is performed competently and safely to minimize potential harms.[8][9] Worldwide, most legal jurisdictions do not have specific laws concerning the circumcision of males,[4] but infant circumcision is not illegal in many countries.[134] A few countries have passed legislation on the procedure: Germany allows non-therapeutic circumcision,[135] while non-religious routine circumcision is illegal in South Africa and Sweden.[4][134]
Throughout society, circumcision is often considered for reasons other than medical need. Public health advocates of circumcision consider it to have a net benefit, and therefore feel that increasing the circumcision rate is an ethical imperative. They recommend performing the procedure during the neonatal period when it is less expensive and has a lower risk of complications.[8] While studies show there is a modest epidemiological benefit to circumcision, critics argue that the number of circumcisions that would have to be performed would yield an overall negative public health outcome due to the resulting number of complications or other negative effects (such as pain). Pinto (2012) writes "sober proponents and detractors of circumcision agree that there is no overwhelming medical evidence to support either side."[8] This type of cost-benefit analysis is highly dependent on the kinds and frequencies of health problems in the population under discussion and how circumcision affects those health problems.[9]
Parents are assumed to have the child's best interests in mind. Ethically, it is imperative that the medical practitioner inform the parents about the benefits and risks of the procedure and obtain informed consent before performing it. Practically, however, many parents come to a decision about circumcising the child before he is born, and a discussion of the benefits and risks of the procedure with a physician has not been shown to have a significant effect on the decision. Some parents request to have their newborn or older child circumcised for non-therapeutic reasons, such as the parents' desires to adhere to family tradition, cultural norms or religious beliefs. In considering such a request, the physician may consider (in addition to any potential medical benefits and harms) such non-medical factors in determining the child's best interests and may ethically perform the procedure. Equally, without a clear medical benefit relative to the potential harms, a physician may take the ethical position that non-medical factors do not contribute enough as benefits to outweigh the potential harms and refuse to perform the procedure. Medical organization such as the British Medical Association state that their member physicians are not obliged to perform the procedure in such situations.[8][9]
In 2012 the International NGO Council on Violence against Children identified non-therapeutic circumcision of infants and boys as among the harmful practices that constitute violence against children and violate their rights.[136] The German Academy for Pediatric and Adolescent Medicine (Deutsche Akademie für Kinder- und Jugendmedizin e.V., DAKJ) recommends against routine non-medical infant circumcision.[137] The Royal Dutch Medical Association questions why the ethics regarding male genital alterations should be viewed any differently from female genital alterations.[29]
The cost-effectiveness of circumcision has been studied to determine whether a policy of circumcising all newborns or a policy of promoting and providing inexpensive or free access to circumcision for all adult men who choose it would result in lower overall societal healthcare costs. As HIV/AIDS is an incurable disease that is expensive to manage, significant effort has been spent studying the cost-effectiveness of circumcision to reduce its spread in parts of Africa that have a relatively high infection rate and low circumcision prevalence.[138] Several analyses have concluded that circumcision programs for adult men in Africa are cost-effective and in some cases are cost-saving.[39][139] In Rwanda, circumcision has been found to be cost-effective across a wide range of age groups from newborn to adult,[48][140] with the greatest savings achieved when the procedure is performed in the newborn period due to the lower cost per procedure and greater timeframe for HIV infection protection.[13][140] Circumcision for the prevention of HIV transmission in adults has also been found to be cost-effective in South Africa, Kenya, and Uganda, with cost savings estimated in the billions of US dollars over 20 years.[138] Hankins et al. (2011) estimated that a $1.5 billion investment in circumcision for adults in 13 high-priority African countries would yield $16.5 billion in savings.[141]
The overall cost-effectiveness of neonatal circumcision has also been studied in the United States, which has a different cost setting from Africa in areas such as public health infrastructure, availability of medications, and medical technology and the willingness to use it.[142] A study by the CDC suggests that newborn circumcision would be societally cost-effective in the United States based on circumcision's efficacy against the heterosexual transmission of HIV alone, without considering any other cost benefits.[3] The American Academy of Pediatrics (2012) recommends that neonatal circumcision in the United States be covered by third-party payers such as Medicaid and insurance.[3] A 2014 review that considered reported benefits of circumcision such as reduced risks from HIV, HPV, and HSV-2 stated that circumcision is cost-effective in both the United States and Africa and may result in health care savings.[143] However, a 2014 literature review found that there are significant gaps in the current literature on male and female sexual health that need to be addressed for the literature to be applicable to North American populations.[82]
en/1142.html.txt
ADDED
@@ -0,0 +1,377 @@
A circle is a shape consisting of all points in a plane that are a given distance from a given point, the centre; equivalently it is the curve traced out by a point that moves in a plane so that its distance from a given point is constant. The distance between any point of the circle and the centre is called the radius. This article is about circles in Euclidean geometry, and, in particular, the Euclidean plane, except where otherwise noted.
Specifically, a circle is a simple closed curve that divides the plane into two regions: an interior and an exterior. In everyday use, the term "circle" may be used interchangeably to refer to either the boundary of the figure, or to the whole figure including its interior; in strict technical usage, the circle is only the boundary and the whole figure is called a disc.
A circle may also be defined as a special kind of ellipse in which the two foci are coincident and the eccentricity is 0, or the two-dimensional shape enclosing the most area per unit perimeter squared, using calculus of variations.
A circle is a plane figure bounded by one curved line, and such that all straight lines drawn from a certain point within it to the bounding line, are equal. The bounding line is called its circumference and the point, its centre.
In the field of topology, a circle is not limited to the geometric concept but includes all of its homeomorphisms. Two topological circles are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy).[2]
All of the specified regions may be considered as open, that is, not containing their boundaries, or as closed, including their respective boundaries.
The word circle derives from the Greek κίρκος/κύκλος (kirkos/kuklos), itself a metathesis of the Homeric Greek κρίκος (krikos), meaning "hoop" or "ring".[3] The origins of the words circus and circuit are closely related.
The circle has been known since before the beginning of recorded history. Natural circles would have been observed, such as the Moon, Sun, and a short plant stalk blowing in the wind on sand, which forms a circle shape in the sand. The circle is the basis for the wheel, which, with related inventions such as gears, makes much of modern machinery possible. In mathematics, the study of the circle has helped inspire the development of geometry, astronomy and calculus.
Early science, particularly geometry, astrology and astronomy, was connected to the divine for most medieval scholars, and many believed that there was something intrinsically "divine" or "perfect" that could be found in circles.[4][5]
Some highlights in the history of the circle are:
The ratio of a circle's circumference to its diameter is π (pi), an irrational constant approximately equal to 3.141592654. Thus the circumference C is related to the radius r and diameter d by:
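(The displayed relation did not survive extraction; the standard formulas meant here are C = πd = 2πr.) As a minimal illustrative sketch in Python (added for this edition, not part of the original article), the relation can be checked numerically:

import math

r = 3.0                      # an arbitrary example radius
d = 2 * r                    # diameter
print(2 * math.pi * r)       # circumference via 2*pi*r, about 18.85
print(math.pi * d)           # the same value via pi*d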
As proved by Archimedes, in his Measurement of a Circle, the area enclosed by a circle is equal to that of a triangle whose base has the length of the circle's circumference and whose height equals the circle's radius,[8] which comes to π multiplied by the radius squared:
Equivalently, denoting diameter by d,
that is, approximately 79% of the circumscribing square (whose side is of length d).
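To make the comparison explicit: the enclosed area is A = πr² = (π/4)d² ≈ 0.7854·d², while the circumscribing square of side d has area d², so the circle fills π/4 ≈ 78.5% of the square, the "approximately 79%" quoted above.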
The circle is the plane curve enclosing the maximum area for a given arc length. This relates the circle to a problem in the calculus of variations, namely the isoperimetric inequality.
Equation of a circle
In an x–y Cartesian coordinate system, the circle with centre coordinates (a, b) and radius r is the set of all points (x, y) such that
This equation, known as the Equation of the Circle, follows from the Pythagorean theorem applied to any point on the circle: as shown in the adjacent diagram, the radius is the hypotenuse of a right-angled triangle whose other sides are of length |x − a| and |y − b|. If the circle is centred at the origin (0, 0), then the equation simplifies to
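A small Python sketch (an illustration added here, with arbitrary function and parameter names) turns the equation into a point-membership test:

def on_circle(x, y, a, b, r, tol=1e-9):
    # A point lies on the circle when (x - a)^2 + (y - b)^2 equals r^2,
    # up to a floating-point tolerance.
    return abs((x - a) ** 2 + (y - b) ** 2 - r ** 2) <= tol

print(on_circle(3.0, 4.0, 0.0, 0.0, 5.0))   # True, since 3^2 + 4^2 = 5^2
print(on_circle(3.0, 4.1, 0.0, 0.0, 5.0))   # False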
Parametric form
The equation can be written in parametric form using the trigonometric functions sine and cosine as
where t is a parametric variable in the range 0 to 2π, interpreted geometrically as the angle that the ray from (a, b) to (x, y) makes with the positive x-axis.
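A minimal Python sketch of this parametric form (assuming the standard equations x = a + r·cos t and y = b + r·sin t, which the omitted display states):

import math

def circle_points(a, b, r, n=8):
    # Sample n points by letting the parameter t sweep the range 0 .. 2*pi.
    points = []
    for k in range(n):
        t = 2 * math.pi * k / n
        points.append((a + r * math.cos(t), b + r * math.sin(t)))
    return points

for x, y in circle_points(1.0, 2.0, 5.0, n=4):
    print(round(x, 6), round(y, 6))   # each point satisfies (x - 1)^2 + (y - 2)^2 = 25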
An alternative parametrisation of the circle is:
In this parameterisation, the ratio of t to r can be interpreted geometrically as the stereographic projection of the line passing through the centre parallel to the x-axis (see Tangent half-angle substitution). However, this parameterisation works only if t is made to range not only through all reals but also to a point at infinity; otherwise, the leftmost point of the circle would be omitted.
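The display being referred to is missing here; the usual tangent half-angle form (supplied as the standard textbook expression, not quoted from the article) is x = a + r·(1 − t²)/(1 + t²), y = b + r·2t/(1 + t²). At t = 0 this gives the rightmost point (a + r, b), and as t grows without bound the point approaches, but never reaches, the leftmost point (a − r, b), which is why a point at infinity must be added.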
3-point-form
The equation of the circle determined by three points (x1, y1), (x2, y2), (x3, y3) not on a line is obtained by a conversion of the 3-point-form of a circle's equation.
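The converted equation is not reproduced here, but the construction is easy to carry out numerically. The following Python sketch (an illustration added for this edition, not the article's own formula) returns the centre and radius of the circle through three non-collinear points:

import math

def circle_from_three_points(p1, p2, p3):
    # Unique circle through three non-collinear points, found from the
    # standard circumcentre formulas; raises ValueError for collinear input.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear")
    s1, s2, s3 = x1 ** 2 + y1 ** 2, x2 ** 2 + y2 ** 2, x3 ** 2 + y3 ** 2
    cx = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    cy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return (cx, cy), math.hypot(x1 - cx, y1 - cy)

print(circle_from_three_points((0, 0), (2, 0), (0, 2)))   # centre (1.0, 1.0), radius sqrt(2)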
Homogeneous form
In homogeneous coordinates, each conic section with the equation of a circle has the form
It can be proven that a conic section is a circle exactly when it contains (when extended to the complex projective plane) the points I(1: i: 0) and J(1: −i: 0). These points are called the circular points at infinity.
In polar coordinates, the equation of a circle is:
where a is the radius of the circle, (r, θ) is the polar coordinate of a generic point on the circle, and (r0, φ) is the polar coordinate of the centre of the circle (i.e., r0 is the distance from the origin to the centre of the circle, and φ is the anticlockwise angle from the positive x-axis to the line connecting the origin to the centre of the circle). For a circle centred on the origin, i.e. r0 = 0, this reduces to simply r = a. When r0 = a, or when the origin lies on the circle, the equation becomes
In the general case, the equation can be solved for r, giving
Note that without the ± sign, the equation would in some cases describe only half a circle.
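For reference, since the displayed forms did not survive extraction, the standard expressions are: the polar equation of the circle is r² − 2·r·r0·cos(θ − φ) + r0² = a²; solving this quadratic in r gives r = r0·cos(θ − φ) ± sqrt(a² − r0²·sin²(θ − φ)); and in the special case r0 = a it reduces to r = 2a·cos(θ − φ).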
In the complex plane, a circle with a centre at c and radius r has the equation:
In parametric form, this can be written:
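The displays are again missing; the standard forms meant are |z − c| = r and, parametrically, z = c + r·e^(it) with t running from 0 to 2π (stated here for completeness, not quoted from the article). A short Python check using the built-in complex type:

import cmath

c, r = complex(1, 2), 5.0
z = c + r * cmath.exp(1j * 0.7)   # a point on the circle for parameter value t = 0.7
print(abs(z - c))                 # prints 5.0, illustrating |z - c| = r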
The slightly generalised equation
for real p, q and complex g is sometimes called a generalised circle. This becomes the above equation for a circle with p = 1, g = −c̄ (the complex conjugate of c) and q = r² − |c|², since |z − c|² = zz̄ − c̄z − cz̄ + cc̄. Not all generalised circles are actually circles: a generalised circle is either a (true) circle or a line.
The tangent line through a point P on the circle is perpendicular to the diameter passing through P. If P = (x1, y1) and the circle has centre (a, b) and radius r, then the tangent line is perpendicular to the line from (a, b) to (x1, y1), so it has the form (x1 − a)x + (y1 – b)y = c. Evaluating at (x1, y1) determines the value of c and the result is that the equation of the tangent is
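A small Python sketch of this recipe (an added illustration; the names are arbitrary) returns the tangent line as the coefficients of A·x + B·y = C:

def tangent_line(a, b, x1, y1):
    # Tangent to the circle centred at (a, b) at a point (x1, y1) lying on it.
    A = x1 - a
    B = y1 - b
    C = (x1 - a) * x1 + (y1 - b) * y1   # evaluate A*x + B*y at (x1, y1)
    return A, B, C

print(tangent_line(0, 0, 3, 4))   # (3, 4, 25): the line 3x + 4y = 25 touches x^2 + y^2 = 25 at (3, 4)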
or

If y1 ≠ b then the slope of this line is

This can also be found using implicit differentiation.

When the centre of the circle is at the origin then the equation of the tangent line becomes

and its slope is
An inscribed angle (examples are the blue and green angles in the figure) is exactly half the corresponding central angle (red). Hence, all inscribed angles that subtend the same arc (pink) are equal. Angles inscribed on the arc (brown) are supplementary. In particular, every inscribed angle that subtends a diameter is a right angle (since the central angle is 180 degrees).
Another proof of this result, which relies only on two chord properties given above, is as follows. Given a chord of length y and with sagitta of length x, since the sagitta intersects the midpoint of the chord, we know it is part of a diameter of the circle. Since the diameter is twice the radius, the "missing" part of the diameter is (2r − x) in length. Using the fact that one part of one chord times the other part is equal to the same product taken along a chord intersecting the first chord, we find that (2r − x)x = (y / 2)2. Solving for r, we find the required result.
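Carrying the algebra through: (2r − x)x = (y/2)² expands to 2rx − x² = y²/4, so the radius is r = x/2 + y²/(8x), the usual sagitta formula.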
There are many compass-and-straightedge constructions resulting in circles.
The simplest and most basic is the construction given the centre of the circle and a point on the circle. Place the fixed leg of the compass on the centre point, the movable leg on the point on the circle and rotate the compass.
Apollonius of Perga showed that a circle may also be defined as the set of points in a plane having a constant ratio (other than 1) of distances to two fixed foci, A and B.[12][13] (The set of points where the distances are equal is the perpendicular bisector of segment AB, a line.) That circle is sometimes said to be drawn about two points.
The proof is in two parts. First, one must prove that, given two foci A and B and a ratio of distances, any point P satisfying the ratio of distances must fall on a particular circle. Let C be another point, also satisfying the ratio and lying on segment AB. By the angle bisector theorem the line segment PC will bisect the interior angle APB, since the segments are similar:
Analogously, a line segment PD through some point D on AB extended bisects the corresponding exterior angle BPQ where Q is on AP extended. Since the interior and exterior angles sum to 180 degrees, the angle CPD is exactly 90 degrees, i.e., a right angle. The set of points P such that angle CPD is a right angle forms a circle, of which CD is a diameter.
Second, see[14]:p.15 for a proof that every point on the indicated circle satisfies the given ratio.
A closely related property of circles involves the geometry of the cross-ratio of points in the complex plane. If A, B, and C are as above, then the circle of Apollonius for these three points is the collection of points P for which the absolute value of the cross-ratio is equal to one:
Stated another way, P is a point on the circle of Apollonius if and only if the cross-ratio [A,B;C,P] is on the unit circle in the complex plane.
If C is the midpoint of the segment AB, then the collection of points P satisfying the Apollonius condition
is not a circle, but rather a line.
Thus, if A, B, and C are given distinct points in the plane, then the locus of points P satisfying the above equation is called a "generalised circle." It may either be a true circle or a line. In this sense a line is a generalised circle of infinite radius.
In every triangle a unique circle, called the incircle, can be inscribed such that it is tangent to each of the three sides of the triangle.[15]
About every triangle a unique circle, called the circumcircle, can be circumscribed such that it goes through each of the triangle's three vertices.[16]
A tangential polygon, such as a tangential quadrilateral, is any convex polygon within which a circle can be inscribed that is tangent to each side of the polygon.[17] Every regular polygon and every triangle is a tangential polygon.
A cyclic polygon is any convex polygon about which a circle can be circumscribed, passing through each vertex. A well-studied example is the cyclic quadrilateral. Every regular polygon and every triangle is a cyclic polygon. A polygon that is both cyclic and tangential is called a bicentric polygon.
A hypocycloid is a curve that is inscribed in a given circle by tracing a fixed point on a smaller circle that rolls within and tangent to the given circle.
The circle can be viewed as a limiting case of each of various other figures:
Defining a circle as the set of points with a fixed distance from a point, different shapes can be considered circles under different definitions of distance. In p-norm, distance is determined by
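The distance expression referred to here is the usual p-norm formula, supplied because the display did not survive extraction: in the plane, dist((x1, y1), (x2, y2)) = (|x1 − x2|^p + |y1 − y2|^p)^(1/p).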
In Euclidean geometry, p = 2, giving the familiar
In taxicab geometry, p = 1. Taxicab circles are squares with sides oriented at a 45° angle to the coordinate axes. While each side would have length √2·r using a Euclidean metric, where r is the circle's radius, its length in taxicab geometry is 2r, so the circle's circumference is 8r and the value of a geometric analog to π is 4 in this geometry. The formula for the unit circle in taxicab geometry is |x| + |y| = 1 in Cartesian coordinates and r = 1/(|cos θ| + |sin θ|) in polar coordinates.
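A brief Python sketch (an illustration under these definitions, not part of the article) contrasting membership in the two unit circles:

def on_euclidean_unit_circle(x, y, tol=1e-9):
    return abs(x * x + y * y - 1.0) <= tol        # x^2 + y^2 = 1

def on_taxicab_unit_circle(x, y, tol=1e-9):
    return abs(abs(x) + abs(y) - 1.0) <= tol      # |x| + |y| = 1

print(on_euclidean_unit_circle(2 ** -0.5, 2 ** -0.5))   # True
print(on_taxicab_unit_circle(0.5, 0.5))                 # True: a point on the square |x| + |y| = 1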
A circle of radius 1 (using this distance) is the von Neumann neighborhood of its center.
A circle of radius r for the Chebyshev distance (L∞ metric) on a plane is also a square with side length 2r parallel to the coordinate axes, so planar Chebyshev distance can be viewed as equivalent by rotation and scaling to planar taxicab distance. However, this equivalence between L1 and L∞ metrics does not generalize to higher dimensions.
Squaring the circle is the problem, proposed by ancient geometers, of constructing a square with the same area as a given circle by using only a finite number of steps with compass and straightedge.
In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi (π) is a transcendental number, rather than an algebraic irrational number; that is, it is not the root of any polynomial with rational coefficients.
From the time of the earliest known civilisations – such as the Assyrians and ancient Egyptians, those in the Indus Valley and along the Yellow River in China, and the Western civilisations of ancient Greece and Rome during classical Antiquity – the circle has been used directly or indirectly in visual art to convey the artist’s message and to express certain ideas.
However, differences in worldview (beliefs and culture) had a great impact on artists’ perceptions. While some emphasised the circle’s perimeter to demonstrate their democratic manifestation, others focused on its centre to symbolise the concept of cosmic unity. In mystical doctrines, the circle mainly symbolises the infinite and cyclical nature of existence, but in religious traditions it represents heavenly bodies and divine spirits.
The circle signifies many sacred and spiritual concepts, including unity, infinity, wholeness, the universe, divinity, balance, stability and perfection, among others. Such concepts have been conveyed in cultures worldwide through the use of symbols, for example, a compass, a halo, the vesica piscis and its derivatives (fish, eye, aureole, mandorla, etc.), the ouroboros, the Dharma wheel, a rainbow, mandalas, rose windows and so forth.[18]
en/1143.html.txt
ADDED
@@ -0,0 +1,130 @@
The circulatory system, also called the cardiovascular system or the vascular system, is an organ system that permits blood to circulate and transport nutrients (such as amino acids and electrolytes), oxygen, carbon dioxide, hormones, and blood cells to and from the cells in the body to provide nourishment and help in fighting diseases, stabilize temperature and pH, and maintain homeostasis.
The circulatory system includes the lymphatic system, which circulates lymph.[1] The passage of lymph takes much longer than that of blood.[2] Blood is a fluid consisting of plasma, red blood cells, white blood cells, and platelets that is circulated by the heart through the vertebrate vascular system, carrying oxygen and nutrients to and waste materials away from all body tissues. Lymph is essentially recycled excess blood plasma after it has been filtered from the interstitial fluid (between cells) and returned to the lymphatic system. The cardiovascular (from Latin words meaning "heart" and "vessel") system comprises the blood, heart, and blood vessels.[3] The lymph, lymph nodes, and lymph vessels form the lymphatic system, which returns filtered blood plasma from the interstitial fluid (between cells) as lymph.
The circulatory system of the blood is seen as having two components, a systemic circulation and a pulmonary circulation.[4]
While humans, as well as other vertebrates, have a closed cardiovascular system (meaning that the blood never leaves the network of arteries, veins and capillaries), some invertebrate groups have an open cardiovascular system. The lymphatic system, on the other hand, is an open system providing an accessory route for excess interstitial fluid to be returned to the blood.[5] The more primitive, diploblastic animal phyla lack circulatory systems.
Many diseases affect the circulatory system. These include cardiovascular disease, which affects the cardiovascular system, and lymphatic disease, which affects the lymphatic system. Cardiologists are medical professionals who specialise in the heart, and cardiothoracic surgeons specialise in operating on the heart and its surrounding areas. Vascular surgeons focus on other parts of the circulatory system.
The essential components of the human cardiovascular system are the heart, blood and blood vessels.[6] It includes the pulmonary circulation, a "loop" through the lungs where blood is oxygenated; and the systemic circulation, a "loop" through the rest of the body to provide oxygenated blood. The systemic circulation can also be seen to function in two parts – a macrocirculation and a microcirculation. An average adult contains five to six quarts (roughly 4.7 to 5.7 liters) of blood, accounting for approximately 7% of their total body weight.[7] Blood consists of plasma, red blood cells, white blood cells, and platelets. Also, the digestive system works with the circulatory system to provide the nutrients the system needs to keep the heart pumping.[8]
The cardiovascular systems of humans are closed, meaning that the blood never leaves the network of blood vessels. In contrast, oxygen and nutrients diffuse across the blood vessel layers and enter interstitial fluid, which carries oxygen and nutrients to the target cells, and carbon dioxide and wastes in the opposite direction. The other component of the circulatory system, the lymphatic system, is open.
Oxygenated blood enters the systemic circulation when leaving the left ventricle, through the aortic semilunar valve. The first part of the systemic circulation is the aorta, a massive and thick-walled artery. The aorta arches and gives off branches supplying the upper part of the body; after passing through the aortic opening of the diaphragm at the level of the tenth thoracic vertebra, it enters the abdomen. It then descends and supplies branches to the abdomen, pelvis, perineum and the lower limbs. The walls of the aorta are elastic. This elasticity helps to maintain the blood pressure throughout the body. When the aorta receives almost five litres of blood from the heart, it recoils and is responsible for pulsating blood pressure. Moreover, as the aorta branches into smaller arteries, their elasticity goes on decreasing and their compliance goes on increasing.
Arteries branch into small passages called arterioles and then into the capillaries.[9] The capillaries merge to bring blood into the venous system.[10]
Capillaries merge into venules, which merge into veins. The venous system feeds into the two major veins: the superior vena cava – which mainly drains tissues above the heart – and the inferior vena cava – which mainly drains tissues below the heart. These two large veins empty into the right atrium of the heart.
The general rule is that arteries from the heart branch out into capillaries, which collect into veins leading back to the heart. Portal veins are a slight exception to this. In humans the only significant example is the hepatic portal vein, which collects blood from capillaries around the gastrointestinal tract, where the blood absorbs the various products of digestion; rather than leading directly back to the heart, the hepatic portal vein branches into a second capillary system in the liver.
The heart pumps oxygenated blood to the body and deoxygenated blood to the lungs. In the human heart there is one atrium and one ventricle for each circulation, and with both a systemic and a pulmonary circulation there are four chambers in total: left atrium, left ventricle, right atrium and right ventricle. The right atrium is the upper chamber of the right side of the heart. The blood that is returned to the right atrium is deoxygenated (poor in oxygen) and passed into the right ventricle to be pumped through the pulmonary artery to the lungs for re-oxygenation and removal of carbon dioxide. The left atrium receives newly oxygenated blood from the lungs via the pulmonary veins; this blood is passed into the strong left ventricle to be pumped through the aorta to the different organs of the body.
The heart itself is supplied with oxygen and nutrients through a small "loop" of the systemic circulation and derives very little from the blood contained within the four chambers.
The coronary circulation system provides a blood supply to the heart muscle itself. The coronary circulation begins near the origin of the aorta with two coronary arteries: the right coronary artery and the left coronary artery. After nourishing the heart muscle, blood returns through the coronary veins into the coronary sinus, and from there into the right atrium. Backflow of blood through its opening during atrial systole is prevented by the Thebesian valve. The smallest cardiac veins drain directly into the heart chambers.[8]
The circulatory system of the lungs is the portion of the cardiovascular system in which oxygen-depleted blood is pumped away from the heart, via the pulmonary artery, to the lungs and returned, oxygenated, to the heart via the pulmonary vein.
Oxygen deprived blood from the superior and inferior vena cava enters the right atrium of the heart and flows through the tricuspid valve (right atrioventricular valve) into the right ventricle, from which it is then pumped through the pulmonary semilunar valve into the pulmonary artery to the lungs. Gas exchange occurs in the lungs, whereby CO2 is released from the blood, and oxygen is absorbed. The pulmonary vein returns the now oxygen-rich blood to the left atrium.[8]
A separate system known as the bronchial circulation supplies blood to the tissue of the larger airways of the lung.
Systemic circulation is the portion of the cardiovascular system which transports oxygenated blood away from the heart through the aorta from the left ventricle where the blood has been previously deposited from pulmonary circulation, to the rest of the body, and returns oxygen-depleted blood back to the heart.[8]
The brain has a dual blood supply that comes from arteries at its front and back. These are called the "anterior" and "posterior" circulation respectively. The anterior circulation arises from the internal carotid arteries and supplies the front of the brain. The posterior circulation arises from the vertebral arteries, and supplies the back of the brain and brainstem. The anterior and posterior circulations join together (anastomose) at the Circle of Willis.
The renal circulation receives around 20% of the cardiac output. It branches from the abdominal aorta and returns blood to the inferior vena cava. It is the blood supply to the kidneys, and contains many specialized blood vessels.
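As a back-of-the-envelope check on the 20% figure, the sketch below assumes a typical resting cardiac output of about 5 L/min, a value that is not given in the text.

```python
# Illustrative arithmetic only; the resting cardiac output of ~5 L/min is an assumed typical value.
cardiac_output_l_per_min = 5.0
renal_fraction = 0.20  # the kidneys receive around 20% of cardiac output

renal_blood_flow_l_per_min = cardiac_output_l_per_min * renal_fraction
print(f"Approximate renal blood flow: {renal_blood_flow_l_per_min:.1f} L/min")  # ~1.0 L/min
```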
The lymphatic system is part of the circulatory system. It is a network of lymphatic vessels and lymph capillaries, lymph nodes and organs, and lymphatic tissues and circulating lymph. One of its major functions is to carry the lymph, draining and returning interstitial fluid back towards the heart for return to the cardiovascular system, by emptying into the lymphatic ducts. Its other main function is in the adaptive immune system.[11]
The development of the circulatory system starts with vasculogenesis in the embryo. The human arterial and venous systems develop from different areas in the embryo. The arterial system develops mainly from the aortic arches, six pairs of arches which develop on the upper part of the embryo. The venous system arises from three bilateral veins during weeks 4–8 of embryogenesis. Fetal circulation begins within the 8th week of development. Fetal circulation does not include the lungs, which are bypassed via the ductus arteriosus and the foramen ovale. Before birth the fetus obtains oxygen (and nutrients) from the mother through the placenta and the umbilical cord.[12]
The human arterial system originates from the aortic arches and from the dorsal aortae starting from week 4 of embryonic life. The first and second aortic arches regress, forming only the maxillary arteries and stapedial arteries respectively. The arterial system itself arises from aortic arches 3, 4 and 6 (aortic arch 5 completely regresses).
The dorsal aortae, present on the dorsal side of the embryo, are initially present on both sides of the embryo. They later fuse to form the basis for the aorta itself. Approximately thirty smaller arteries branch from this at the back and sides. These branches form the intercostal arteries, arteries of the arms and legs, lumbar arteries and the lateral sacral arteries. Branches to the sides of the aorta will form the definitive renal, suprarenal and gonadal arteries. Finally, branches at the front of the aorta consist of the vitelline arteries and umbilical arteries. The vitelline arteries form the celiac, superior and inferior mesenteric arteries of the gastrointestinal tract. After birth, the umbilical arteries will form the internal iliac arteries.
The human venous system develops mainly from the vitelline veins, the umbilical veins and the cardinal veins, all of which empty into the sinus venosus.
About 98.5% of the oxygen in a sample of arterial blood in a healthy human, breathing air at sea-level pressure, is chemically combined with hemoglobin molecules. About 1.5% is physically dissolved in the other blood liquids and not connected to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in mammals and many other species.
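The 98.5%/1.5% split can be reproduced with a commonly used physiological approximation for arterial oxygen content, CaO2 ≈ 1.34 × [Hb] × SaO2 + 0.003 × PaO2 (mL O2 per dL of blood). The approximation and the haemoglobin, saturation and oxygen-tension values below are standard textbook assumptions rather than figures from this text.

```python
# Split of arterial oxygen between haemoglobin-bound and dissolved fractions,
# using the common approximation CaO2 ~= (1.34 * Hb * SaO2) + (0.003 * PaO2).
# All input values are assumed typical figures for a healthy adult at sea level.
hb_g_per_dl = 15.0    # haemoglobin concentration (assumed)
sa_o2 = 0.98          # arterial oxygen saturation (assumed)
pa_o2_mmhg = 100.0    # arterial oxygen partial pressure (assumed)

bound = 1.34 * hb_g_per_dl * sa_o2   # mL O2 per dL carried by haemoglobin (~19.7)
dissolved = 0.003 * pa_o2_mmhg       # mL O2 per dL physically dissolved (~0.3)
total = bound + dissolved

print(f"bound: {bound / total:.1%}, dissolved: {dissolved / total:.1%}")
# Prints roughly "bound: 98.5%, dissolved: 1.5%", matching the proportions above.
```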
Many diseases affect the circulatory system. These include a number of cardiovascular diseases, affecting the cardiovascular system, and lymphatic diseases, affecting the lymphatic system. Cardiologists are medical professionals who specialise in the heart, and cardiothoracic surgeons specialise in operating on the heart and its surrounding areas. Vascular surgeons focus on other parts of the circulatory system.
Diseases affecting the cardiovascular system are called cardiovascular disease.
Many of these diseases are called "lifestyle diseases" because they develop over time and are related to a person's exercise habits, diet, whether they smoke, and other lifestyle choices a person makes. Atherosclerosis is the precursor to many of these diseases. It is a condition in which small atheromatous plaques build up in the walls of medium and large arteries; these plaques may eventually grow or rupture and occlude the arteries. Atherosclerosis is also a risk factor for acute coronary syndromes, which are diseases characterised by a sudden deficit of oxygenated blood to the heart tissue, and is associated with problems such as aneurysm formation or splitting ("dissection") of arteries.
Another major cardiovascular disease involves the creation of a clot, called a "thrombus". These can originate in veins or arteries. Deep venous thrombosis, which mostly occurs in the legs, is one cause of clots forming in the veins, particularly when a person has been stationary for a long time. These clots may embolise, meaning they travel to another location in the body. The results of this may include pulmonary embolus, transient ischaemic attacks, or stroke.
Cardiovascular diseases may also be congenital in nature, such as heart defects or persistent fetal circulation, where the circulatory changes that are supposed to happen after birth do not. Not all congenital changes to the circulatory system are associated with disease; a large number are anatomical variations.
The function and health of the circulatory system and its parts are measured in a variety of manual and automated ways. These include simple methods such as those that are part of the cardiovascular examination, including the taking of a person's pulse as an indicator of a person's heart rate, the taking of blood pressure through a sphygmomanometer or the use of a stethoscope to listen to the heart for murmurs which may indicate problems with the heart's valves. An electrocardiogram can also be used to evaluate the way in which electricity is conducted through the heart.
Other more invasive means can also be used. A cannula or catheter inserted into an artery may be used to measure pulse pressure or pulmonary wedge pressures. Angiography, which involves injecting a dye into an artery to visualise an arterial tree, can be used in the heart (coronary angiography) or brain. At the same time as the arteries are visualised, blockages or narrowings may be fixed through the insertion of stents, and active bleeds may be managed by the insertion of coils. An MRI may be used to image arteries, called an MRI angiogram. For evaluation of the blood supply to the lungs a CT pulmonary angiogram may be used.
Vascular ultrasonography, including Doppler and duplex studies, can also be used to assess blood flow in the arteries and veins.
A number of surgical procedures are also performed on the circulatory system.
Cardiovascular procedures are more likely to be performed in an inpatient setting than in an ambulatory care setting; in the United States, only 28% of cardiovascular surgeries were performed in the ambulatory care setting.[13]
In Ancient Greece, the heart was thought to be the source of innate heat for the body.
The circulatory system as we know it was discovered by William Harvey.
While humans, as well as other vertebrates, have a closed cardiovascular system (meaning that the blood never leaves the network of arteries, veins and capillaries), some invertebrate groups have an open cardiovascular system. The lymphatic system, on the other hand, is an open system providing an accessory route for excess interstitial fluid to be returned to the blood.[5] The more primitive, diploblastic animal phyla lack circulatory systems.
The blood vascular system first appeared probably in an ancestor of the triploblasts over 600 million years ago, overcoming the time-distance constraints of diffusion, while endothelium evolved in an ancestral vertebrate some 540–510 million years ago.[14]
In arthropods, the open circulatory system is a system in which a fluid in a cavity called the hemocoel bathes the organs directly with oxygen and nutrients and there is no distinction between blood and interstitial fluid; this combined fluid is called hemolymph or haemolymph.[15] Muscular movements by the animal during locomotion can facilitate hemolymph movement, but diverting flow from one area to another is limited. When the heart relaxes, blood is drawn back toward the heart through open-ended pores (ostia).
Hemolymph fills all of the interior hemocoel of the body and surrounds all cells. Hemolymph is composed of water, inorganic salts (mostly sodium, chloride, potassium, magnesium, and calcium), and organic compounds (mostly carbohydrates, proteins, and lipids). The primary oxygen transporter molecule is hemocyanin.
There are free-floating cells, the hemocytes, within the hemolymph. They play a role in the arthropod immune system.
The circulatory systems of all vertebrates, as well as of annelids (for example, earthworms) and cephalopods (squids, octopuses and relatives) are closed, just as in humans. Still, the systems of fish, amphibians, reptiles, and birds show various stages of the evolution of the circulatory system.[16]
In fish, the system has only one circuit, with the blood being pumped through the capillaries of the gills and on to the capillaries of the body tissues. This is known as single cycle circulation. The heart of fish is, therefore, only a single pump (consisting of two chambers).
In amphibians and most reptiles, a double circulatory system is used, but the heart is not always completely separated into two pumps. Amphibians have a three-chambered heart.
In reptiles, the ventricular septum of the heart is incomplete and the pulmonary artery is equipped with a sphincter muscle. This allows a second possible route of blood flow. Instead of blood flowing through the pulmonary artery to the lungs, the sphincter may be contracted to divert this blood flow through the incomplete ventricular septum into the left ventricle and out through the aorta. This means the blood flows from the capillaries to the heart and back to the capillaries instead of to the lungs. This process is useful to ectothermic (cold-blooded) animals in the regulation of their body temperature.
Birds, mammals, and crocodilians show complete separation of the heart into two pumps, for a total of four heart chambers; it is thought that the four-chambered heart of birds and crocodilians evolved independently from that of mammals.[17]
Circulatory systems are absent in some animals, including flatworms. Their body cavity has no lining or enclosed fluid. Instead a muscular pharynx leads to an extensively branched digestive system that facilitates direct diffusion of nutrients to all cells. The flatworm's dorso-ventrally flattened body shape also restricts the distance of any cell from the digestive system or the exterior of the organism. Oxygen can diffuse from the surrounding water into the cells, and carbon dioxide can diffuse out. Consequently, every cell is able to obtain nutrients, water and oxygen without the need of a transport system.
Some animals, such as jellyfish, have more extensive branching from their gastrovascular cavity (which functions as both a place of digestion and a form of circulation); this branching allows bodily fluids to reach the outer layers, since digestion begins in the inner layers.
The earliest known writings on the circulatory system are found in the Ebers Papyrus (16th century BCE), an ancient Egyptian medical papyrus containing over 700 prescriptions and remedies, both physical and spiritual. The papyrus acknowledges the connection of the heart to the arteries. The Egyptians thought air came in through the mouth and into the lungs and heart. From the heart, the air travelled to every member through the arteries. Although this concept of the circulatory system is only partially correct, it represents one of the earliest accounts of scientific thought.
In the 6th century BCE, the circulation of vital fluids through the body was known to the Ayurvedic physician Sushruta in ancient India.[18] He also seems to have possessed knowledge of the arteries, described as 'channels' by Dwivedi & Dwivedi (2007).[18] The valves of the heart were discovered by a physician of the Hippocratean school around the 4th century BCE, although their function was not properly understood then. Because blood pools in the veins after death, arteries look empty. Ancient anatomists assumed they were filled with air and that they were for the transport of air.
The Greek physician, Herophilus, distinguished veins from arteries but thought that the pulse was a property of arteries themselves. Greek anatomist Erasistratus observed that arteries that were cut during life bleed. He ascribed the fact to the phenomenon that air escaping from an artery is replaced with blood that entered by very small vessels between veins and arteries. Thus he apparently postulated capillaries but with reversed flow of blood.[19]
In 2nd-century AD Rome, the Greek physician Galen knew that blood vessels carried blood and identified venous (dark red) and arterial (brighter and thinner) blood, each with distinct and separate functions. Growth and energy were derived from venous blood created in the liver from chyle, while arterial blood gave vitality by containing pneuma (air) and originated in the heart. Blood flowed from both creating organs to all parts of the body, where it was consumed, and there was no return of blood to the heart or liver. The heart did not pump blood around; its motion sucked blood in during diastole, and the blood moved by the pulsation of the arteries themselves.
Galen believed that arterial blood was created by venous blood passing from the right ventricle to the left through 'pores' in the interventricular septum, while air passed from the lungs via the pulmonary artery to the left side of the heart. As the arterial blood was created, 'sooty' vapors were produced and passed to the lungs, also via the pulmonary artery, to be exhaled.
In 1025, The Canon of Medicine by the Persian physician, Avicenna, "erroneously accepted the Greek notion regarding the existence of a hole in the ventricular septum by which the blood traveled between the ventricles." Despite this, Avicenna "correctly wrote on the cardiac cycles and valvular function", and "had a vision of blood circulation" in his Treatise on Pulse.[20][verification needed] While also refining Galen's erroneous theory of the pulse, Avicenna provided the first correct explanation of pulsation: "Every beat of the pulse comprises two movements and two pauses. Thus, expansion : pause : contraction : pause. [...] The pulse is a movement in the heart and arteries ... which takes the form of alternate expansion and contraction."[21]
In 1242, the Arabian physician, Ibn al-Nafis, became the first person to accurately describe the process of pulmonary circulation, for which he is sometimes considered the father of circulatory physiology.[22][failed verification] Ibn al-Nafis stated in his Commentary on Anatomy in Avicenna's Canon:
"...the blood from the right chamber of the heart must arrive at the left chamber but there is no direct pathway between them. The thick septum of the heart is not perforated and does not have visible pores as some people thought or invisible pores as Galen thought. The blood from the right chamber must flow through the vena arteriosa (pulmonary artery) to the lungs, spread through its substances, be mingled there with air, pass through the arteria venosa (pulmonary vein) to reach the left chamber of the heart and there form the vital spirit..."
In addition, Ibn al-Nafis had an insight into what would become a larger theory of the capillary circulation. He stated that "there must be small communications or pores (manafidh in Arabic) between the pulmonary artery and vein," a prediction that preceded the discovery of the capillary system by more than 400 years.[23] Ibn al-Nafis' theory, however, was confined to blood transit in the lungs and did not extend to the entire body.
Michael Servetus was the first European to describe the function of pulmonary circulation, although his achievement was not widely recognized at the time, for a few reasons. He first described it in the "Manuscript of Paris"[24][25] (c. 1546), but that work was never published. He later published the description, but in a theological treatise, Christianismi Restitutio, rather than in a book on medicine. Only three copies of the book survived, and these remained hidden for decades; the rest were burned shortly after its publication in 1553 because of the persecution of Servetus by religious authorities.
A better-known description of the pulmonary circulation was provided by Vesalius's successor at Padua, Realdo Colombo, in 1559.
Finally, the English physician William Harvey, a pupil of Hieronymus Fabricius (who had earlier described the valves of the veins without recognizing their function), performed a sequence of experiments and published his Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus in 1628, which "demonstrated that there had to be a direct connection between the venous and arterial systems throughout the body, and not just the lungs. Most importantly, he argued that the beat of the heart produced a continuous circulation of blood through minute connections at the extremities of the body. This is a conceptual leap that was quite different from Ibn al-Nafis' refinement of the anatomy and bloodflow in the heart and lungs."[26] This work, with its essentially correct exposition, slowly convinced the medical world. However, Harvey was not able to identify the capillary system connecting arteries and veins; these were later discovered by Marcello Malpighi in 1661.
In 1956, André Frédéric Cournand, Werner Forssmann and Dickinson W. Richards were awarded the Nobel Prize in Medicine "for their discoveries concerning heart catheterization and pathological changes in the circulatory system."[27]
In his Nobel lecture, Forssmann credited Harvey with founding cardiology through the publication of his book in 1628.[28]
In the 1970s, Diana McSherry developed computer-based systems to create images of the circulatory system and heart without the need for surgery.[29]
en/1144.html.txt
ADDED
@@ -0,0 +1,130 @@
en/1145.html.txt
ADDED
@@ -0,0 +1,53 @@
The Circus Maximus (Latin for greatest or largest circus; Italian: Circo Massimo) is an ancient Roman chariot-racing stadium and mass entertainment venue located in Rome, Italy. Situated in the valley between the Aventine and Palatine hills, it was the first and largest stadium in ancient Rome and its later Empire. It measured 621 m (2,037 ft) in length and 118 m (387 ft) in width and could accommodate over 150,000 spectators.[2] In its fully developed form, it became the model for circuses throughout the Roman Empire. The site is now a public park.
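The metric and imperial dimensions quoted above can be cross-checked with a simple unit conversion; the snippet below is only a sanity check of the quoted figures and adds no new data.

```python
# Sanity check of the quoted Circus Maximus dimensions (metres vs feet).
METRES_PER_FOOT = 0.3048

def metres_to_feet(metres: float) -> float:
    return metres / METRES_PER_FOOT

print(f"Length: {metres_to_feet(621):.0f} ft")  # ~2037 ft, matching the quoted 2,037 ft
print(f"Width:  {metres_to_feet(118):.0f} ft")  # ~387 ft, matching the quoted 387 ft
```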
The Circus was Rome's largest venue for ludi, public games connected to Roman religious festivals. Ludi were sponsored by leading Romans or the Roman state for the benefit of the Roman people (populus Romanus) and gods. Most were held annually or at annual intervals on the Roman calendar. Others might be given to fulfill a religious vow, such as the games in celebration of a triumph. In Roman tradition, the earliest triumphal ludi at the Circus were vowed by Tarquin the Proud to Jupiter in the late Regal era for his victory over Pometia.[3]
Ludi ranged in duration and scope from one-day or even half-day events to spectacular multi-venue celebrations held over several days, with religious ceremonies and public feasts, horse and chariot racing, athletics, plays and recitals, beast-hunts and gladiator fights. Some included public executions. The greater ludi (Latin for "games")[4] at the Circus began with a flamboyant parade (pompa circensis), much like the triumphal procession, which marked the purpose of the games and introduced the participants.[5]
During Rome's Republican era, the aediles organized the games. The most costly and complex of the ludi offered opportunities to assess an aedile's competence, generosity, and fitness for higher office.[6] Some Circus events, however, seem to have been relatively small and intimate affairs. In 167 BC, "flute players, scenic artists and dancers" performed on a temporary stage, probably erected between the two central seating banks. Others were enlarged at enormous expense to fit the entire space. A venatio held there in 169 BC, one of several in the 2nd century, employed "63 leopards and 40 bears and elephants", with spectators presumably kept safe by a substantial barrier.[7]
As Rome's provinces expanded, existing ludi were embellished and new ludi invented by politicians who competed for divine and popular support. By the late Republic, ludi were held on 57 days of the year;[8] an unknown number of these would have required full use of the Circus. On many other days, charioteers and jockeys would need to practice on its track. Otherwise, it would have made a convenient corral for the animals traded in the nearby cattle market, just outside the starting gate. Beneath the outer stands, next to the Circus' multiple entrances, were workshops and shops. When no games were being held, the Circus at the time of Catullus (mid-1st century BC) was likely "a dusty open space with shops and booths ... a colourful crowded disreputable area"[9] frequented by "prostitutes, jugglers, fortune tellers and low-class performing artists."[10]
Rome's emperors met the ever-burgeoning popular demand for regular ludi and the need for more specialised venues, as essential obligations of their office and cult. Over the several centuries of its development, the Circus Maximus became Rome's paramount specialist venue for chariot races. By the late 1st century AD, the Colosseum had been built to host most of the city's gladiator shows and smaller beast-hunts, and most track-athletes competed at the purpose-designed Stadium of Domitian, though long-distance foot races were still held at the Circus.[11] Eventually, 135 days of the year were devoted to ludi.[8]
Even at the height of its development as a chariot-racing circuit, the circus remained the most suitable space in Rome for religious processions on a grand scale, and was the most popular venue for large-scale venationes;[12] in the late 3rd century, the emperor Probus laid on a spectacular Circus show in which beasts were hunted through a veritable forest of trees, on a specially built stage.[13] With the advent of Christianity as the official religion of the Empire, ludi gradually fell out of favour. The last known beast-hunt at the Circus Maximus took place in 523, and the last known races there were held by Totila in 549.[14]
The Circus Maximus was sited on the level ground of the Valley of Murcia (Vallis Murcia), between Rome's Aventine and Palatine Hills. In Rome's early days, the valley would have been rich agricultural land, prone to flooding from the river Tiber and the stream which divided the valley. The stream was probably bridged at an early date, at the two points where the track had to cross it, and the earliest races would have been held within an agricultural landscape, "with nothing more than turning posts, banks where spectators could sit, and some shrines and sacred spots".[15]
In Livy's history of Rome, the first Etruscan king of Rome, Lucius Tarquinius Priscus, built raised, wooden perimeter seating at the Circus for Rome's highest echelons (the equites and patricians), probably midway along the Palatine straight, with an awning against the sun and rain. His grandson, Tarquinius Superbus, added the first seating for citizen-commoners (plebs, or plebeians), either adjacent or on the opposite, Aventine side of the track.[16] Otherwise, the Circus was probably still little more than a trackway through surrounding farmland. By this time, it may have been drained[17] but the wooden stands and seats would have frequently rotted and been rebuilt. The turning posts (metae), each made of three conical stone pillars, may have been the earliest permanent Circus structures; an open drainage canal between the posts would have served as a dividing barrier.[18]
|
18 |
+
|
19 |
+
The games' sponsor (Latin editor) usually sat beside the images of attending gods, on a conspicuous, elevated stand (pulvinar) but seats at the track's perimeter offered the best, most dramatic close-ups. In 494 BC (very early in the Republican era) the dictator Manius Valerius Maximus and his descendants were granted rights to a curule chair at the southeastern turn, an excellent viewpoint for the thrills and spills of chariot racing.[19] In the 190s BC, stone track-side seating was built, exclusively for senators.[20]
|
20 |
+
|
21 |
+
Permanent wooden starting stalls were built in 329 BC. They were gated, brightly painted,[21] and staggered to equalise the distances from each start place to the central barrier. In theory, they might have accommodated up to 25 four-horse chariots (quadrigae) abreast, but when team-racing was introduced,[22] they were widened and their number reduced. By the late Republican or early Imperial era, there were twelve stalls. Their divisions were fronted by herms that served as stops for spring-loaded gates, so that twelve lightweight, four-horse or two-horse chariots could be simultaneously released onto the track. The stalls were allocated by lottery, and the various racing teams were identified by their colors.[23] Typically, there were seven laps per race.[24] From at least 174 BC, laps were counted off using large sculpted eggs. In 33 BC, an additional system of large bronze dolphin-shaped lap counters was added, positioned well above the central dividing barrier (euripus) for maximum visibility.[25]
|
22 |
+
|
23 |
+
Julius Caesar's development of the Circus, commencing around 50 BC, extended the seating tiers to run almost the entire circuit of the track, barring the starting gates and a processional entrance at the semi-circular end.[26] The track measured approximately 621 m (2,037 ft) in length and 118 m (387 ft) in breadth. A canal was cut between the track perimeter and its seating to protect spectators and help drain the track.[27] The inner third of the seating formed a trackside cavea. Its front sections along the central straight were reserved for senators, and those immediately behind for equites. The outer tiers, two thirds of the total, were meant for Roman plebs and non-citizens. They were timber-built, with wooden-framed service buildings, shops and entrance-ways beneath. The total number of seats is uncertain, but was probably in the order of 150,000;[28] Pliny the Elder's estimate of 250,000 is unlikely. The wooden bleachers were damaged in a fire of 31 BC, either during or after construction.[29]
|
24 |
+
|
25 |
+
The fire damage of 31 BC was probably repaired by Augustus (Caesar's successor and Rome's first emperor). He modestly claimed credit only for an obelisk and pulvinar at the site, but both were major projects. Ever since its quarrying, long before Rome existed, the obelisk had been sacred to Egyptian Sun-gods.[31] Augustus had it brought from Heliopolis[32] at enormous expense, and erected midway along the dividing barrier of the Circus. It was Rome's first obelisk, an exotically sacred object and a permanent reminder of Augustus' victory over his Roman foes and their Egyptian allies in the recent civil wars. Thanks to him, Rome had secured both a lasting peace and a new Egyptian province. The pulvinar was built on a monumental scale, a shrine or temple (aedes) raised high above the trackside seats. Sometimes, while games were in progress, Augustus watched from there, alongside the gods. Occasionally, his family would join him there. This is the Circus described by Dionysius of Halicarnassus as "one of the most beautiful and admirable structures in Rome", with "entrances and ascents for the spectators at every shop, so that the countless thousands of people may enter and depart without inconvenience."[33]
|
26 |
+
|
27 |
+
The site remained prone to flooding,[34] probably through the starting gates, until Claudius made improvements there; they probably included an extramural anti-flooding embankment. Fires in the crowded, wooden perimeter workshops and bleachers were a far greater danger. A fire of 36 AD seems to have started in a basket-maker's workshop under the stands, on the Aventine side; the emperor Tiberius compensated various small businesses there for their losses.[35] In AD 64, during Nero's reign, fire broke out at the semi-circular end of the Circus, swept through the stands and shops, and destroyed much of the city. Games and festivals continued at the Circus, which was rebuilt over several years to the same footprint and design.[36]
|
28 |
+
|
29 |
+
By the late 1st century AD, the central dividing barrier comprised a series of water basins, or else a single watercourse open in some places and bridged over in others. It offered opportunities for artistic embellishment and decorative swagger, and included the temples and statues of various deities, fountains, and refuges for those assistants involved in more dangerous circus activities, such as beast-hunts and the recovery of casualties during races.[37]
|
30 |
+
|
31 |
+
In AD 81 the Senate built a triple arch honoring Titus at the semi-circular end of the Circus, to replace or augment a former processional entrance.[38] The emperor Domitian built a new, multi-storey palace on the Palatine, connected somehow to the Circus; he likely watched the games in autocratic style, from high above and barely visible to those below. Repairs to fire damage during his reign may already have been under way before his assassination.[39]
|
32 |
+
|
33 |
+
The risk of further fire-damage, coupled with Domitian's fate, may have prompted Trajan's decision to rebuild the Circus entirely in stone, and provide a new pulvinar in the stands where Rome's emperor could be seen and honoured as part of the Roman community, alongside their gods. Under Trajan, the Circus Maximus found its definitive form, which was unchanged thereafter save for some monumental additions by later emperors, an extensive, planned rebuilding of the starting gate area under Caracalla, and repairs and renewals to existing fabric. Some repairs were unforeseen and extensive, such as those carried out in Diocletian's reign, after the collapse of a seating section killed some 13,000 people.[40]
|
34 |
+
|
35 |
+
The southeastern turn of the track ran between two shrines which may have predated the Circus' formal development. One, located at the outer southeast perimeter, was dedicated to the valley's eponymous goddess Murcia, an obscure deity associated with Venus, the myrtle shrub, a sacred spring, the stream that divided the valley, and the lesser peak of the Aventine Hill.[41] The other was at the southeastern turning-post, where there was an underground shrine to Consus, a minor god of grain-stores, connected to the grain-goddess Ceres and to the underworld. According to Roman tradition, Romulus discovered this shrine shortly after the founding of Rome. He invented the Consualia festival as a way of gathering his Sabine neighbours at a celebration that included horse-races and drinking. During these distractions, Romulus's men abducted the Sabine daughters as brides. Thus the famous Roman myth of the Rape of the Sabine women had as its setting the Circus and the Consualia.
|
36 |
+
|
37 |
+
In this quasi-legendary era, horse or chariot races would have been held at the Circus site. The track width may have been determined by the distance between Murcia's and Consus' shrines at the southeastern end, and its length by the distance between these two shrines and Hercules' Ara Maxima, supposedly older than Rome itself and sited behind the Circus' starting place.[42] The position of Consus' shrine at the turn of the track recalls the placing of shrines to Roman Neptune's Greek equivalent, Poseidon, in Greek hippodromes.[43] In later developments, the altar of Consus, as one of the Circus' patron deities, was incorporated into the fabric of the south-eastern turning post. When Murcia's stream was partly built over, to form a dividing barrier (the spina or euripus)[44] between the turning posts, her shrine was either retained or rebuilt. In the Late Imperial period, both the southeastern turn and the circus itself were sometimes known as Vallis Murcia.[45] The symbols used to count race-laps also held religious significance; Castor and Pollux, who were born from an egg, were patrons of horses, horsemen, and the equestrian order (equites). Likewise, the later use of dolphin-shaped lap counters reinforced associations between the races, swiftness, and Neptune, as god of earthquakes and horses; the Romans believed dolphins to be the swiftest of all creatures.[25] When the Romans adopted the Phrygian Great Mother as an ancestral deity, a statue of her on lion-back was erected within the circus, probably on the dividing barrier.
|
38 |
+
|
39 |
+
Sun and Moon cults were probably represented at the Circus from its earliest phases. Their importance grew with the introduction of Roman cult to Apollo, and the development of Stoic and solar monism as a theological basis for the Roman Imperial cult. In the Imperial era, the Sun-god was divine patron of the Circus and its games. His sacred obelisk towered over the arena, set in the central barrier, close to his temple and the finishing line. The Sun-god was the ultimate, victorious charioteer, driving his four-horse chariot (quadriga) through the heavenly circuit from sunrise to sunset. His partner Luna drove her two-horse chariot (biga); together, they represented the predictable, orderly movement of the cosmos and the circuit of time, which found analogy in the Circus track.[46] In Imperial cosmology, the emperor was Sol-Apollo's earthly equivalent, and Luna may have been linked to the empress.[citation needed] Luna's temple, built long before Apollo's, burned down in the Great Fire of 64 AD and was probably not replaced. Her cult was closely identified with that of Diana, who seems to have been represented in the processions that started Circus games, and with Sol Indiges, usually identified as her brother. After the loss of her temple, her cult may have been transferred to Sol's temple on the dividing barrier, or one beside it; both would have been open to the sky.[47]
|
40 |
+
|
41 |
+
Temples to several deities overlooked the Circus; most are now lost. The temples to Ceres and Flora stood close together on the Aventine, more or less opposite the Circus' starting gate, which remained under Hercules' protection. Further southeast along the Aventine was a temple to Luna, the moon goddess. Aventine temples to Venus Obsequens, Mercury and Dis (or perhaps Summanus) stood on the slopes above the southeast turn. On the Palatine hill, opposite to Ceres's temple, stood the temple to Magna Mater and, more or less opposite Luna's temple, one to the sun-god Apollo.
|
42 |
+
|
43 |
+
Several festivals, some of uncertain foundation and date, were held at the Circus in historical times. The Consualia, with its semi-mythical establishment by Romulus, and the Cerealia, the major festival of Ceres, were probably older than the earliest historically attested "Roman Games" (Ludi Romani) held at the Circus in honour of Jupiter in 366 BC.[48] In the early Imperial era, Ovid describes the opening of Cerealia (mid to late April) with a horse race at the Circus,[49] followed by the nighttime release of foxes into the stadium, their tails ablaze with lighted torches.[50] Some early connection is likely between Ceres as goddess of grain crops and Consus as a god of grain storage and patron of the Circus.
|
44 |
+
|
45 |
+
After the 6th century, the Circus fell into disuse and decay, and was quarried for building materials. The lower levels, ever prone to flooding, were gradually buried under waterlogged alluvial soil and accumulated debris, so that the original track is now buried 6 metres beneath the modern surface. In the 11th century, the Circus was "replaced by dwellings rented out by the congregation of Saint-Guy."[51] In the 12th century, a watercourse was dug there to drain the soil, and by the 16th century the area was used as a market garden.[52] Many of the Circus's standing structures survived these changes; in 1587, two obelisks were removed from the central barrier by Pope Sixtus V, and one of these was re-sited at the Piazza del Popolo.[32] In 1852 a gas works was built on the site by the Anglo-Italian Gas Society. It remained in situ until 1910, when it was relocated to the edge of Rome.[53] Mid-19th-century workings at the circus site uncovered the lower parts of a seating tier and outer portico. Since then, a series of excavations has exposed further sections of the seating, curved turn and central barrier, but further exploration has been limited by the scale, depth and waterlogging of the site.[1]
|
46 |
+
|
47 |
+
The Circus site now functions as a large park area in the centre of the city. It is often used for concerts and meetings. The Rome concert of Live 8 (July 2, 2005) was held there. The English band Genesis performed a concert before an estimated audience of 500,000 people in 2007 (this was filmed and released as When in Rome 2007). The Rolling Stones played there in front of 71,527 people on June 22, 2014 for the Italian date of their 14 On Fire tour. The Circus has also hosted victory celebrations, following Italy's 2006 World Cup victory and A.S. Roma's Serie A victories in 1983 and 2001. In May 2019, a new virtual/augmented reality experience, the Circo Maximo Experience, opened on the site, taking visitors on a journey through the site and its history.
|
48 |
+
|
49 |
+
AP Archive: video of celebrations after Italy won the 2006 World Cup Finals in Germany (https://www.youtube.com/watch?v=t8Thra4T80c; http://www.aparchive.com/metadata/youtube/c195a43af063a416da53a7ec15430ef2).
|
50 |
+
|
51 |
+
Media related to Circus Maximus at Wikimedia Commons
|
52 |
+
|
53 |
+
Coordinates: 41°53′09″N 12°29′09″E / 41.8859°N 12.4857°E / 41.8859; 12.4857
|
en/1146.html.txt
ADDED
@@ -0,0 +1,53 @@
en/1147.html.txt
ADDED
@@ -0,0 +1,112 @@
1 |
+
|
2 |
+
|
3 |
+
A circus is a company of performers who put on diverse entertainment shows that may include clowns, acrobats, trained animals, trapeze acts, musicians, dancers, hoopers, tightrope walkers, jugglers, magicians, unicyclists, as well as other object manipulation and stunt-oriented artists. The term circus also describes the performance which has followed various formats through its 250-year modern history. Although not the inventor of the medium, Philip Astley is credited as the father of the modern circus. In 1768 Astley, a skilled equestrian, began performing exhibitions of trick horse riding in an open field called Ha'Penny Hatch on the south side of the Thames River.[1] In 1770 he hired acrobats, tightrope walkers, jugglers and a clown to fill in the pauses between the equestrian demonstrations and thus chanced on the format which was later named a "circus". Performances developed significantly over the next fifty years, with large-scale theatrical battle reenactments becoming a significant feature. The traditional format, in which a ringmaster introduces a variety of choreographed acts set to music, developed in the latter part of the 19th century and remained the dominant format until the 1970s.
|
4 |
+
|
5 |
+
As styles of performance have developed since the time of Astley, so too have the types of venues where these circuses have performed. The earliest modern circuses were performed in open-air structures with limited covered seating. From the late 18th to late 19th century, custom-made circus buildings (often wooden) were built with various types of seating, a centre ring, and sometimes a stage. The traditional large tents commonly known as "big tops" were introduced in the mid-19th century as touring circuses superseded static venues. These tents eventually became the most common venue. Contemporary circuses perform in a variety of venues including tents, theatres and casinos. Many circus performances are still held in a ring, usually 13 m (42 ft) in diameter. This dimension was adopted by Astley in the late 18th century as the minimum diameter that enabled an acrobatic horse rider to stand upright on a cantering horse to perform their tricks.
|
6 |
+
|
7 |
+
Contemporary circus has been credited with a revival of the circus tradition since the late 1970s, when a number of groups began to experiment with new circus formats and aesthetics, typically avoiding the use of animals to focus exclusively on human artistry. Circuses within the movement have tended to favor a theatrical approach, combining character-driven circus acts with original music in a broad variety of styles to convey complex themes or stories. Contemporary circus continues to develop new variations on the circus tradition while absorbing new skills, techniques and stylistic influences from other performing arts.
|
8 |
+
|
9 |
+
First attested in English in the 14th century, the word circus derives from Latin circus,[2] which is the romanization of the Greek κίρκος (kirkos), itself a metathesis of the Homeric Greek κρίκος (krikos), meaning "circle" or "ring".[3] In the book De Spectaculis, the early Christian writer Tertullian claimed that the first circus games were staged by the goddess Circe in honour of her father Helios, the Sun God.[4]
|
10 |
+
|
11 |
+
The modern and commonly held idea of a circus is of a Big Top with various acts providing entertainment therein. However, the history of circuses is more complex: historians disagree on its origin, and the history itself continues to be revised as research methods change and the circus phenomenon evolves. For many, circus history begins with the Englishman Philip Astley, while for others its origins go back much further, to Roman times.
|
12 |
+
|
13 |
+
In Ancient Rome, the circus was a building for the exhibition of horse and chariot races, equestrian shows, staged battles, gladiatorial combat and displays of (and fights with) trained animals. The circuses of Rome were similar to the ancient Greek hippodromes, although circuses served varying purposes and differed in design and construction, and for events that involved re-enactments of naval battles, the circus was flooded with water. The Roman circus buildings were, however, not circular but rectangular with semi-circular ends. The lower seats were reserved for persons of rank; there were also various state boxes for the giver of the games and his friends. The circus was the only public spectacle at which men and women were not separated. Some circus historians, such as George Speaight, have stated that "these performances may have taken place in the great arenas that were called 'circuses' by the Romans, but it is a mistake to equate these places, or the entertainments presented there, with the modern circus".[5] Others have argued that the lineage of the circus does go back to the Roman circuses, and that a chronology of circus-related entertainment can be traced from Roman times, continued by the Hippodrome of Constantinople that operated until the 13th century, through medieval and renaissance jesters, minstrels and troubadours, to the late 18th century and the time of Astley.[6][7]
|
14 |
+
|
15 |
+
The first circus in the city of Rome was the Circus Maximus, in the valley between the Palatine and Aventine hills. It was constructed during the monarchy and, at first, built completely from wood. After being rebuilt several times, the final version of the Circus Maximus could seat 250,000 people; it was built of stone and measured 400 m in length and 90 m in width.[8] Next in importance were the Circus Flaminius and the Circus Neronis, the latter obtaining its notoriety through the Circensian pleasures of Nero. A fourth circus was constructed by Maxentius; its ruins have helped archaeologists reconstruct the Roman circus.
|
16 |
+
|
17 |
+
For some time after the fall of Rome, large circus buildings fell out of use as centres of mass entertainment. Instead, itinerant performers, animal trainers and showmen travelled between towns throughout Europe, performing at local fairs.
|
18 |
+
|
19 |
+
The origin of the modern circus has been attributed to Philip Astley, who was born 1742 in Newcastle-under-Lyme, England. He became a cavalry officer who set up the first modern amphitheatre for the display of horse riding tricks in Lambeth, London on 4 April 1768.[9][10][11] Astley did not originate trick horse riding, nor was he first to introduce acts such as acrobats and clowns to the English public, but he was the first to create a space where all these acts were brought together to perform a show.[12] Astley rode in a circle rather than a straight line as his rivals did, and thus chanced on the format of performing in a circle.[13] Astley performed stunts in a 42 ft diameter ring, which is the standard size used by circuses ever since.[12] Astley referred to the performance arena as a circle and the building as an amphitheatre; these would later be known as a circus.[14] In 1770 Astley hired acrobats, tightrope walkers, jugglers and a clown to fill in the pauses between acts.[12]
|
20 |
+
|
21 |
+
Astley was followed by Andrew Ducrow, whose feats of horsemanship had much to do with establishing the traditions of the circus, which were perpetuated by Hengler's and Sanger's celebrated shows in a later generation. In England circuses were often held in purpose-built buildings in large cities, such as the London Hippodrome, which was built as a combination of the circus, the menagerie and the variety theatre, where wild animals such as lions and elephants from time to time appeared in the ring, and where convulsions of nature such as floods, earthquakes and volcanic eruptions have been produced with an extraordinary wealth of realistic display. Joseph Grimaldi, the first mainstream clown, had his first major role as Little Clown in the pantomime The Triumph of Mirth; or, Harlequin's Wedding in 1781.[15] The Royal Circus was opened in London on 4 November 1782 by Charles Dibdin (who coined the term "circus"),[16] aided by his partner Charles Hughes, an equestrian performer.[17] In 1782, Astley established the Amphithéâtre Anglais in Paris, the first purpose-built circus in France, followed by 18 other permanent circuses in cities throughout Europe.[18][19] Astley leased his Parisian circus to the Italian Antonio Franconi in 1793.[20] In 1826, the first circus took place under a canvas big top.[21]
|
22 |
+
|
23 |
+
The Scotsman John Bill Ricketts brought the first modern circus to the United States. He began his theatrical career with Hughes Royal Circus in London in the 1780s, and travelled from England in 1792 to establish his first circus in Philadelphia. The first circus building in the US opened on April 3, 1793 in Philadelphia, where Ricketts gave America's first complete circus performance.[22][23] George Washington attended a performance there later that season.[24]
|
24 |
+
|
25 |
+
In the Americas during the first two decades of the 19th century, the Circus of Pepin and Breschard toured from Montreal to Havana, building circus theatres in many of the cities it visited. Victor Pépin, a native New Yorker,[25] was the first American to operate a major circus in the United States.[26] Later the establishments of Purdy, Welch & Co., and of van Amburgh gave a wider popularity to the circus in the United States. In 1825, Joshuah Purdy Brown was the first circus owner to use a large canvas tent for the circus performance. Circus pioneer Dan Rice was the most famous pre-Civil War circus clown,[27] popularizing such expressions as "The One-Horse Show" and "Hey, Rube!". The American circus was revolutionized by P. T. Barnum and William Cameron Coup, who launched the travelling P. T. Barnum's Museum, Menagerie & Circus, the first freak show. Coup also introduced the first multiple-ring circuses, and was also the first circus entrepreneur to use circus trains to transport the circus between towns.
|
26 |
+
|
27 |
+
In 1838, the equestrian Thomas Taplin Cooke returned to England from the United States, bringing with him a circus tent.[28] At this time, itinerant circuses that could be fitted-up quickly were becoming popular in Britain. William Batty's circus, for example, between 1838 and 1840, travelled from Newcastle to Edinburgh and then to Portsmouth and Southampton. Pablo Fanque, who is noteworthy as Britain's only black circus proprietor and who operated one of the most celebrated travelling circuses in Victorian England, erected temporary structures for his limited engagements or retrofitted existing structures.[29] One such structure in Leeds, which Fanque assumed from a departing circus, collapsed, resulting in minor injuries to many but the death of Fanque's wife.[30][31] Three important circus innovators were the Italian Giuseppe Chiarini, and Frenchmen Louis Soullier and Jacques Tourniaire, whose early travelling circuses introduced the circus to Latin America, Australia, Southeast Asia, China, South Africa and Russia. Soullier was the first circus owner to introduce Chinese acrobatics to the European circus when he returned from his travels in 1866, and Tourniaire was the first to introduce the performing art to Ranga, where it became extremely popular.
|
28 |
+
|
29 |
+
After an 1881 merger with James Anthony Bailey and James L. Hutchinson's circus and Barnum's death in 1891, his circus travelled to Europe as the Barnum & Bailey Greatest Show On Earth, where it toured from 1897 to 1902, impressing other circus owners with its large scale, its touring techniques (including the tent and circus train), and its combination of circus acts, a zoological exhibition and a freak show. This format was adopted by European circuses at the turn of the 20th century.
|
30 |
+
|
31 |
+
The influence of the American circus brought about a considerable change in the character of the modern circus. In arenas too large for speech to be easily audible, the traditional comic dialog of the clown assumed a less prominent place than formerly, while the vastly increased wealth of stage properties relegated to the background the old-fashioned equestrian feats, which were replaced by more ambitious acrobatic performances, and by exhibitions of skill, strength and daring, requiring the employment of immense numbers of performers and often of complicated and expensive machinery.
|
32 |
+
|
33 |
+
From the late 19th century through the first half of the 20th century, travelling circuses were a major form of spectator entertainment in the US and attracted huge attention whenever they arrived in a city. After World War II, the popularity of the circus declined as new forms of entertainment (such as television) arrived and the public's tastes became more sophisticated. From the 1960s onward, circuses attracted growing criticism from animal rights activists. Many circuses went out of business or were forced to merge with other circus companies. Nonetheless, a good number of travelling circuses are still active in various parts of the world, ranging from small family enterprises to three-ring extravaganzas. Other companies found new ways to draw in the public with innovative new approaches to the circus form itself.
|
34 |
+
|
35 |
+
In 1919, Lenin, head of Soviet Russia, expressed a wish for the circus to become "the people's art-form", with facilities and status on par with theatre, opera and ballet. The USSR nationalized Russian circuses. In 1927, the State University of Circus and Variety Arts, better known as the Moscow Circus School, was established; performers were trained using methods developed from the Soviet gymnastics program. When the Moscow State Circus company began international tours in the 1950s, its levels of originality and artistic skill were widely applauded.
|
36 |
+
|
37 |
+
Circuses from China, drawing on Chinese traditions of acrobatics, like the Chinese State Circus are also popular touring acts.
|
38 |
+
|
39 |
+
Contemporary circus (originally known as nouveau cirque) is a performing arts movement that originated in the 1970s in Australia, Canada, France,[32] the West Coast of the United States, and the United Kingdom. Contemporary circus combines traditional circus skills and theatrical techniques to convey a story or theme. Compared with the traditional circus, the contemporary genre of circus tends to focus more attention on the overall aesthetic impact, on character and story development, and on the use of lighting design, original music, and costume design to convey thematic or narrative content. For aesthetic or economic reasons, contemporary circus productions may sometimes be staged in theatres rather than in large outdoor tents. Music used in the production is often composed exclusively for that production, and aesthetic influences are drawn as much from contemporary culture as from circus history. Animal acts appear rarely in contemporary circus, in contrast to traditional circus, where animal acts have often been a significant part of the entertainment.
|
40 |
+
|
41 |
+
Early pioneers of the contemporary circus genre included: Circus Oz, forged in Australia in 1977 from SoapBox Circus (1976) and New Circus (1973);[33] the Pickle Family Circus, founded in San Francisco in 1975; Ra-Ra Zoo in 1984 in London; Nofit State Circus in 1984 from Wales; Cirque du Soleil, founded in Quebec in 1984; Cirque Plume and Archaos from France in 1984 and 1986 respectively. More recent examples include: Cirque Éloize (founded in Quebec in 1993); Sweden's Cirkus Cirkör (1995); Teatro ZinZanni (founded in Seattle in 1998); the West African Circus Baobab (late 1990s);[34] and Montreal's Les 7 doigts de la main (founded in 2002).[35] The genre includes other circus troupes such as the Vermont-based Circus Smirkus (founded in 1987 by Rob Mermin) and Le Cirque Imaginaire (later renamed Le Cirque Invisible, both founded and directed by Victoria Chaplin, daughter of Charlie Chaplin).
|
42 |
+
|
43 |
+
The most conspicuous success story in the contemporary genre has been that of Cirque du Soleil, the Canadian circus company whose estimated annual revenue now exceeds US$810 million,[36] and whose nouveau cirque shows have been seen by nearly 90 million spectators in over 200 cities on five continents.[37]
|
44 |
+
|
45 |
+
A traditional circus performance is often led by a ringmaster who has a role similar to a Master of Ceremonies. The ringmaster presents performers, speaks to the audience, and generally keeps the show moving. The activity of the circus traditionally takes place within a ring; large circuses may have multiple rings, like the six-ringed Moscow State Circus. A circus often travels with its own band, whose instrumentation in the United States has traditionally included brass instruments, drums, glockenspiel, and sometimes the distinctive sound of the calliope.
|
46 |
+
|
47 |
+
Common acts include a variety of acrobatics, gymnastics (including tumbling and trampoline), aerial acts (such as trapeze, aerial silk, corde lisse), contortion, stilt-walking, and a variety of other routines. Juggling is one of the most common acts in a circus; the combination of juggling and gymnastics is called equilibristics and includes acts like plate spinning and the rolling globe. Acts like these are some of the most common and the most traditional. Clowns are common to most circuses and are typically skilled in many circus acts; "clowns getting into the act" is a very familiar theme in any circus. Famous circus clowns have included Austin Miles, the Fratellini Family, Rusty Russell, Emmett Kelly, Grock, and Bill Irwin.
|
48 |
+
|
49 |
+
Daredevil stunt acts, freak shows, and sideshow acts are also part of some circuses; these may include human cannonball, chapeaugraphy, fire eating, fire breathing, fire dancing, knife throwing, magic shows, sword swallowing, and strongman acts. Famous sideshow performers include Zip the Pinhead and The Doll Family. A popular sideshow attraction from the early 19th century was the flea circus, where fleas were attached to props and viewed through a Fresnel lens.
|
50 |
+
|
51 |
+
A variety of animals have historically been used in acts. While the types of animals used vary from circus to circus, big cats (namely lions, tigers, and leopards), camels, llamas, elephants, zebras, horses, donkeys, birds (like parrots, doves, and cockatoos), sea lions, bears, monkeys, and domestic animals such as cats and dogs are the most common.
|
60 |
+
|
61 |
+
The earliest involvement of animals in the circus was just the display of exotic creatures in a menagerie. Going as far back as the early eighteenth century, exotic animals were transported to North America for display, and menageries were a popular form of entertainment.[39] The first true animal acts in the circus were equestrian acts. Soon elephants and big cats were displayed as well. Isaac A. Van Amburgh entered a cage with several big cats in 1833, and is generally considered to be the first wild animal trainer in American circus history.[26] Mabel Stark was a famous female tiger-tamer.
|
62 |
+
|
63 |
+
Animal rights groups have documented many cases of animal cruelty in the training of performing circus animals.[41][42] The animal rights group People for the Ethical Treatment of Animals (PETA) contends that animals in circuses are frequently beaten into submission and that physical abuse has always been the method for training circus animals. It is also alleged that the animals are kept in cages that are too small and are given very little opportunity to walk around outside of their enclosure, thereby violating their right to freedom.
|
64 |
+
|
65 |
+
According to PETA, although the US Animal Welfare Act does not permit any sort of punishment that puts the animals in discomfort,[43] trainers will still go against this law and use such things as electric rods and bull hooks.[44] According to PETA, during an undercover investigation of Carson & Barnes Circus, video footage was captured showing animal care director Tim Frisco training endangered Asian elephants with electrical shock prods and instructing other trainers to "beat the elephants with a bullhook as hard as they can and sink the sharp metal hook into the elephant's flesh and twist it until they scream in pain".[44]
|
66 |
+
|
67 |
+
On behalf of the Ministry of Agriculture, Nature and Food Quality of the Netherlands, Wageningen University conducted an investigation into the welfare of circus animals in 2008.[45] The following issues, among others, were found:
|
68 |
+
|
69 |
+
Based on these findings, the researchers called for more stringent regulation regarding the welfare of circus animals. In 2012, the Dutch government announced a ban on the use of wild circus animals.[46]
|
70 |
+
|
71 |
+
In testimony in U.S. District Court in 2009, Ringling Bros. and Barnum & Bailey Circus CEO Kenneth Feld acknowledged that circus elephants are struck behind the ears, under the chin and on their legs with metal tipped prods, called bull hooks. Feld stated that these practices are necessary to protect circus workers. Feld also acknowledged that an elephant trainer was reprimanded for using an electric shock device, known as a hot shot or electric prod, on an elephant, which Feld also stated was appropriate practice. Feld denied that any of these practices harm elephants.[47] In its January 2010 verdict on the case, brought against Feld Entertainment International by the American Society for the Prevention of Cruelty to Animals et al., the Court ruled that evidence against the circus company was "not credible with regard to the allegations".[48] In lieu of a USDA hearing, Feld Entertainment Inc. (parent of Ringling Bros.) agreed to pay an unprecedented $270,000 fine for violations of the Animal Welfare Act that allegedly occurred between June 2007 and August 2011.[49]
|
72 |
+
|
73 |
+
A 14-year litigation against the Ringling Bros. and Barnum & Bailey Circus came to an end in 2014 when The Humane Society of the United States and a number of other animal rights groups paid a $16 million settlement to Feld Entertainment.[50] However, the circus closed in May 2017 after a 146-year run when it experienced a steep decline in ticket sales a year after it discontinued its elephant act and sent its pachyderms to a reserve.[51]
|
74 |
+
|
75 |
+
On February 1, 1992 at the Great American Circus in Palm Bay, Florida, an elephant named Janet (1965 – February 1, 1992) went out of control while giving a ride to a mother, her two children, and three other children. The elephant then stampeded through the circus grounds outside before being shot to death by police.[52] Also, during a Circus International performance in Honolulu, Hawaii on 20 August 1994, an elephant called Tyke (1974 – August 20, 1994) killed her trainer, Allen Campbell, and severely mauled her groomer, Dallas Beckwith, in front of hundreds of spectators. Tyke then bolted from the arena and ran through the streets of Kakaako for more than thirty minutes. Police fired 86 shots at Tyke, who eventually collapsed from the wounds and died.[53]
|
76 |
+
|
77 |
+
In December 2018, New Jersey became the first state in the U.S. to ban circuses, carnivals and fairs from featuring elephants, tigers and other exotic animals.[54]
|
78 |
+
|
79 |
+
In 1998 in the United Kingdom, a parliamentary working group chaired by MP Roger Gale studied living conditions and treatment of animals in UK circuses. All members of this group agreed that a change in the law was needed to protect circus animals. Gale told the BBC, "It's undignified and the conditions under which they are kept are woefully inadequate—the cages are too small, the environments they live in are not suitable and many of us believe the time has come for that practice to end." The group reported concerns about boredom and stress, and noted that an independent study by a member of the Wildlife Conservation Research Unit at Oxford University "found no evidence that circuses contribute to education or conservation."[55] However, in 2007, a different working group under the UK Department for Environment, Food and Rural Affairs, having reviewed information from experts representing both the circus industry and animal welfare, found an absence of "scientific evidence sufficient to demonstrate that travelling circuses are not compatible with meeting the welfare needs of any type of non-domesticated animal presently being used in the United Kingdom." According to that group's report, published in October 2007, "there appears to be little evidence to demonstrate that the welfare of animals kept in travelling circuses is any better or any worse than that of animals kept in other captive environments."[56]
|
80 |
+
|
81 |
+
A ban prohibiting the use of wild animals in circuses in Britain was due to be passed in 2015, but Conservative MP Christopher Chope repeatedly blocked the bill, reasoning that "The EU Membership Costs and Benefits bill should have been called by the clerk before the circuses bill, so I raised a point of order". He explained that the circus bill was "at the bottom of the list" for discussion.[57] The non-profit group Animal Defenders International dubbed this "a huge embarrassment for Britain that 30 other nations have taken action before us on this simple and popular measure".[58]
|
82 |
+
On May 1, 2019, Environment Secretary Michael Gove announced a new Bill to ban the use of wild animals in travelling circuses.[59]
|
83 |
+
|
84 |
+
There are nationwide bans on using some if not all animals in circuses in India, Iran, Israel, Singapore, Austria, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Czech Republic, Cyprus, Denmark, Estonia, Finland, Greece, Hungary, Ireland, Italy, Malta, Netherlands, Norway, Poland, Portugal, Slovenia, Sweden, Switzerland, Bolivia, Colombia, Costa Rica, Ecuador, El Salvador, Mexico, Panama, Paraguay and Peru.[60][61] Germany, Spain, United Kingdom, Australia, Argentina, Chile, Brazil, Canada, and the United States have locally restricted or banned the use of animals in entertainment.[61] In response to a growing popular concern about the use of animals in entertainment, animal-free circuses are becoming more common around the world.[62] In 2009, Bolivia passed legislation banning the use of any animals, wild or domestic, in circuses. The law states that circuses "constitute an act of cruelty." Circus operators had one year from the bill's passage on July 1, 2009 to comply.[63] In 2018 in Germany, an accident involving an elephant during a circus performance prompted calls to ban animal performances in circuses, and PETA called on German politicians to outlaw the keeping of animals for circuses.[64]
|
85 |
+
|
86 |
+
A survey found that, on average, wild circus animals spend around 91 to 99 percent of their time confined in cages, wagons, or enclosures because of the demands of transportation. This confinement causes considerable distress to the animals and can lead to signs such as excessive drooling.[65]
|
87 |
+
|
88 |
+
City ordinances banning performances by wild animals have been enacted in San Francisco (2015),[66] Los Angeles (2017),[67] and New York City (2017).[68] These bans extend to movies, TV shows, advertisements, petting zoos, and any showcase in which animals are in direct contact with the audience, on the grounds that such animals retain instincts humans cannot fully control and therefore pose a significant risk of harming spectators.[69]
|
89 |
+
|
90 |
+
Greece became the first European country to ban any animal from performing in any circus in its territory in February 2012, following a campaign by Animal Defenders International and the Greek Animal Welfare Fund (GAWF).[70]
|
91 |
+
|
92 |
+
On June 6, 2015, the Federation of Veterinarians of Europe adopted a position paper in which it recommends the prohibition of the use of wild animals in traveling circuses.[71][72]
|
93 |
+
|
94 |
+
Despite the contemporary circus' shift toward more theatrical techniques and its emphasis on human rather than animal performance, traditional circus companies still exist alongside the new movement. Numerous circuses continue to maintain animal performers, including UniverSoul Circus and the Big Apple Circus from the United States, Circus Krone from Munich, Circus Royale and Lennon Bros Circus from Australia, Vazquez Hermanos Circus, Circo Atayde Hermanos, and Hermanos Mayaror Circus[73] from Mexico, and Moira Orfei Circus[74] from Italy, to name just a few.
|
95 |
+
|
96 |
+
In some towns, there are circus buildings where regular performances are held. The best known are:
|
97 |
+
|
98 |
+
In other countries, purpose-built circus buildings still exist which are no longer used as circuses, or are used for circus only occasionally among a wider programme of events; for example, the Cirkusbygningen (The Circus Building) in Copenhagen, Denmark, Cirkus in Stockholm, Sweden, or Carré Theatre in Amsterdam, Netherlands.
|
99 |
+
|
100 |
+
The International Circus Festival of Monte-Carlo[76] has been held in Monaco since 1974 and was the first of many international awards for circus performers.
|
101 |
+
|
102 |
+
The atmosphere of the circus has served as a dramatic setting for many musicians. The most famous circus theme song is called "Entrance of the Gladiators", and was composed in 1904 by Julius Fučík. Other circus music includes "El Caballero", "Quality Plus", "Sunnyland Waltzes", "The Storming of El Caney", "Pahjamah", "Bull Trombone", "Big Time Boogie", "Royal Bridesmaid March", "The Baby Elephant Walk", "Liberty Bell March", "Java", Strauss's "Radetzky March", and "Pageant of Progress". A poster for Pablo Fanque's Circus Royal, one of the most popular circuses of Victorian England, inspired John Lennon to write Being for the Benefit of Mr. Kite! on The Beatles' album, Sgt. Pepper's Lonely Hearts Club Band. The song title refers to William Kite, a well-known circus performer in the 19th century. Producer George Martin and EMI engineers created the song's fairground atmosphere by assembling a sound collage of collected recordings of calliopes and fairground organs, which they cut into strips of various lengths, threw into a box, and then mixed up and edited together randomly, creating a long loop which was mixed into the final production.[77] Another traditional circus song is the John Philip Sousa march "Stars and Stripes Forever", which is played only to alert circus performers of an emergency.
|
103 |
+
|
104 |
+
Plays set in a circus include the 1896 musical The Circus Girl by Lionel Monckton, Polly of the Circus written in 1907 by Margaret Mayo, He Who Gets Slapped written by the Russian Leonid Andreyev in 1916 and later adapted into one of the first circus films, Katharina Knie written in 1928 by Carl Zuckmayer and adapted for the English stage in 1932 as Caravan by playwright Cecily Hamilton, the revue Big Top written by Herbert Farjeon in 1942, Top of the Ladder written by Tyrone Guthrie in 1950, Stop the World, I Want to Get Off written by Anthony Newley in 1961, Barnum with music by Cy Coleman and lyrics and book by Mark Bramble, and Roustabout: The Great Circus Train Wreck written by Jay Torrence in 2006.
|
105 |
+
|
106 |
+
Following World War I, circus films became popular. In 1924 He Who Gets Slapped was the first film released by MGM; in 1925 Sally of the Sawdust (remade 1930), Variety, and Vaudeville were produced, followed by The Devil's Circus in 1926 and The Circus starring Charlie Chaplin, Circus Rookies, 4 Devils; and Laugh Clown Laugh in 1928. German film Salto Mortale about trapeze artists was released in 1931 and remade in the United States and released as Trapeze starring Burt Lancaster in 1956; in 1932 Freaks was released; Charlie Chan at the Circus, Circus (USSR) and The Three Maxiums were released in 1936 and At the Circus starring the Marx Brothers and You Can't Cheat an Honest Man in 1939. Circus films continued to be popular during the Second World War; films from this era included The Great Profile starring John Barrymore (1940), the animated Disney film Dumbo (1941), Road Show (1941), The Wagons Roll at Night (1941) and Captive Wild Woman (1943).
|
107 |
+
|
108 |
+
Tromba, a film about a tiger trainer, was released in 1948. In 1952 Cecil B. de Mille's Oscar-winning film The Greatest Show on Earth was first shown. Released in 1953 were Man on a Tightrope and Ingmar Bergman's Gycklarnas afton (released as Sawdust and Tinsel in the United States); these were followed by Life Is a Circus; Ring of Fear; 3 Ring Circus (1954) and La Strada (1954), an Oscar-winning film by Federico Fellini about a girl who is sold to a circus strongman. Fellini made a second film set in the circus called The Clowns in 1970. Films about the circus made since 1959 include Disney's Toby Tyler (1960), the B-movie Circus of Horrors (also in 1960); the musical film Billy Rose's Jumbo (1962); A Tiger Walks, a Disney film about a tiger that escapes from the circus; and Circus World (1964), starring John Wayne. Mera Naam Joker (1970) is a Hindi drama film directed by Raj Kapoor about a clown who must make his audience laugh at the cost of his own sorrows. In the film Jungle Emperor Leo (1997), Leo's son Lune is captured and placed in a circus, which burns down when a tiger knocks down a ring of fire while jumping through it. The Greatest Showman, a musical film loosely based on the life of P. T. Barnum, was released in 2017.
|
109 |
+
|
110 |
+
The TV series Circus Humberto, based on the novel by Eduard Bass, follows the history of the circus family Humberto between 1826 and 1924. The setting of the HBO television series Carnivàle, which ran from 2003 to 2005, is also largely set in a travelling circus. The circus has also inspired many writers. Numerous books, both non-fiction and fiction, have been published about circus life. Notable examples of circus-based fiction include Circus Humberto by Eduard Bass, Cirque du Freak by Darren Shan, and Spangle by Gary Jennings. The novel Water for Elephants by Sara Gruen tells the fictional tale of a circus veterinarian and was made into a movie with the same title, starring Robert Pattinson and Reese Witherspoon.
|
111 |
+
|
112 |
+
Circus is the central theme in comic books of Super Commando Dhruva, an Indian comic book superhero. According to this series, Dhruva was born and brought up in a fictional Indian circus called Jupiter Circus. When a rival circus burnt down Jupiter Circus, killing everyone in it, including Dhruva's parents, Dhruva vowed to become a crime fighter. A circus-based television series called Circus was also telecast in India in 1989 on DD National, starring Shahrukh Khan as the lead actor.
|
en/1148.html.txt
ADDED
@@ -0,0 +1,53 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
The Circus Maximus (Latin for greatest or largest circus; Italian: Circo Massimo) is an ancient Roman chariot-racing stadium and mass entertainment venue located in Rome, Italy. Situated in the valley between the Aventine and Palatine hills, it was the first and largest stadium in ancient Rome and its later Empire. It measured 621 m (2,037 ft) in length and 118 m (387 ft) in width and could accommodate over 150,000 spectators.[2] In its fully developed form, it became the model for circuses throughout the Roman Empire. The site is now a public park.
|
2 |
+
|
3 |
+
The Circus was Rome's largest venue for ludi, public games connected to Roman religious festivals. Ludi were sponsored by leading Romans or the Roman state for the benefit of the Roman people (populus Romanus) and gods. Most were held annually or at annual intervals on the Roman calendar. Others might be given to fulfill a religious vow, such as the games in celebration of a triumph. In Roman tradition, the earliest triumphal ludi at the Circus were vowed by Tarquin the Proud to Jupiter in the late Regal era for his victory over Pometia.[3]
|
4 |
+
|
5 |
+
Ludi ranged in duration and scope from one-day or even half-day events to spectacular multi-venue celebrations held over several days, with religious ceremonies and public feasts, horse and chariot racing, athletics, plays and recitals, beast-hunts and gladiator fights. Some included public executions. The greater ludi (the word meaning sport or game in Latin)[4] at the Circus began with a flamboyant parade (pompa circensis), much like the triumphal procession, which marked the purpose of the games and introduced the participants.[5]
|
6 |
+
|
7 |
+
During Rome's Republican era, the aediles organized the games. The most costly and complex of the ludi offered opportunities to assess an aedile's competence, generosity, and fitness for higher office.[6] Some Circus events, however, seem to have been relatively small and intimate affairs. In 167 BC, "flute players, scenic artists and dancers" performed on a temporary stage, probably erected between the two central seating banks. Others were enlarged at enormous expense to fit the entire space. A venatio held there in 169 BC, one of several in the 2nd century, employed "63 leopards and 40 bears and elephants", with spectators presumably kept safe by a substantial barrier.[7]
|
8 |
+
|
9 |
+
As Rome's provinces expanded, existing ludi were embellished and new ludi invented by politicians who competed for divine and popular support. By the late Republic, ludi were held on 57 days of the year;[8] an unknown number of these would have required full use of the Circus. On many other days, charioteers and jockeys would need to practice on its track. Otherwise, it would have made a convenient corral for the animals traded in the nearby cattle market, just outside the starting gate. Beneath the outer stands, next to the Circus' multiple entrances, were workshops and shops. When no games were being held, the Circus at the time of Catullus (mid-1st century BC) was likely "a dusty open space with shops and booths ... a colourful crowded disreputable area"[9] frequented by "prostitutes, jugglers, fortune tellers and low-class performing artists."[10]
|
10 |
+
|
11 |
+
Rome's emperors met the ever-burgeoning popular demand for regular ludi and the need for more specialised venues, as essential obligations of their office and cult. Over the several centuries of its development, the Circus Maximus became Rome's paramount specialist venue for chariot races. By the late 1st century AD, the Colosseum had been built to host most of the city's gladiator shows and smaller beast-hunts, and most track-athletes competed at the purpose-designed Stadium of Domitian, though long-distance foot races were still held at the Circus.[11] Eventually, 135 days of the year were devoted to ludi.[8]
|
12 |
+
|
13 |
+
Even at the height of its development as a chariot-racing circuit, the circus remained the most suitable space in Rome for religious processions on a grand scale, and was the most popular venue for large-scale venationes;[12] in the late 3rd century, the emperor Probus laid on a spectacular Circus show in which beasts were hunted through a veritable forest of trees, on a specially built stage.[13] With the advent of Christianity as the official religion of the Empire, ludi gradually fell out of favour. The last known beast-hunt at the Circus Maximus took place in 523, and the last known races there were held by Totila in 549.[14]
|
14 |
+
|
15 |
+
The Circus Maximus was sited on the level ground of the Valley of Murcia (Vallis Murcia), between Rome's Aventine and Palatine Hills. In Rome's early days, the valley would have been rich agricultural land, prone to flooding from the river Tiber and the stream which divided the valley. The stream was probably bridged at an early date, at the two points where the track had to cross it, and the earliest races would have been held within an agricultural landscape, "with nothing more than turning posts, banks where spectators could sit, and some shrines and sacred spots".[15]
|
16 |
+
|
17 |
+
In Livy's history of Rome, the first Etruscan king of Rome, Lucius Tarquinius Priscus, built raised, wooden perimeter seating at the Circus for Rome's highest echelons (the equites and patricians), probably midway along the Palatine straight, with an awning against the sun and rain. His grandson, Tarquinius Superbus, added the first seating for citizen-commoners (plebs, or plebeians), either adjacent or on the opposite, Aventine side of the track.[16] Otherwise, the Circus was probably still little more than a trackway through surrounding farmland. By this time, it may have been drained[17] but the wooden stands and seats would have frequently rotted and been rebuilt. The turning posts (metae), each made of three conical stone pillars, may have been the earliest permanent Circus structures; an open drainage canal between the posts would have served as a dividing barrier.[18]
|
18 |
+
|
19 |
+
The games' sponsor (Latin editor) usually sat beside the images of attending gods, on a conspicuous, elevated stand (pulvinar) but seats at the track's perimeter offered the best, most dramatic close-ups. In 494 BC (very early in the Republican era) the dictator Manius Valerius Maximus and his descendants were granted rights to a curule chair at the southeastern turn, an excellent viewpoint for the thrills and spills of chariot racing.[19] In the 190s BC, stone track-side seating was built, exclusively for senators.[20]
|
20 |
+
|
21 |
+
Permanent wooden starting stalls were built in 329 BC. They were gated, brightly painted,[21] and staggered to equalise the distances from each start place to the central barrier. In theory, they might have accommodated up to 25 four-horse chariots (Quadrigas) abreast but when team-racing was introduced,[22] they were widened, and their number reduced. By the late Republican or early Imperial era, there were twelve stalls. Their divisions were fronted by herms that served as stops for spring-loaded gates, so that twelve light-weight, four-horse or two-horse chariots could be simultaneously released onto the track. The stalls were allocated by lottery, and the various racing teams were identified by their colors.[23] Typically, there were seven laps per race.[24] From at least 174 BC, they were counted off using large sculpted eggs. In 33 BC, an additional system of large bronze dolphin-shaped lap counters was added, positioned well above the central dividing barrier (euripus) for maximum visibility.[25]
|
22 |
+
|
23 |
+
Julius Caesar's development of the Circus, commencing around 50 BC, extended the seating tiers to run almost the entire circuit of the track, barring the starting gates and a processional entrance at the semi-circular end.[26] The track measured approximately 621 m (2,037 ft) in length and 118 m (387 ft) in breadth. A canal was cut between the track perimeter and its seating to protect spectators and help drain the track.[27] The inner third of the seating formed a trackside cavea. Its front sections along the central straight were reserved for senators, and those immediately behind for equites. The outer tiers, two thirds of the total, were meant for Roman plebs and non-citizens. They were timber-built, with wooden-framed service buildings, shops and entrance-ways beneath. The total number of seats is uncertain, but was probably in the order of 150,000;[28] Pliny the Elder's estimate of 250,000 is unlikely. The wooden bleachers were damaged in a fire of 31 BC, either during or after construction.[29]
|
24 |
+
|
25 |
+
The fire damage of 31 BC was probably repaired by Augustus (Caesar's successor and Rome's first emperor). He modestly claimed credit only for an obelisk and pulvinar at the site but both were major projects. Ever since its quarrying, long before Rome existed, the obelisk had been sacred to Egyptian Sun-gods.[31] Augustus had it brought from Heliopolis[32] at enormous expense, and erected midway along the dividing barrier of the Circus. It was Rome's first obelisk, an exotically sacred object and a permanent reminder of Augustus' victory over his Roman foes and their Egyptian allies in the recent civil wars. Thanks to him, Rome had secured both a lasting peace and a new Egyptian Province. The pulvinar was built on monumental scale, a shrine or temple (aedes) raised high above the trackside seats. Sometimes, while games were in progress, Augustus watched from there, alongside the gods. Occasionally, his family would join him there. This is the Circus described by Dionysius of Halicarnassus as "one of the most beautiful and admirable structures in Rome", with "entrances and ascents for the spectators at every shop, so that the countless thousands of people may enter and depart without inconvenience."[33]
|
26 |
+
|
27 |
+
The site remained prone to flooding,[34] probably through the starting gates, until Claudius made improvements there; they probably included an extramural anti-flooding embankment. Fires in the crowded, wooden perimeter workshops and bleachers were a far greater danger. A fire of 36 AD seems to have started in a basket-maker's workshop under the stands, on the Aventine side; the emperor Tiberius compensated various small businesses there for their losses.[35] In AD 64, during Nero's reign, fire broke out at the semi-circular end of the Circus, swept through the stands and shops, and destroyed much of the city. Games and festivals continued at the Circus, which was rebuilt over several years to the same footprint and design.[36]
|
28 |
+
|
29 |
+
By the late 1st century AD, the central dividing barrier comprised a series of water basins, or else a single watercourse open in some places and bridged over in others. It offered opportunities for artistic embellishment and decorative swagger, and included the temples and statues of various deities, fountains, and refuges for those assistants involved in more dangerous circus activities, such as beast-hunts and the recovery of casualties during races.[37]
|
30 |
+
|
31 |
+
In AD 81 the Senate built a triple arch honoring Titus at the semi-circular end of the Circus, to replace or augment a former processional entrance.[38] The emperor Domitian built a new, multi-storey palace on the Palatine, connected somehow to the Circus; he likely watched the games in autocratic style, from high above and barely visible to those below. Repairs to fire damage during his reign may already have been under way before his assassination.[39]
|
32 |
+
|
33 |
+
The risk of further fire-damage, coupled with Domitian's fate, may have prompted Trajan's decision to rebuild the Circus entirely in stone, and provide a new pulvinar in the stands where Rome's emperor could be seen and honoured as part of the Roman community, alongside their gods. Under Trajan, the Circus Maximus found its definitive form, which was unchanged thereafter save for some monumental additions by later emperors, an extensive, planned rebuilding of the starting gate area under Caracalla, and repairs and renewals to existing fabric. Some repairs were unforeseen and extensive, such as those carried out in Diocletian's reign, after the collapse of a seating section killed some 13,000 people.[40]
|
34 |
+
|
35 |
+
The southeastern turn of the track ran between two shrines which may have predated the Circus' formal development. One, located at the outer southeast perimeter, was dedicated to the valley's eponymous goddess Murcia, an obscure deity associated with Venus, the myrtle shrub, a sacred spring, the stream that divided the valley, and the lesser peak of the Aventine Hill.[41] The other was at the southeastern turning-post, where there was an underground shrine to Consus, a minor god of grain-stores, connected to the grain-goddess Ceres and to the underworld. According to Roman tradition, Romulus discovered this shrine shortly after the founding of Rome. He invented the Consualia festival as a way of gathering his Sabine neighbours at a celebration that included horse-races and drinking. During these distractions, Romulus's men abducted the Sabine daughters as brides. Thus the famous Roman myth of the Rape of the Sabine women had as its setting the Circus and the Consualia.
|
36 |
+
|
37 |
+
In this quasi-legendary era, horse or chariot races would have been held at the Circus site. The track width may have been determined by the distance between Murcia's and Consus' shrines at the southeastern end, and its length by the distance between these two shrines and Hercules' Ara Maxima, supposedly older than Rome itself and sited behind the Circus' starting place.[42] The position of Consus' shrine at the turn of the track recalls the placing of shrines to Roman Neptune's Greek equivalent, Poseidon, in Greek hippodromes.[43] In later developments, the altar of Consus, as one of the Circus' patron deities, was incorporated into the fabric of the south-eastern turning post. When Murcia's stream was partly built over, to form a dividing barrier (the spina or euripus)[44] between the turning posts, her shrine was either retained or rebuilt. In the Late Imperial period, both the southeastern turn and the circus itself were sometimes known as Vallis Murcia.[45] The symbols used to count race-laps also held religious significance; Castor and Pollux, who were born from an egg, were patrons of horses, horsemen, and the equestrian order (equites). Likewise, the later use of dolphin-shaped lap counters reinforced associations between the races, swiftness, and Neptune, as god of earthquakes and horses; the Romans believed dolphins to be the swiftest of all creatures.[25] When the Romans adopted the Phrygian Great Mother as an ancestral deity, a statue of her on lion-back was erected within the circus, probably on the dividing barrier.
|
38 |
+
|
39 |
+
Sun and Moon cults were probably represented at the Circus from its earliest phases. Their importance grew with the introduction of Roman cult to Apollo, and the development of Stoic and solar monism as a theological basis for the Roman Imperial cult. In the Imperial era, the Sun-god was divine patron of the Circus and its games. His sacred obelisk towered over the arena, set in the central barrier, close to his temple and the finishing line. The Sun-god was the ultimate, victorious charioteer, driving his four-horse chariot (quadriga) through the heavenly circuit from sunrise to sunset. His partner Luna drove her two-horse chariot (biga); together, they represented the predictable, orderly movement of the cosmos and the circuit of time, which found analogy in the Circus track.[46] In Imperial cosmology, the emperor was Sol-Apollo's earthly equivalent, and Luna may have been linked to the empress.[citation needed] Luna's temple, built long before Apollo's, burned down in the Great Fire of 64 AD and was probably not replaced. Her cult was closely identified with that of Diana, who seems to have been represented in the processions that started Circus games, and with Sol Indiges, usually identified as her brother. After the loss of her temple, her cult may have been transferred to Sol's temple on the dividing barrier, or one beside it; both would have been open to the sky.[47]
|
40 |
+
|
41 |
+
Temples to several deities overlooked the Circus; most are now lost. The temples to Ceres and Flora stood close together on the Aventine, more or less opposite the Circus' starting gate, which remained under Hercules' protection. Further southeast along the Aventine was a temple to Luna, the moon goddess. Aventine temples to Venus Obsequens, Mercury and Dis (or perhaps Summanus) stood on the slopes above the southeast turn. On the Palatine hill, opposite to Ceres's temple, stood the temple to Magna Mater and, more or less opposite Luna's temple, one to the sun-god Apollo.
|
42 |
+
|
43 |
+
Several festivals, some of uncertain foundation and date, were held at the Circus in historical times. The Consualia, with its semi-mythical establishment by Romulus, and the Cerealia, the major festival of Ceres, were probably older than the earliest historically attested "Roman Games" (Ludi Romani) held at the Circus in honour of Jupiter in 366 BC.[48] In the early Imperial era, Ovid describes the opening of Cerealia (mid to late April) with a horse race at the Circus,[49] followed by the nighttime release of foxes into the stadium, their tails ablaze with lighted torches.[50] Some early connection is likely between Ceres as goddess of grain crops and Consus as a god of grain storage and patron of the Circus.
|
44 |
+
|
45 |
+
After the 6th century, the Circus fell into disuse and decay, and was quarried for building materials. The lower levels, ever prone to flooding, were gradually buried under waterlogged alluvial soil and accumulated debris, so that the original track is now buried 6 meters beneath the modern surface. In the 11th century, the Circus was "replaced by dwellings rented out by the congregation of Saint-Guy."[51] In the 12th century, a watercourse was dug there to drain the soil, and by the 16th century the area was used as a market garden.[52] Many of the Circus's standing structures survived these changes; in 1587, two obelisks were removed from the central barrier by Pope Sixtus V, and one of these was re-sited at the Piazza del Popolo.[32] In 1852 a gas works was built on the site by the Anglo-Italian Gas Society. It remained in situ until 1910 when it was relocated to the edge of Rome.[53] Mid 19th century workings at the circus site uncovered the lower parts of a seating tier and outer portico. Since then, a series of excavations has exposed further sections of the seating, curved turn and central barrier but further exploration has been limited by the scale, depth and waterlogging of the site.[1]
|
46 |
+
|
47 |
+
The Circus site now functions as a large park area, in the centre of the city. It is often used for concerts and meetings. The Rome concert of Live 8 (July 2, 2005) was held there. The English band Genesis performed a concert before an estimated audience of 500,000 people in 2007 (this was filmed and released as When in Rome 2007). The Rolling Stones played there in front of 71,527 people on June 22, 2014 for the Italian date of their 14 On Fire tour. The Circus has also hosted victory celebrations, following the Italian World Cup 2006 victory and the A.S. Roma Serie A victory in 1983 and 2001. In May 2019, a new virtual/augmented reality experience, the Circo Maximo Experience, opened on the site, taking visitors on a journey through the site and its history.
|
48 |
+
|
49 |
+
AP Archive: video of celebrations after Italy won the 2006 World Cup Finals in Germany – https://www.youtube.com/watch?v=t8Thra4T80c (metadata: http://www.aparchive.com/metadata/youtube/c195a43af063a416da53a7ec15430ef2).
|
50 |
+
|
51 |
+
Media related to Circus Maximus at Wikimedia Commons
|
52 |
+
|
53 |
+
Coordinates: 41°53′09″N 12°29′09″E / 41.8859°N 12.4857°E / 41.8859; 12.4857
|
en/1149.html.txt
ADDED
@@ -0,0 +1,183 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
In meteorology, a cloud is an aerosol consisting of a visible mass of minute liquid droplets, frozen crystals, or other particles suspended in the atmosphere of a planetary body or similar space.[1] Water or various other chemicals may compose the droplets and crystals. On Earth, clouds are formed as a result of saturation of the air when it is cooled to its dew point, or when it gains sufficient moisture (usually in the form of water vapor) from an adjacent source to raise the dew point to the ambient temperature.
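To make the dew-point idea concrete, here is a minimal sketch (not part of the original article) that estimates the dew point from air temperature and relative humidity using the Magnus approximation; the constants chosen and the function name are illustrative assumptions rather than a standard prescribed here.

```python
import math

def dew_point_celsius(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) from temperature and relative humidity,
    using the Magnus formula with commonly quoted constants."""
    a, b = 17.27, 237.7  # Magnus constants for liquid water, roughly -35..50 deg C
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Air at 20 deg C and 60% relative humidity has a dew point near 12 deg C;
# cooling such a parcel below that temperature saturates it, allowing cloud droplets to form.
print(round(dew_point_celsius(20.0, 60.0), 1))
```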
|
6 |
+
|
7 |
+
They are seen in the Earth's homosphere, which includes the troposphere, stratosphere, and mesosphere. Nephology is the science of clouds, which is undertaken in the cloud physics branch of meteorology. There are two methods of naming clouds in their respective layers of the homosphere, Latin and common.
|
8 |
+
|
9 |
+
Genus types in the troposphere, the atmospheric layer closest to Earth's surface, have Latin names due to the universal adoption of Luke Howard's nomenclature that was formally proposed in 1802. It became the basis of a modern international system that divides clouds into five physical forms which can be divided or classified further into altitude levels to derive the ten basic genera. The main representative cloud types for each of these forms are stratus, cirrus, stratocumulus, cumulus, and cumulonimbus. Low-level stratiform and stratocumuliform genera do not have any altitude-related prefixes. However mid-level variants of the same physical forms are given the prefix alto- while high-level types carry the prefix cirro-. The other main forms never have prefixes indicating altitude level. Cirriform clouds are always high-level while cumuliform and cumulonimbiform clouds are classified formally as low-level. The latter are also more informally characterized as multi-level or vertical as indicated by the cumulo- prefix. Most of the ten genera derived by this method of classification can be subdivided into species and further subdivided into varieties. Very low stratiform clouds that extend down to the Earth's surface are given the common names fog and mist, but have no Latin names.
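The prefix rules just described can be summarised as a simple lookup. The following illustrative Python sketch encodes the form/level cross-classification into the ten genus names; the mapping follows the text above, but the data structure and function are only a convenience added here, not an official scheme.

```python
# Illustrative only: the ten tropospheric genera derived from physical form and level.
GENUS_BY_FORM_AND_LEVEL = {
    ("stratiform", "high"): "cirrostratus",        # cirro- prefix for high level
    ("stratiform", "mid"): "altostratus",          # alto- prefix for mid level
    ("stratiform", "low"): "stratus",              # no prefix at low level
    ("stratiform", "multi"): "nimbostratus",
    ("stratocumuliform", "high"): "cirrocumulus",
    ("stratocumuliform", "mid"): "altocumulus",
    ("stratocumuliform", "low"): "stratocumulus",
    ("cirriform", "high"): "cirrus",               # always high-level
    ("cumuliform", "low"): "cumulus",              # formally low, often multi-level
    ("cumulonimbiform", "low"): "cumulonimbus",
}

def genus(form: str, level: str) -> str:
    """Return the genus name for a given physical form and altitude level."""
    try:
        return GENUS_BY_FORM_AND_LEVEL[(form, level)]
    except KeyError:
        raise ValueError(f"No genus defined for form={form!r} at level={level!r}")

print(genus("stratocumuliform", "mid"))  # altocumulus
```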
|
10 |
+
|
11 |
+
In the stratosphere and mesosphere, clouds have common names for their main types. They may have the appearance of stratiform veils or sheets, cirriform wisps, or stratocumuliform bands or ripples. They are seen infrequently, mostly in the polar regions of Earth. Clouds have been observed in the atmospheres of other planets and moons in the Solar System and beyond. However, due to their different temperature characteristics, they are often composed of other substances such as methane, ammonia, and sulfuric acid, as well as water.
|
12 |
+
|
13 |
+
Tropospheric clouds can have a direct effect on climate change on Earth. They may reflect incoming rays from the sun which can contribute to a cooling effect where and when these clouds occur, or trap longer wave radiation that reflects back up from the Earth's surface which can cause a warming effect. The altitude, form, and thickness of the clouds are the main factors that affect the local heating or cooling of Earth and the atmosphere. Clouds that form above the troposphere are too scarce and too thin to have any influence on climate change.
|
14 |
+
|
15 |
+
The tabular overview that follows is very broad in scope. It draws from several methods of cloud classification, both formal and informal, used in different levels of the Earth's homosphere by a number of cited authorities. Despite some differences in methodologies and terminologies, the classification schemes seen in this article can be harmonized by using an informal cross-classification of physical forms and altitude levels to derive the 10 tropospheric genera, the fog and mist that forms at surface level, and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size and structure which can affect both forms and levels. This table should not be seen as a strict or singular classification, but as an illustration of how various major cloud types are related to each other and defined through a full range of altitude levels from Earth's surface to the "edge of space".
|
16 |
+
|
17 |
+
The origin of the term "cloud" can be found in the Old English words clud or clod, meaning a hill or a mass of rock. Around the beginning of the 13th century, the word came to be used as a metaphor for rain clouds, because of the similarity in appearance between a mass of rock and cumulus heap cloud. Over time, the metaphoric usage of the word supplanted the Old English weolcan, which had been the literal term for clouds in general.[2][3]
|
18 |
+
|
19 |
+
Ancient cloud studies were not made in isolation, but were observed in combination with other weather elements and even other natural sciences. Around 340 BC, Greek philosopher Aristotle wrote Meteorologica, a work which represented the sum of knowledge of the time about natural science, including weather and climate. For the first time, precipitation and the clouds from which precipitation fell were called meteors, which originate from the Greek word meteoros, meaning 'high in the sky'. From that word came the modern term meteorology, the study of clouds and weather. Meteorologica was based on intuition and simple observation, but not on what is now considered the scientific method. Nevertheless, it was the first known work that attempted to treat a broad range of meteorological topics in a systematic way, especially the hydrological cycle.[4]
|
20 |
+
|
21 |
+
After centuries of speculative theories about the formation and behavior of clouds, the first truly scientific studies were undertaken by Luke Howard in England and Jean-Baptiste Lamarck in France. Howard was a methodical observer with a strong grounding in the Latin language, and used his background to classify the various tropospheric cloud types during 1802. He believed that the changing cloud forms in the sky could unlock the key to weather forecasting. Lamarck had worked independently on cloud classification the same year and had come up with a different naming scheme that failed to make an impression even in his home country of France because it used unusual French names for cloud types. His system of nomenclature included 12 categories of clouds, with such names as (translated from French) hazy clouds, dappled clouds, and broom-like clouds. By contrast, Howard used universally accepted Latin, which caught on quickly after it was published in 1803.[5] As a sign of the popularity of the naming scheme, German dramatist and poet Johann Wolfgang von Goethe composed four poems about clouds, dedicating them to Howard. An elaboration of Howard's system was eventually formally adopted by the International Meteorological Conference in 1891.[5] This system covered only the tropospheric cloud types, but the discovery of clouds above the troposphere during the late 19th century eventually led to the creation of separate classification schemes using common names for these very high clouds, which were still broadly similar to some cloud forms identified in the troposphere.[6]
|
22 |
+
|
23 |
+
Terrestrial clouds can be found throughout most of the homosphere, which includes the troposphere, stratosphere, and mesosphere. Within these layers of the atmosphere, air can become saturated as a result of being cooled to its dew point or by having moisture added from an adjacent source.[7] In the latter case, saturation occurs when the dew point is raised to the ambient air temperature.
|
24 |
+
|
25 |
+
Adiabatic cooling occurs when one or more of three possible lifting agents – convective, cyclonic/frontal, or orographic – cause a parcel of air containing invisible water vapor to rise and cool to its dew point, the temperature at which the air becomes saturated; the expansion of the rising parcel is the main mechanism behind this cooling.[8] As the air is cooled to its dew point and becomes saturated, water vapor normally condenses to form cloud drops. This condensation normally occurs on cloud condensation nuclei such as salt or dust particles that are small enough to be held aloft by normal circulation of the air.[9][10]
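As a rough illustration of how far a lifted parcel must rise before it saturates, the sketch below uses Espy's well-known rule of thumb of about 125 m of lift per degree Celsius of dew-point depression; this approximation is an assumption added here for illustration, not a formula stated in the article.

```python
def lifting_condensation_level_m(temp_c: float, dew_point_c: float) -> float:
    """Rough height (metres above the surface) at which a rising parcel cools to
    its dew point, using Espy's ~125 m per deg C of dew-point depression."""
    return 125.0 * (temp_c - dew_point_c)

# A parcel at 25 deg C with a dew point of 15 deg C would be expected to saturate
# (and form a cumulus cloud base) roughly 1,250 m above the ground.
print(lifting_condensation_level_m(25.0, 15.0))
```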
|
26 |
+
|
27 |
+
One agent is the convective upward motion of air caused by daytime solar heating at surface level.[9] Airmass instability allows for the formation of cumuliform clouds that can produce showers if the air is sufficiently moist.[11] On moderately rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.[12]
|
28 |
+
|
29 |
+
Frontal and cyclonic lift occur when stable air is forced aloft at weather fronts and around centers of low pressure by a process called convergence.[13] Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds are usually embedded in the main precipitating cloud layer.[14] Cold fronts are usually faster moving and generate a narrower line of clouds, which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm airmass just ahead of the front.[15]
|
30 |
+
|
31 |
+
A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift).[9] If the air is generally stable, nothing more than lenticular cap clouds form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.[16]
|
32 |
+
|
33 |
+
Along with adiabatic cooling that requires a lifting agent, three major nonadiabatic mechanisms exist for lowering the temperature of the air to its dew point. Conductive, radiational, and evaporative cooling require no lifting mechanism and can cause condensation at surface level resulting in the formation of fog.[17][18][19]
|
34 |
+
|
35 |
+
Water vapor can be added to the air from several main sources as a way of achieving saturation without any cooling process: water or moist ground,[20][21][22] precipitation or virga,[23] and transpiration from plants.[24]
|
36 |
+
|
37 |
+
Tropospheric classification is based on a hierarchy of categories with physical forms and altitude levels at the top.[25][26] These are cross-classified into a total of ten genus types, most of which can be divided into species and further subdivided into varieties which are at the bottom of the hierarchy.[27]
|
38 |
+
|
39 |
+
Clouds in the troposphere assume five physical forms based on structure and process of formation. These forms are commonly used for the purpose of satellite analysis.[25] They are given below in approximate ascending order of instability or convective activity.[28]
|
40 |
+
|
41 |
+
Nonconvective stratiform clouds appear in stable airmass conditions and, in general, have flat, sheet-like structures that can form at any altitude in the troposphere.[29] The stratiform group is divided by altitude range into the genera cirrostratus (high-level), altostratus (mid-level), stratus (low-level), and nimbostratus (multi-level).[26] Fog is commonly considered a surface-based cloud layer.[16] The fog may form at surface level in clear air or it may be the result of a very low stratus cloud subsiding to ground or sea level. Conversely, low stratiform clouds result when advection fog is lifted above surface level during breezy conditions.
|
42 |
+
|
43 |
+
Cirriform clouds in the troposphere are of the genus cirrus and have the appearance of detached or semimerged filaments. They form at high tropospheric altitudes in air that is mostly stable with little or no convective activity, although denser patches may occasionally show buildups caused by limited high-level convection where the air is partly unstable.[30] Clouds resembling cirrus can be found above the troposphere but are classified separately using common names.
|
44 |
+
|
45 |
+
Clouds of this structure have both cumuliform and stratiform characteristics in the form of rolls, ripples, or elements.[31] They generally form as a result of limited convection in an otherwise mostly stable airmass topped by an inversion layer.[32] If the inversion layer is absent or higher in the troposphere, increased airmass instability may cause the cloud layers to develop tops in the form of turrets consisting of embedded cumuliform buildups.[33] The stratocumuliform group is divided into cirrocumulus (high-level), altocumulus (mid-level), and stratocumulus (low-level).[31]
|
46 |
+
|
47 |
+
Cumuliform clouds generally appear in isolated heaps or tufts.[34][35] They are the product of localized but generally free-convective lift where no inversion layers are in the troposphere to limit vertical growth. In general, small cumuliform clouds tend to indicate comparatively weak instability. Larger cumuliform types are a sign of greater atmospheric instability and convective activity.[36] Depending on their vertical size, clouds of the cumulus genus type may be low-level or multi-level with moderate to towering vertical extent.[26]
|
48 |
+
|
49 |
+
The largest free-convective clouds comprise the genus cumulonimbus, which have towering vertical extent. They occur in highly unstable air[9] and often have fuzzy outlines at the upper parts of the clouds that sometimes include anvil tops.[31] These clouds are the product of very strong convection that can penetrate the lower stratosphere.
|
50 |
+
|
51 |
+
Tropospheric clouds form in any of three levels (formerly called étages) based on altitude range above the Earth's surface. The grouping of clouds into levels is commonly done for the purposes of cloud atlases, surface weather observations,[26] and weather maps.[37] The base-height range for each level varies depending on the latitudinal geographical zone.[26] Each altitude level comprises two or three genus-types differentiated mainly by physical form.[38][31]
|
52 |
+
|
53 |
+
The standard levels and genus-types are summarised below in approximate descending order of the altitude at which each is normally based.[39] Multi-level clouds with significant vertical extent are separately listed and summarized in approximate ascending order of instability or convective activity.[28]
|
54 |
+
|
55 |
+
High clouds form at altitudes of 3,000 to 7,600 m (10,000 to 25,000 ft) in the polar regions, 5,000 to 12,200 m (16,500 to 40,000 ft) in the temperate regions, and 6,100 to 18,300 m (20,000 to 60,000 ft) in the tropics.[26] All cirriform clouds are classified as high, thus constitute a single genus cirrus (Ci). Stratocumuliform and stratiform clouds in the high altitude range carry the prefix cirro-, yielding the respective genus names cirrocumulus (Cc) and cirrostratus (Cs). When limited-resolution satellite images of high clouds are analysed without supporting data from direct human observations, distinguishing between individual forms or genus types becomes impossible, and they are then collectively identified as high-type (or informally as cirrus-type, though not all high clouds are of the cirrus form or genus).[40]
|
56 |
+
|
57 |
+
Nonvertical clouds in the middle level are prefixed by alto-, yielding the genus names altocumulus (Ac) for stratocumuliform types and altostratus (As) for stratiform types. These clouds can form as low as 2,000 m (6,500 ft) above surface at any latitude, but may be based as high as 4,000 m (13,000 ft) near the poles, 7,000 m (23,000 ft) at midlatitudes, and 7,600 m (25,000 ft) in the tropics.[26] As with high clouds, the main genus types are easily identified by the human eye, but distinguishing between them using satellite photography is not possible. Without the support of human observations, these clouds are usually collectively identified as middle-type on satellite images.[40]
|
58 |
+
|
59 |
+
Low clouds are found from near the surface up to 2,000 m (6,500 ft).[26] Genus types in this level either have no prefix or carry one that refers to a characteristic other than altitude. Clouds that form in the low level of the troposphere are generally of larger structure than those that form in the middle and high levels, so they can usually be identified by their forms and genus types using satellite photography alone.[40]
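For reference, the base-height figures quoted in the three paragraphs above can be collected into a small lookup table; the Python layout below is purely illustrative and encodes only the ranges given in the text.

```python
# Base-height ranges (metres) for the three tropospheric levels, keyed by latitudinal zone,
# as quoted above. The dictionary layout is just one convenient way to hold the figures.
CLOUD_BASE_RANGES_M = {
    "high":   {"polar": (3_000, 7_600), "temperate": (5_000, 12_200), "tropical": (6_100, 18_300)},
    "middle": {"polar": (2_000, 4_000), "temperate": (2_000, 7_000),  "tropical": (2_000, 7_600)},
    "low":    {"polar": (0, 2_000),     "temperate": (0, 2_000),      "tropical": (0, 2_000)},
}

def base_range(level: str, zone: str) -> tuple[int, int]:
    """Return the (minimum, maximum) cloud-base height in metres for a level and zone."""
    return CLOUD_BASE_RANGES_M[level][zone]

print(base_range("high", "temperate"))  # (5000, 12200)
```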
|
60 |
+
|
61 |
+
|
62 |
+
|
63 |
+
These clouds have low- to mid-level bases that form anywhere from near the surface to about 2,400 m (8,000 ft) and tops that can extend into the mid-altitude range and sometimes higher in the case of nimbostratus.
|
64 |
+
|
65 |
+
This is a diffuse, dark grey, multi-level stratiform layer with great horizontal extent and usually moderate to deep vertical development. It lacks towering structure and looks feebly illuminated from the inside.[58] Nimbostratus normally forms from mid-level altostratus, and develops at least moderate vertical extent[59][60] when the base subsides into the low level during precipitation that can reach moderate to heavy intensity. It achieves even greater vertical development when it simultaneously grows upward into the high level due to large-scale frontal or cyclonic lift.[61] The nimbo- prefix refers to its ability to produce continuous rain or snow over a wide area, especially ahead of a warm front.[62] This thick cloud layer may be accompanied by embedded towering cumuliform or cumulonimbiform types.[60][63] Meteorologists affiliated with the World Meteorological Organization (WMO) officially classify nimbostratus as mid-level for synoptic purposes while informally characterizing it as multi-level.[26] Independent meteorologists and educators appear split between those who largely follow the WMO model[59][60] and those who classify nimbostratus as low-level, despite its considerable vertical extent and its usual initial formation in the middle altitude range.[64][65]
|
66 |
+
|
67 |
+
These very large cumuliform and cumulonimbiform types have similar low- to mid-level cloud bases as the multi-level and moderate vertical types, and tops that nearly always extend into the high levels. They are required to be identified by their standard names or abbreviations in all aviation observations (METARS) and forecasts (TAFS) to warn pilots of possible severe weather and turbulence.[66]
|
68 |
+
|
69 |
+
Genus types are commonly divided into subtypes called species that indicate specific structural details which can vary according to the stability and windshear characteristics of the atmosphere at any given time and location. Despite this hierarchy, a particular species may be a subtype of more than one genus, especially if the genera are of the same physical form and are differentiated from each other mainly by altitude or level. There are a few species, each of which can be associated with genera of more than one physical form.[72] The species types are grouped below according to the physical forms and genera with which each is normally associated. The forms, genera, and species are listed in approximate ascending order of instability or convective activity.[28]
|
70 |
+
|
71 |
+
Of the stratiform group, high-level cirrostratus comprises two species. Cirrostratus nebulosus has a rather diffuse appearance lacking in structural detail.[73] Cirrostratus fibratus is a species made of semi-merged filaments that are transitional to or from cirrus.[74] Mid-level altostratus and multi-level nimbostratus always have a flat or diffuse appearance and are therefore not subdivided into species. Low stratus is of the species nebulosus[73] except when broken up into ragged sheets of stratus fractus (see below).[59][72][75]
|
72 |
+
|
73 |
+
Cirriform clouds have three non-convective species that can form in mostly stable airmass conditions. Cirrus fibratus comprise filaments that may be straight, wavy, or occasionally twisted by non-convective wind shear.[74] The species uncinus is similar but has upturned hooks at the ends. Cirrus spissatus appear as opaque patches that can show light grey shading.[72]
|
74 |
+
|
75 |
+
Stratocumuliform genus-types (cirrocumulus, altocumulus, and stratocumulus) that appear in mostly stable air have two species each. The stratiformis species normally occur in extensive sheets or in smaller patches where there is only minimal convective activity.[76] Clouds of the lenticularis species tend to have lens-like shapes tapered at the ends. They are most commonly seen as orographic mountain-wave clouds, but can occur anywhere in the troposphere where there is strong wind shear combined with sufficient airmass stability to maintain a generally flat cloud structure. These two species can be found in the high, middle, or low levels of the troposphere depending on the stratocumuliform genus or genera present at any given time.[59][72][75]
|
76 |
+
|
77 |
+
The species fractus shows variable instability because it can be a subdivision of genus-types of different physical forms that have different stability characteristics. This subtype can be in the form of ragged but mostly stable stratiform sheets (stratus fractus) or small ragged cumuliform heaps with somewhat greater instability (cumulus fractus).[72][75][77] When clouds of this species are associated with precipitating cloud systems of considerable vertical and sometimes horizontal extent, they are also classified as accessory clouds under the name pannus (see section on supplementary features).[78]
|
78 |
+
|
79 |
+
These species are subdivisions of genus types that can occur in partly unstable air. The species castellanus appears when a mostly stable stratocumuliform or cirriform layer becomes disturbed by localized areas of airmass instability, usually in the morning or afternoon. This results in the formation of cumuliform buildups of limited convection arising from a common stratiform base.[79] Castellanus resembles the turrets of a castle when viewed from the side, and can be found with stratocumuliform genera at any tropospheric altitude level and with limited-convective patches of high-level cirrus.[80] Tufted clouds of the more detached floccus species are subdivisions of genus-types which may be cirriform or stratocumuliform in overall structure. They are sometimes seen with cirrus, cirrocumulus, altocumulus, and stratocumulus.[81]
|
80 |
+
|
81 |
+
A newly recognized species of stratocumulus or altocumulus has been given the name volutus, a roll cloud that can occur ahead of a cumulonimbus formation.[82] There are some volutus clouds that form as a consequence of interactions with specific geographical features rather than with a parent cloud. Perhaps the strangest geographically specific cloud of this type is the Morning Glory, a rolling cylindrical cloud that appears unpredictably over the Gulf of Carpentaria in Northern Australia. Associated with a powerful "ripple" in the atmosphere, the cloud may be "surfed" in glider aircraft.[83]
More general airmass instability in the troposphere tends to produce clouds of the more freely convective cumulus genus type, whose species are mainly indicators of degrees of atmospheric instability and resultant vertical development of the clouds. A cumulus cloud initially forms in the low level of the troposphere as a cloudlet of the species humilis that shows only slight vertical development. If the air becomes more unstable, the cloud tends to grow vertically into the species mediocris, then congestus, the tallest cumulus species[72] which is the same type that the International Civil Aviation Organization refers to as 'towering cumulus'.[66]
With highly unstable atmospheric conditions, large cumulus may continue to grow into cumulonimbus calvus (essentially a very tall congestus cloud that produces thunder), then ultimately into the species capillatus when supercooled water droplets at the top of the cloud turn into ice crystals giving it a cirriform appearance.[72][75]
Genus and species types are further subdivided into varieties whose names can appear after the species name to provide a fuller description of a cloud. Some cloud varieties are not restricted to a specific altitude level or form, and can therefore be common to more than one genus or species.[84]
All cloud varieties fall into one of two main groups. One group identifies the opacities of particular low and mid-level cloud structures and comprises the varieties translucidus (thin translucent), perlucidus (thick opaque with translucent or very small clear breaks), and opacus (thick opaque). These varieties are always identifiable for cloud genera and species with variable opacity. All three are associated with the stratiformis species of altocumulus and stratocumulus. However, only two varieties are seen with altostratus and stratus nebulosus whose uniform structures prevent the formation of a perlucidus variety. Opacity-based varieties are not applied to high clouds because they are always translucent, or in the case of cirrus spissatus, always opaque.[84][85]
A second group describes the occasional arrangements of cloud structures into particular patterns that are discernible by a surface-based observer (cloud fields usually being visible only from a significant altitude above the formations). These varieties are not always present with the genera and species with which they are otherwise associated, but only appear when atmospheric conditions favor their formation. Intortus and vertebratus varieties occur on occasion with cirrus fibratus. They are respectively filaments twisted into irregular shapes, and those that are arranged in fishbone patterns, usually by uneven wind currents that favor the formation of these varieties. The variety radiatus is associated with cloud rows of a particular type that appear to converge at the horizon. It is sometimes seen with the fibratus and uncinus species of cirrus, the stratiformis species of altocumulus and stratocumulus, the mediocris and sometimes humilis species of cumulus,[87][88] and with the genus altostratus.[89]
Another variety, duplicatus (closely spaced layers of the same type, one above the other), is sometimes found with cirrus of both the fibratus and uncinus species, and with altocumulus and stratocumulus of the species stratiformis and lenticularis. The variety undulatus (having a wavy undulating base) can occur with any clouds of the species stratiformis or lenticularis, and with altostratus. It is only rarely observed with stratus nebulosus. The variety lacunosus is caused by localized downdrafts that create circular holes in the form of a honeycomb or net. It is occasionally seen with cirrocumulus and altocumulus of the species stratiformis, castellanus, and floccus, and with stratocumulus of the species stratiformis and castellanus.[84][85]
It is possible for some species to show combined varieties at one time, especially if one variety is opacity-based and the other is pattern-based. An example of this would be a layer of altocumulus stratiformis arranged in seemingly converging rows separated by small breaks. The full technical name of a cloud in this configuration would be altocumulus stratiformis radiatus perlucidus, which would identify respectively its genus, species, and two combined varieties.[75][84][85]
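To make the combination rule concrete, the sketch below assembles a name in the genus–species–varieties order described above. It is purely illustrative: the vocabularies are truncated examples rather than the full WMO lists, and the helper name is hypothetical.

    # Illustrative sketch: build a cloud name in genus-species-varieties order.
    # The vocabularies below are deliberately incomplete examples.
    GENERA = {"altocumulus", "stratocumulus", "cirrus", "cumulus"}
    SPECIES = {"stratiformis", "lenticularis", "fibratus", "humilis"}
    VARIETIES = {"radiatus", "perlucidus", "translucidus", "opacus", "undulatus"}

    def cloud_name(genus, species=None, varieties=()):
        """Return a name such as 'altocumulus stratiformis radiatus perlucidus'."""
        if genus not in GENERA:
            raise ValueError(f"unknown genus: {genus}")
        parts = [genus]
        if species is not None:
            if species not in SPECIES:
                raise ValueError(f"unknown species: {species}")
            parts.append(species)
        parts.extend(v for v in varieties if v in VARIETIES)
        return " ".join(parts)

    print(cloud_name("altocumulus", "stratiformis", ["radiatus", "perlucidus"]))
    # altocumulus stratiformis radiatus perlucidus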
Supplementary features and accessory clouds are not further subdivisions of cloud types below the species and variety level. Rather, they are either hydrometeors or special cloud types with their own Latin names that form in association with certain cloud genera, species, and varieties.[75][85] Supplementary features, whether in the form of clouds or precipitation, are directly attached to the main genus-cloud. Accessory clouds, by contrast, are generally detached from the main cloud.[90]
One group of supplementary features consists not of actual cloud formations but of precipitation that falls when the water droplets or ice crystals making up visible clouds have grown too heavy to remain aloft. Virga is a feature seen with clouds whose precipitation evaporates before reaching the ground; it occurs with the genera cirrocumulus, altocumulus, altostratus, nimbostratus, stratocumulus, cumulus, and cumulonimbus.[90]
When the precipitation reaches the ground without completely evaporating, it is designated as the feature praecipitatio.[91] This normally occurs with altostratus opacus, which can produce widespread but usually light precipitation, and with thicker clouds that show significant vertical development. Of the latter, upward-growing cumulus mediocris produces only isolated light showers, while downward growing nimbostratus is capable of heavier, more extensive precipitation. Towering vertical clouds have the greatest ability to produce intense precipitation events, but these tend to be localized unless organized along fast-moving cold fronts. Showers of moderate to heavy intensity can fall from cumulus congestus clouds. Cumulonimbus, the largest of all cloud genera, has the capacity to produce very heavy showers. Low stratus clouds usually produce only light precipitation, but this always occurs as the feature praecipitatio due to the fact this cloud genus lies too close to the ground to allow for the formation of virga.[75][85][90]
Incus is the most type-specific supplementary feature, seen only with cumulonimbus of the species capillatus. A cumulonimbus incus cloud top is one that has spread out into a clear anvil shape as a result of rising air currents hitting the stability layer at the tropopause where the air no longer continues to get colder with increasing altitude.[92]
The mamma feature forms on the bases of clouds as downward-facing bubble-like protuberances caused by localized downdrafts within the cloud. It is also sometimes called mammatus, an earlier version of the term used before a standardization of Latin nomenclature brought about by the World Meteorological Organization during the 20th century. The best-known is cumulonimbus with mammatus, but the mamma feature is also seen occasionally with cirrus, cirrocumulus, altocumulus, altostratus, and stratocumulus.[90]
A tuba feature is a cloud column that may hang from the bottom of a cumulus or cumulonimbus. A newly formed or poorly organized column might be comparatively benign, but can quickly intensify into a funnel cloud or tornado.[90][93][94]
An arcus feature is a roll cloud with ragged edges attached to the lower front part of cumulus congestus or cumulonimbus that forms along the leading edge of a squall line or thunderstorm outflow.[95] A large arcus formation can have the appearance of a dark menacing arch.[90]
Several new supplementary features have been formally recognized by the World Meteorological Organization (WMO). The feature fluctus can form under conditions of strong atmospheric wind shear when a stratocumulus, altocumulus, or cirrus cloud breaks into regularly spaced crests. This variant is sometimes known informally as a Kelvin–Helmholtz (wave) cloud. This phenomenon has also been observed in cloud formations over other planets and even in the sun's atmosphere.[96] Another highly disturbed but more chaotic wave-like cloud feature associated with stratocumulus or altocumulus cloud has been given the Latin name asperitas. The supplementary feature cavum is a circular fall-streak hole that occasionally forms in a thin layer of supercooled altocumulus or cirrocumulus. Fall streaks consisting of virga or wisps of cirrus are usually seen beneath the hole as ice crystals fall out to a lower altitude. This type of hole is usually larger than typical lacunosus holes. A murus feature is a cumulonimbus wall cloud with a lowering, rotating cloud base that can lead to the development of tornadoes. A cauda feature is a tail cloud that extends horizontally away from the murus cloud and is the result of air feeding into the storm.[82]
Supplementary cloud formations detached from the main cloud are known as accessory clouds.[75][85][90] The heavier precipitating clouds, nimbostratus, towering cumulus (cumulus congestus), and cumulonimbus typically see the formation in precipitation of the pannus feature, low ragged clouds of the genera and species cumulus fractus or stratus fractus.[78]
A group of accessory clouds comprises formations that are associated mainly with upward-growing cumuliform and cumulonimbiform clouds of free convection. Pileus is a cap cloud that can form over a cumulonimbus or large cumulus cloud,[97] whereas a velum feature is a thin horizontal sheet that sometimes forms like an apron around the middle or in front of the parent cloud.[90] An accessory cloud recently officially recognized by the World Meteorological Organization is the flumen, also known more informally as the beaver's tail. It is formed by the warm, humid inflow of a supercell thunderstorm, and can be mistaken for a tornado. Although the flumen can indicate a tornado risk, it is similar in appearance to pannus or scud clouds and does not rotate.[82]
Clouds initially form in clear air or become clouds when fog rises above surface level. The genus of a newly formed cloud is determined mainly by air mass characteristics such as stability and moisture content. If these characteristics change over time, the genus tends to change accordingly. When this happens, the original genus is called a mother cloud. If the mother cloud retains much of its original form after the appearance of the new genus, it is termed a genitus cloud. One example of this is stratocumulus cumulogenitus, a stratocumulus cloud formed by the partial spreading of a cumulus type when there is a loss of convective lift. If the mother cloud undergoes a complete change in genus, it is considered to be a mutatus cloud.[98]
The genitus and mutatus categories have been expanded to include certain types that do not originate from pre-existing clouds. The term flammagenitus (Latin for 'fire-made') applies to cumulus congestus or cumulonimbus that are formed by large-scale fires or volcanic eruptions. Smaller low-level "pyrocumulus" or "fumulus" clouds formed by contained industrial activity are now classified as cumulus homogenitus (Latin for 'man-made'). Contrails formed from the exhaust of aircraft flying in the upper level of the troposphere can persist and spread into formations resembling cirrus which are designated cirrus homogenitus. If a cirrus homogenitus cloud changes fully to any of the high-level genera, it is termed cirrus, cirrostratus, or cirrocumulus homomutatus. Stratus cataractagenitus (Latin for 'cataract-made') are generated by the spray from waterfalls. Silvagenitus (Latin for 'forest-made') is a stratus cloud that forms as water vapor is added to the air above a forest canopy.[98]
Stratocumulus clouds can be organized into "fields" that take on certain specially classified shapes and characteristics. In general, these fields are more discernible from high altitudes than from ground level. They can often be found in the following forms:
These patterns are formed from a phenomenon known as a Kármán vortex, which is named after the engineer and fluid dynamicist Theodore von Kármán.[101] Wind-driven clouds can form into parallel rows that follow the wind direction. When the wind and clouds encounter high-elevation land features such as vertically prominent islands, they can form eddies around the high land masses that give the clouds a twisted appearance.[102]
Although the local distribution of clouds can be significantly influenced by topography, the global prevalence of cloud cover in the troposphere tends to vary more by latitude. It is most prevalent in and along low pressure zones of surface tropospheric convergence which encircle the Earth close to the equator and near the 50th parallels of latitude in the northern and southern hemispheres.[105] The adiabatic cooling processes that lead to the creation of clouds by way of lifting agents are all associated with convergence; a process that involves the horizontal inflow and accumulation of air at a given location, as well as the rate at which this happens.[106] Near the equator, increased cloudiness is due to the presence of the low-pressure Intertropical Convergence Zone (ITCZ) where very warm and unstable air promotes mostly cumuliform and cumulonimbiform clouds.[107] Clouds of virtually any type can form along the mid-latitude convergence zones depending on the stability and moisture content of the air. These extratropical convergence zones are occupied by the polar fronts where air masses of polar origin meet and clash with those of tropical or subtropical origin.[108] This leads to the formation of weather-making extratropical cyclones composed of cloud systems that may be stable or unstable to varying degrees according to the stability characteristics of the various airmasses that are in conflict.[109]
Divergence is the opposite of convergence. In the Earth's troposphere, it involves the horizontal outflow of air from the upper part of a rising column of air, or from the lower part of a subsiding column often associated with an area or ridge of high pressure.[106] Cloudiness tends to be least prevalent near the poles and in the subtropics close to the 30th parallels, north and south. The latter are sometimes referred to as the horse latitudes. The presence of a large-scale high-pressure subtropical ridge on each side of the equator reduces cloudiness at these low latitudes.[110] Similar patterns also occur at higher latitudes in both hemispheres.[111]
The luminance or brightness of a cloud is determined by how light is reflected, scattered, and transmitted by the cloud's particles. Its brightness may also be affected by the presence of haze or photometeors such as halos and rainbows.[112] In the troposphere, dense, deep clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top.[113] Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the gases. As a result, the cloud base can vary from a very light to very-dark-grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. High thin tropospheric clouds reflect less light because of the comparatively low concentration of constituent ice crystals or supercooled water droplets which results in a slightly off-white appearance. However, a thick dense ice-crystal cloud appears brilliant white with pronounced grey shading because of its greater reflectivity.[112]
As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets. If the droplets become too large and heavy to be kept aloft by the air circulation, they will fall from the cloud as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, a percentage of the light that enters the cloud is not reflected back out but is absorbed giving the cloud a darker look. A simple example of this is one's being able to see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black.[114]
Striking cloud colorations can be seen at any altitude, with the color of a cloud usually being the same as the incident light.[115] During daytime when the sun is relatively high in the sky, tropospheric clouds generally appear bright white on top with varying shades of grey underneath. Thin clouds may look white or appear to have acquired the color of their environment or background. Red, orange, and pink clouds occur almost entirely at sunrise/sunset and are the result of the scattering of sunlight by the atmosphere. When the sun is just below the horizon, low-level clouds are gray, middle clouds appear rose-colored, and high clouds are white or off-white. Clouds at night are black or dark grey in a moonless sky, or whitish when illuminated by the moon. They may also reflect the colors of large fires, city lights, or auroras that might be present.[115]
A cumulonimbus cloud that appears to have a greenish or bluish tint is a sign that it contains extremely high amounts of water; the hail or rain scatters light in a way that gives the cloud a blue color. A green colorization occurs mostly late in the day, when the sun is comparatively low in the sky and the incident sunlight has a reddish tinge that appears green when illuminating a very tall bluish cloud. Supercell storms are more likely to be characterized by this coloration, but any storm can appear this way. Coloration such as this does not directly indicate a severe thunderstorm; it only confirms the potential. Since a green or blue tint signifies copious amounts of water, a strong updraft to support it, high winds from the storm raining out, and wet hail can all be inferred, and these are all elements that improve the chance of the storm becoming severe. In addition, the stronger the updraft is, the more likely the storm is to undergo tornadogenesis and to produce large hail and high winds.[116]
Yellowish clouds may be seen in the troposphere in the late spring through early fall months during forest fire season. The yellow color is due to the presence of pollutants in the smoke. Yellowish clouds caused by the presence of nitrogen dioxide are sometimes seen in urban areas with high air pollution levels.[117]
Stratocumulus stratiformis and small castellanus made orange by the sun rising
An occurrence of cloud iridescence with altocumulus volutus and cirrocumulus stratiformis
Sunset reflecting shades of pink onto grey stratocumulus stratiformis translucidus (becoming perlucidus in the background)
Stratocumulus stratiformis perlucidus before sunset. Bangalore, India.
Late-summer rainstorm in Denmark. Nearly black color of base indicates main cloud in foreground probably cumulonimbus.
Particles in the atmosphere and the sun's angle enhance colors of stratocumulus cumulogenitus at evening twilight
Tropospheric clouds exert numerous influences on Earth's troposphere and climate. First and foremost, they are the source of precipitation, thereby greatly influencing the distribution and amount of precipitation. Because of their differential buoyancy relative to surrounding cloud-free air, clouds can be associated with vertical motions of the air that may be convective, frontal, or cyclonic. The motion is upward if the clouds are less dense because condensation of water vapor releases heat, warming the air and thereby decreasing its density. This can lead to downward motion because lifting of the air results in cooling that increases its density. All of these effects are subtly dependent on the vertical temperature and moisture structure of the atmosphere and result in major redistribution of heat that affect the Earth's climate.[118]
The complexity and diversity of clouds in the troposphere is a major reason for difficulty in quantifying the effects of clouds on climate and climate change. On the one hand, white cloud tops promote cooling of Earth's surface by reflecting shortwave radiation (visible and near infrared) from the sun, diminishing the amount of solar radiation that is absorbed at the surface, enhancing the Earth's albedo. Most of the sunlight that reaches the ground is absorbed, warming the surface, which emits radiation upward at longer, infrared, wavelengths. At these wavelengths, however, water in the clouds acts as an efficient absorber. The water reacts by radiating, also in the infrared, both upward and downward, and the downward longwave radiation results in increased warming at the surface. This is analogous to the greenhouse effect of greenhouse gases and water vapor.[118]
High-level genus-types particularly show this duality with both short-wave albedo cooling and long-wave greenhouse warming effects. On the whole, ice-crystal clouds in the upper troposphere (cirrus) tend to favor net warming.[119][120] However, the cooling effect is dominant with mid-level and low clouds, especially when they form in extensive sheets.[119] Measurements by NASA indicate that on the whole, the effects of low and mid-level clouds that tend to promote cooling outweigh the warming effects of high layers and the variable outcomes associated with vertically developed clouds.[119]
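One conventional way to express this balance quantitatively (a standard bookkeeping definition, not a result specific to the measurements cited above) is the cloud radiative effect, the difference between all-sky and clear-sky net downward radiative flux at the top of the atmosphere:

    CRE_net = CRE_SW + CRE_LW, where CRE = F(all-sky) − F(clear-sky)

The shortwave term is negative (reflective cooling) and the longwave term is positive (reduced outgoing longwave radiation); in the global mean the shortwave term is the larger of the two, consistent with the net cooling described above.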
As difficult as it is to evaluate the influences of current clouds on current climate, it is even more problematic to predict changes in cloud patterns and properties in a future, warmer climate, and the resultant cloud influences on future climate. In a warmer climate more water would enter the atmosphere by evaporation at the surface; as clouds are formed from water vapor, cloudiness would be expected to increase. But in a warmer climate, higher temperatures would tend to evaporate clouds.[121] Both of these statements are considered accurate, and both phenomena, known as cloud feedbacks, are found in climate model calculations. Broadly speaking, if clouds, especially low clouds, increase in a warmer climate, the resultant cooling effect leads to a negative feedback in climate response to increased greenhouse gases. But if low clouds decrease, or if high clouds increase, the feedback is positive. Differing amounts of these feedbacks are the principal reason for differences in climate sensitivities of current global climate models. As a consequence, much research has focused on the response of low and vertical clouds to a changing climate. Leading global models produce quite different results, however, with some showing increasing low clouds and others showing decreases.[122][123] For these reasons the role of tropospheric clouds in regulating weather and climate remains a leading source of uncertainty in global warming projections.[124][125]
Polar stratospheric clouds (PSC's) form in the lowest part of the stratosphere during the winter, at the altitude and during the season that produces the coldest temperatures and therefore the best chances of triggering condensation caused by adiabatic cooling. Moisture is scarce in the stratosphere, so nacreous and non-nacreous cloud at this altitude range is restricted to polar regions in the winter where the air is coldest.[6]
PSC's show some variation in structure according to their chemical makeup and atmospheric conditions, but are limited to a single very high range of altitude of about 15,000–25,000 m (49,200–82,000 ft), so they are not classified into altitude levels, genus types, species, or varieties. There is no Latin nomenclature in the manner of tropospheric clouds, but rather descriptive names using common English.[6]
Supercooled nitric acid and water PSC's, sometimes known as type 1, typically have a stratiform appearance resembling cirrostratus or haze, but because they are not frozen into crystals, do not show the pastel colours of the nacreous types. This type of PSC has been identified as a cause of ozone depletion in the stratosphere.[126] The frozen nacreous types are typically very thin with mother-of-pearl colorations and an undulating cirriform or lenticular (stratocumuliform) appearance. These are sometimes known as type 2.[127][128]
Polar mesospheric clouds form at an extreme-level altitude range of about 80 to 85 km (50 to 53 mi). They are given the Latin name noctilucent because of their illumination well after sunset and before sunrise. They typically have a bluish or silvery white coloration that can resemble brightly illuminated cirrus. Noctilucent clouds may occasionally take on more of a red or orange hue.[6] They are not common or widespread enough to have a significant effect on climate.[129] However, an increasing frequency of occurrence of noctilucent clouds since the 19th century may be the result of climate change.[130]
Noctilucent clouds are the highest in the atmosphere and form near the top of the mesosphere at about ten times the altitude of tropospheric high clouds.[131] From ground level, they can occasionally be seen illuminated by the sun during deep twilight. Ongoing research indicates that convective lift in the mesosphere is strong enough during the polar summer to cause adiabatic cooling of a small amount of water vapour to the point of saturation. This tends to produce the coldest temperatures in the entire atmosphere just below the mesopause. These conditions result in the best environment for the formation of polar mesospheric clouds.[129] There is also evidence that smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent cloud.[132]
Noctilucent clouds have four major types based on physical structure and appearance. Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus or poorly defined cirrus.[133] Type II bands are long streaks that often occur in groups arranged roughly parallel to each other. They are usually more widely spaced than the bands or elements seen with cirrocumulus clouds.[134] Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus.[135] Type IV whirls are partial or, more rarely, complete rings of cloud with dark centres.[136]
Distribution in the mesosphere is similar to the stratosphere except at much higher altitudes. Because of the need for maximum cooling of the water vapor to produce noctilucent clouds, their distribution tends to be restricted to polar regions of Earth. A major seasonal difference is that convective lift from below the mesosphere pushes very scarce water vapor to higher colder altitudes required for cloud formation during the respective summer seasons in the northern and southern hemispheres. Sightings are rare more than 45 degrees south of the north pole or north of the south pole.[6]
Cloud cover has been seen on most other planets in the Solar System. Venus's thick clouds are composed of sulfur dioxide (due to volcanic activity) and appear to be almost entirely stratiform.[137] They are arranged in three main layers at altitudes of 45 to 65 km that obscure the planet's surface and can produce virga. No embedded cumuliform types have been identified, but broken stratocumuliform wave formations are sometimes seen in the top layer that reveal more continuous layer clouds underneath.[138] On Mars, noctilucent, cirrus, cirrocumulus and stratocumulus composed of water-ice have been detected mostly near the poles.[139][140] Water-ice fogs have also been detected on Mars.[141]
Both Jupiter and Saturn have an outer cirriform cloud deck composed of ammonia,[142][143] an intermediate stratiform haze-cloud layer made of ammonium hydrosulfide, and an inner deck of cumulus water clouds.[144][145] Embedded cumulonimbus are known to exist near the Great Red Spot on Jupiter.[146][147] The same category-types can be found covering Uranus and Neptune, but are all composed of methane.[148][149][150][151][152][153] Saturn's moon Titan has cirrus clouds believed to be composed largely of methane.[154][155] The Cassini–Huygens Saturn mission uncovered evidence of polar stratospheric clouds[156] and a methane cycle on Titan, including lakes near the poles and fluvial channels on the surface of the moon.[157]
Some planets outside the Solar System are known to have atmospheric clouds. In October 2013, the detection of high altitude optically thick clouds in the atmosphere of exoplanet Kepler-7b was announced,[158][159] and, in December 2013, in the atmospheres of GJ 436 b and GJ 1214 b.[160][161][162][163]
Clouds play an important role in various cultures and religious traditions. The ancient Akkadians believed that the clouds were the breasts of the sky goddess Antu[165] and that rain was milk from her breasts.[165] In Exodus 13:21–22, Yahweh is described as guiding the Israelites through the desert in the form of a "pillar of cloud" by day and a "pillar of fire" by night.[164]
In the ancient Greek comedy The Clouds, written by Aristophanes and first performed at the City Dionysia in 423 BC, the philosopher Socrates declares that the Clouds are the only true deities[166] and tells the main character Strepsiades not to worship any deities other than the Clouds, but to pay homage to them alone.[166] In the play, the Clouds change shape to reveal the true nature of whoever is looking at them,[167][166][168] turning into centaurs at the sight of a long-haired politician, wolves at the sight of the embezzler Simon, deer at the sight of the coward Cleonymus, and mortal women at the sight of the effeminate informer Cleisthenes.[167][168][166] They are hailed the source of inspiration to comic poets and philosophers;[166] they are masters of rhetoric, regarding eloquence and sophistry alike as their "friends".[166]
In China, clouds are symbols of luck and happiness.[169] Overlapping clouds are thought to imply eternal happiness[169] and clouds of different colors are said to indicate "multiplied blessings".[169]
en/115.html.txt
ADDED
@@ -0,0 +1,105 @@
In chemistry, alcohol is an organic compound that carries at least one hydroxyl functional group (−OH) bound to a saturated carbon atom.[2] The term alcohol originally referred to the primary alcohol ethanol (ethyl alcohol), which is used as a drug and is the main alcohol present in alcoholic beverages. An important class of alcohols, of which methanol and ethanol are the simplest members, includes all compounds for which the general formula is CnH2n+1OH. Simple monoalcohols that are the subject of this article include primary (RCH2OH), secondary (R2CHOH) and tertiary (R3COH) alcohols.
The suffix -ol appears in the IUPAC chemical name of all substances where the hydroxyl group is the functional group with the highest priority. When a higher priority group is present in the compound, the prefix hydroxy- is used in its IUPAC name. The suffix -ol in non-IUPAC names (such as paracetamol or cholesterol) also typically indicates that the substance is an alcohol. However, many substances that contain hydroxyl functional groups (particularly sugars, such as glucose and sucrose) have names which include neither the suffix -ol, nor the prefix hydroxy-.
Alcohol distillation possibly originated in the Indus valley civilization as early as 2000 BCE. The people of India used an alcoholic drink called Sura made from fermented rice, barley, jaggery, and flowers of the madhyaka tree.[3] Alcohol distillation was known to Islamic chemists as early as the eighth century.[4][5]
The Arab chemist al-Kindi unambiguously described the distillation of wine in a treatise titled "The Book of the chemistry of Perfume and Distillations".[6][7][8]
The word "alcohol" is from the Arabic kohl (Arabic: الكحل, romanized: al-kuḥl), a powder used as an eyeliner.[9] Al- is the Arabic definite article, equivalent to the in English. Alcohol was originally used for the very fine powder produced by the sublimation of the natural mineral stibnite to form antimony trisulfide Sb2S3. It was considered to be the essence or "spirit" of this mineral. It was used as an antiseptic, eyeliner, and cosmetic. The meaning of alcohol was extended to distilled substances in general, and then narrowed to ethanol, when "spirits" was a synonym for hard liquor.[10]
Bartholomew Traheron, in his 1543 translation of John of Vigo, introduces the word as a term used by "barbarous" authors for "fine powder." Vigo wrote: "the barbarous auctours use alcohol, or (as I fynde it sometymes wryten) alcofoll, for moost fine poudre."[11]
The 1657 Lexicon Chymicum, by William Johnson glosses the word as "antimonium sive stibium."[12] By extension, the word came to refer to any fluid obtained by distillation, including "alcohol of wine," the distilled essence of wine. Libavius in Alchymia (1594) refers to "vini alcohol vel vinum alcalisatum". Johnson (1657) glosses alcohol vini as "quando omnis superfluitas vini a vino separatur, ita ut accensum ardeat donec totum consumatur, nihilque fæcum aut phlegmatis in fundo remaneat." The word's meaning became restricted to "spirit of wine" (the chemical known today as ethanol) in the 18th century and was extended to the class of substances so-called as "alcohols" in modern chemistry after 1850.[11]
The term ethanol was invented in 1892, combining the word ethane with the "-ol" ending of "alcohol".[13]
IUPAC nomenclature is used in scientific publications and where precise identification of the substance is important, especially in cases where the relative complexity of the molecule does not make such a systematic name unwieldy. In naming simple alcohols, the name of the alkane chain loses the terminal e and adds the suffix -ol, e.g., as in "ethanol" from the alkane chain name "ethane".[14] When necessary, the position of the hydroxyl group is indicated by a number between the alkane name and the -ol: propan-1-ol for CH3CH2CH2OH, propan-2-ol for CH3CH(OH)CH3. If a higher priority group is present (such as an aldehyde, ketone, or carboxylic acid), then the prefix hydroxy- is used,[14] e.g., as in 1-hydroxy-2-propanone (CH3C(O)CH2OH).[15]
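As a rough illustration of the -ol rule for the simplest cases, the sketch below names unbranched monoalcohols only; it ignores branching, higher-priority groups, and the many other provisions of real IUPAC nomenclature, and the helper name is hypothetical.

    # Simplified sketch: IUPAC-style names for straight-chain monoalcohols (C1-C8).
    ALKANES = {1: "methane", 2: "ethane", 3: "propane", 4: "butane",
               5: "pentane", 6: "hexane", 7: "heptane", 8: "octane"}

    def alcohol_name(n_carbons, oh_position=1):
        """e.g. alcohol_name(3, 2) -> 'propan-2-ol' for CH3CH(OH)CH3."""
        stem = ALKANES[n_carbons][:-1]     # drop the terminal 'e': propane -> propan
        if n_carbons <= 2:
            return stem + "ol"             # no locant needed: methanol, ethanol
        return f"{stem}-{oh_position}-ol"  # propan-1-ol, propan-2-ol, ...

    print(alcohol_name(2))     # ethanol
    print(alcohol_name(3, 1))  # propan-1-ol
    print(alcohol_name(3, 2))  # propan-2-ol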
In cases where the OH functional group is bonded to an sp2 carbon on an aromatic ring the molecule is known as a phenol, and is named using the IUPAC rules for naming phenols.[16]
In other less formal contexts, an alcohol is often referred to by the name of the corresponding alkyl group followed by the word "alcohol", e.g., methyl alcohol, ethyl alcohol. Propyl alcohol may be n-propyl alcohol or isopropyl alcohol, depending on whether the hydroxyl group is bonded to the end or middle carbon on the straight propane chain. As described under systematic naming, if another group on the molecule takes priority, the alcohol moiety is often indicated using the "hydroxy-" prefix.[17]
Alcohols are then classified into primary, secondary (sec-, s-), and tertiary (tert-, t-), based upon the number of carbon atoms connected to the carbon atom that bears the hydroxyl functional group. (The respective numeric shorthands 1°, 2°, and 3° are also sometimes used in informal settings.[18]) The primary alcohols have general formulas RCH2OH. The simplest primary alcohol is methanol (CH3OH), for which R=H, and the next is ethanol, for which R=CH3, the methyl group. Secondary alcohols are those of the form RR'CHOH, the simplest of which is 2-propanol (R=R'=CH3). For the tertiary alcohols the general form is RR'R"COH. The simplest example is tert-butanol (2-methylpropan-2-ol), for which each of R, R', and R" is CH3. In these shorthands, R, R', and R" represent substituents, alkyl or other attached, generally organic groups.
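The distinction can be phrased as a simple counting rule, sketched below; the function is a hypothetical illustration that takes the number of carbon substituents on the carbinol carbon as its input.

    # Sketch: classify an alcohol by the number of carbon substituents (R groups)
    # bonded to the carbon bearing the hydroxyl group.
    def classify_alcohol(carbon_substituents):
        if carbon_substituents <= 1:
            return "primary"    # RCH2OH; methanol (no R group) is usually grouped here
        if carbon_substituents == 2:
            return "secondary"  # R2CHOH, e.g. 2-propanol
        if carbon_substituents == 3:
            return "tertiary"   # R3COH, e.g. tert-butanol
        raise ValueError("a carbinol carbon bears at most three carbon substituents")

    print(classify_alcohol(1))  # primary  (e.g. ethanol)
    print(classify_alcohol(3))  # tertiary (e.g. 2-methylpropan-2-ol)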
In archaic nomenclature, alcohols can be named as derivatives of methanol using "-carbinol" as the ending. For instance, (CH3)3COH can be named trimethylcarbinol.
Alcohols have a long history of myriad uses. For the simple mono-alcohols that are the focus of this article, the following are the most important industrial alcohols:[20]
Methanol is the most common industrial alcohol, with about 12 million tons/y produced in 1980. The combined capacity of the other alcohols is about the same, distributed roughly equally.[20]
With respect to acute toxicity, simple alcohols have low acute toxicities. Doses of several milliliters are tolerated. For pentanols, hexanols, octanols and longer alcohols, LD50 range from 2–5 g/kg (rats, oral). Methanol and ethanol are less acutely toxic. All alcohols are mild skin irritants.[20]
The metabolism of methanol (and ethylene glycol) is affected by the presence of ethanol, which has a higher affinity for liver alcohol dehydrogenase. In this way methanol will be excreted intact in urine.[21][22][23]
In general, the hydroxyl group makes alcohols polar. Those groups can form hydrogen bonds to one another and to most other compounds. Owing to the presence of the polar OH group, alcohols are more water-soluble than simple hydrocarbons. Methanol, ethanol, and propanol are miscible in water. Butanol, with a four-carbon chain, is moderately soluble.
Because of hydrogen bonding, alcohols tend to have higher boiling points than comparable hydrocarbons and ethers. The boiling point of the alcohol ethanol is 78.29 °C, compared to 69 °C for the hydrocarbon hexane, and 34.6 °C for diethyl ether.
Simple alcohols are found widely in nature. Ethanol is most prominent because it is the product of fermentation, a major energy-producing pathway. The other simple alcohols are formed in only trace amounts. More complex alcohols are pervasive, as manifested in sugars, some amino acids, and fatty acids.
In the Ziegler process, linear alcohols are produced from ethylene and triethylaluminium followed by oxidation and hydrolysis.[20] An idealized synthesis of 1-octanol is shown:
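In outline, a commonly presented idealization of that sequence is chain growth of the ethyl groups on triethylaluminium, oxidation to the aluminium alkoxide, and hydrolysis; the stoichiometry below is a simplified sketch rather than the exact scheme of the original source.

    Al(C2H5)3 + 9 C2H4 → Al(C8H17)3
    2 Al(C8H17)3 + 3 O2 → 2 Al(OC8H17)3
    Al(OC8H17)3 + 3 H2O → 3 C8H17OH + Al(OH)3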
The process generates a range of alcohols that are separated by distillation.
Many higher alcohols are produced by hydroformylation of alkenes followed by hydrogenation. When applied to a terminal alkene, as is common, one typically obtains a linear alcohol:[20]
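Schematically, for a terminal alkene RCH=CH2 (a generic sketch; the linear, anti-Markovnikov product shown is the typical but not the only outcome):

    RCH=CH2 + CO + H2 → RCH2CH2CHO        (hydroformylation)
    RCH2CH2CHO + H2 → RCH2CH2CH2OH        (hydrogenation)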
Such processes give fatty alcohols, which are useful for detergents.
Some low molecular weight alcohols of industrial importance are produced by the addition of water to alkenes. Ethanol, isopropanol, 2-butanol, and tert-butanol are produced by this general method. Two implementations are employed, the direct and indirect methods. The direct method avoids the formation of stable intermediates, typically using acid catalysts. In the indirect method, the alkene is converted to the sulfate ester, which is subsequently hydrolyzed. Direct hydration uses ethylene (ethylene hydration)[24] or other alkenes derived from the cracking of fractions of distilled crude oil.
Hydration is also used industrially to produce the diol ethylene glycol from ethylene oxide.
Ethanol is obtained by the fermentation of glucose (often produced from sugar or from the hydrolysis of starch) in the presence of yeast at temperatures below 37 °C. For instance, such a process might proceed by the conversion of sucrose by the enzyme invertase into glucose and fructose, then the conversion of glucose by the enzyme complex zymase into ethanol (and carbon dioxide).
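The overall chemistry can be summarized by two standard equations, omitting the many intermediate enzymatic steps:

    C12H22O11 + H2O → C6H12O6 + C6H12O6        (sucrose → glucose + fructose, invertase)
    C6H12O6 → 2 C2H5OH + 2 CO2                 (glucose → ethanol + carbon dioxide, zymase)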
Several species of the benign bacteria in the intestine use fermentation as a form of anaerobic metabolism. This metabolic reaction produces ethanol as a waste product. Thus, human bodies contain some quantity of alcohol endogenously produced by these bacteria. In rare cases, this can be sufficient to cause "auto-brewery syndrome" in which intoxicating quantities of alcohol are produced.[25][26][27]
Like ethanol, butanol can be produced by fermentation processes. Saccharomyces yeast are known to produce these higher alcohols at temperatures above 75 °F (24 °C). The bacterium Clostridium acetobutylicum can feed on cellulose to produce butanol on an industrial scale.[28]
Primary alkyl halides react with aqueous NaOH or KOH to give mainly primary alcohols in nucleophilic aliphatic substitution. (Secondary and especially tertiary alkyl halides will give the elimination (alkene) product instead). Grignard reagents react with carbonyl groups to give secondary and tertiary alcohols. Related reactions are the Barbier reaction and the Nozaki-Hiyama reaction.
Aldehydes or ketones are reduced with sodium borohydride or lithium aluminium hydride (after an acidic workup). Another reduction, using aluminium isopropylate, is the Meerwein-Ponndorf-Verley reduction. Noyori asymmetric hydrogenation is the asymmetric reduction of β-keto-esters.
Alkenes engage in an acid-catalysed hydration reaction, using concentrated sulfuric acid as a catalyst, that usually gives secondary or tertiary alcohols. The hydroboration-oxidation and oxymercuration-reduction of alkenes are more reliable in organic synthesis. Alkenes react with NBS and water in the halohydrin formation reaction. Amines can be converted to diazonium salts, which are then hydrolyzed.
The formation of a secondary alcohol via reduction and hydration is shown:
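As an illustrative pair of routes (propan-2-ol is chosen here for simplicity; it is not necessarily the example in the original figure):

    CH3COCH3 + H2 → CH3CH(OH)CH3              (reduction of acetone over a catalyst)
    CH3CH=CH2 + H2O → CH3CH(OH)CH3            (acid-catalysed, Markovnikov hydration of propene)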
With a pKa of around 16–19, they are, in general, slightly weaker acids than water. With strong bases such as sodium hydride or sodium they form salts called alkoxides, with the general formula RO− M+.
The acidity of alcohols is strongly affected by solvation. In the gas phase, alcohols are more acidic than in water.[29]
The OH group is not a good leaving group in nucleophilic substitution reactions, so neutral alcohols do not react in such reactions. However, if the oxygen is first protonated to give R−OH2+, the leaving group (water) is much more stable, and the nucleophilic substitution can take place. For instance, tertiary alcohols react with hydrochloric acid to produce tertiary alkyl halides, where the hydroxyl group is replaced by a chlorine atom by unimolecular nucleophilic substitution. If primary or secondary alcohols are to be reacted with hydrochloric acid, an activator such as zinc chloride is needed. In alternative fashion, the conversion may be performed directly using thionyl chloride.[1]
Alcohols may, likewise, be converted to alkyl bromides using hydrobromic acid or phosphorus tribromide, for example:
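Schematic equations, with R denoting a generic alkyl group (generic illustrations rather than a specific preparation):

    R−OH + HBr → R−Br + H2O
    3 R−OH + PBr3 → 3 R−Br + H3PO3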
In the Barton-McCombie deoxygenation an alcohol is deoxygenated to an alkane with tributyltin hydride or a trimethylborane-water complex in a radical substitution reaction.
Meanwhile, the oxygen atom has lone pairs of nonbonded electrons that render it weakly basic in the presence of strong acids such as sulfuric acid. For example, with methanol:
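In outline, the protonation can be sketched as a simple acid–base equilibrium (a sketch, not a full mechanism):

    CH3OH + H2SO4 ⇌ CH3OH2+ + HSO4−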
Upon treatment with strong acids, alcohols undergo the E1 elimination reaction to produce alkenes. The reaction, in general, obeys Zaitsev's Rule, which states that the most stable (usually the most substituted) alkene is formed. Tertiary alcohols eliminate easily at just above room temperature, but primary alcohols require a higher temperature.
Acid-catalysed dehydration of ethanol produces ethylene, as illustrated below.
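A minimal overall equation (the stepwise mechanism through a protonated intermediate is omitted from this sketch):

    CH3CH2OH → CH2=CH2 + H2O        (concentrated H2SO4, heat)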
A more controlled elimination reaction requires the formation of the xanthate ester.
Tertiary alcohols react with strong acids to generate carbocations. The reaction is related to their dehydration, e.g. isobutylene from tert-butyl alcohol. A special kind of dehydration reaction involves triphenylmethanol and especially its amine-substituted derivatives. When treated with acid, these alcohols lose water to give stable carbocations, which are commercial dyes.[30]
Alcohol and carboxylic acids react in the so-called Fischer esterification. The reaction usually requires a catalyst, such as concentrated sulfuric acid:
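Schematically, with R and R′ denoting generic organic groups (the equilibrium is driven toward the ester by removing water or using an excess of the alcohol):

    RCO2H + R′OH ⇌ RCO2R′ + H2O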
Other types of ester are prepared in a similar manner – for example, tosyl (tosylate) esters are made by reaction of the alcohol with p-toluenesulfonyl chloride in pyridine.
Primary alcohols (R-CH2OH) can be oxidized either to aldehydes (R-CHO) or to carboxylic acids (R-CO2H). The oxidation of secondary alcohols (R1R2CH-OH) normally terminates at the ketone (R1R2C=O) stage. Tertiary alcohols (R1R2R3C-OH) are resistant to oxidation.
The direct oxidation of primary alcohols to carboxylic acids normally proceeds via the corresponding aldehyde, which is transformed via an aldehyde hydrate (R-CH(OH)2) by reaction with water before it can be further oxidized to the carboxylic acid.
Reagents useful for the transformation of primary alcohols to aldehydes are normally also suitable for the oxidation of secondary alcohols to ketones. These include Collins reagent and Dess-Martin periodinane. The direct oxidation of primary alcohols to carboxylic acids can be carried out using potassium permanganate or the Jones reagent.
en/1150.html.txt
ADDED
@@ -0,0 +1,183 @@
In meteorology, a cloud is an aerosol consisting of a visible mass of minute liquid droplets, frozen crystals, or other particles suspended in the atmosphere of a planetary body or similar space.[1] Water or various other chemicals may compose the droplets and crystals. On Earth, clouds are formed as a result of saturation of the air when it is cooled to its dew point, or when it gains sufficient moisture (usually in the form of water vapor) from an adjacent source to raise the dew point to the ambient temperature.
They are seen in the Earth's homosphere, which includes the troposphere, stratosphere, and mesosphere. Nephology is the science of clouds, which is undertaken in the cloud physics branch of meteorology. There are two methods of naming clouds in their respective layers of the homosphere, Latin and common.
Genus types in the troposphere, the atmospheric layer closest to Earth's surface, have Latin names due to the universal adoption of Luke Howard's nomenclature that was formally proposed in 1802. It became the basis of a modern international system that divides clouds into five physical forms which can be divided or classified further into altitude levels to derive the ten basic genera. The main representative cloud types for each of these forms are stratus, cirrus, stratocumulus, cumulus, and cumulonimbus. Low-level stratiform and stratocumuliform genera do not have any altitude-related prefixes. However mid-level variants of the same physical forms are given the prefix alto- while high-level types carry the prefix cirro-. The other main forms never have prefixes indicating altitude level. Cirriform clouds are always high-level while cumuliform and cumulonimbiform clouds are classified formally as low-level. The latter are also more informally characterized as multi-level or vertical as indicated by the cumulo- prefix. Most of the ten genera derived by this method of classification can be subdivided into species and further subdivided into varieties. Very low stratiform clouds that extend down to the Earth's surface are given the common names fog and mist, but have no Latin names.
In the stratosphere and mesosphere, clouds have common names for their main types. They may have the appearance of stratiform veils or sheets, cirriform wisps, or stratocumuliform bands or ripples. They are seen infrequently, mostly in the polar regions of Earth. Clouds have been observed in the atmospheres of other planets and moons in the Solar System and beyond. However, due to their different temperature characteristics, they are often composed of other substances such as methane, ammonia, and sulfuric acid, as well as water.
Tropospheric clouds can have a direct effect on climate change on Earth. They may reflect incoming rays from the sun which can contribute to a cooling effect where and when these clouds occur, or trap longer wave radiation that reflects back up from the Earth's surface which can cause a warming effect. The altitude, form, and thickness of the clouds are the main factors that affect the local heating or cooling of Earth and the atmosphere. Clouds that form above the troposphere are too scarce and too thin to have any influence on climate change.
The tabular overview that follows is very broad in scope. It draws from several methods of cloud classification, both formal and informal, used in different levels of the Earth's homosphere by a number of cited authorities. Despite some differences in methodologies and terminologies, the classification schemes seen in this article can be harmonized by using an informal cross-classification of physical forms and altitude levels to derive the 10 tropospheric genera, the fog and mist that forms at surface level, and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size and structure which can affect both forms and levels. This table should not be seen as a strict or singular classification, but as an illustration of how various major cloud types are related to each other and defined through a full range of altitude levels from Earth's surface to the "edge of space".
The origin of the term "cloud" can be found in the Old English words clud or clod, meaning a hill or a mass of rock. Around the beginning of the 13th century, the word came to be used as a metaphor for rain clouds, because of the similarity in appearance between a mass of rock and cumulus heap cloud. Over time, the metaphoric usage of the word supplanted the Old English weolcan, which had been the literal term for clouds in general.[2][3]
Ancient cloud studies were not made in isolation, but were observed in combination with other weather elements and even other natural sciences. Around 340 BC, Greek philosopher Aristotle wrote Meteorologica, a work which represented the sum of knowledge of the time about natural science, including weather and climate. For the first time, precipitation and the clouds from which precipitation fell were called meteors, which originate from the Greek word meteoros, meaning 'high in the sky'. From that word came the modern term meteorology, the study of clouds and weather. Meteorologica was based on intuition and simple observation, but not on what is now considered the scientific method. Nevertheless, it was the first known work that attempted to treat a broad range of meteorological topics in a systematic way, especially the hydrological cycle.[4]
After centuries of speculative theories about the formation and behavior of clouds, the first truly scientific studies were undertaken by Luke Howard in England and Jean-Baptiste Lamarck in France. Howard was a methodical observer with a strong grounding in the Latin language, and used his background to classify the various tropospheric cloud types during 1802. He believed that the changing cloud forms in the sky could unlock the key to weather forecasting. Lamarck had worked independently on cloud classification the same year and had come up with a different naming scheme that failed to make an impression even in his home country of France because it used unusual French names for cloud types. His system of nomenclature included 12 categories of clouds, with such names as (translated from French) hazy clouds, dappled clouds, and broom-like clouds. By contrast, Howard used universally accepted Latin, which caught on quickly after it was published in 1803.[5] As a sign of the popularity of the naming scheme, German dramatist and poet Johann Wolfgang von Goethe composed four poems about clouds, dedicating them to Howard. An elaboration of Howard's system was eventually formally adopted by the International Meteorological Conference in 1891.[5] This system covered only the tropospheric cloud types, but the discovery of clouds above the troposphere during the late 19th century eventually led to the creation of separate classification schemes using common names for these very high clouds, which were still broadly similar to some cloud forms identified in the troposphere.[6]
Terrestrial clouds can be found throughout most of the homosphere, which includes the troposphere, stratosphere, and mesosphere. Within these layers of the atmosphere, air can become saturated as a result of being cooled to its dew point or by having moisture added from an adjacent source.[7] In the latter case, saturation occurs when the dew point is raised to the ambient air temperature.
Adiabatic cooling occurs when one or more of three possible lifting agents – convective, cyclonic/frontal, or orographic – cause a parcel of air containing invisible water vapor to rise and cool to its dew point, the temperature at which the air becomes saturated. The main mechanism behind this process is adiabatic cooling.[8] As the air is cooled to its dew point and becomes saturated, water vapor normally condenses to form cloud drops. This condensation normally occurs on cloud condensation nuclei such as salt or dust particles that are small enough to be held aloft by normal circulation of the air.[9][10]
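As a point of reference (a standard textbook value rather than a figure taken from the cited sources), an unsaturated rising parcel cools at the dry adiabatic lapse rate

    Γd = g / cp ≈ 9.8 °C per kilometre of ascent,

while saturated air cools more slowly because condensation releases latent heat.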
One agent is the convective upward motion of air caused by daytime solar heating at surface level.[9] Airmass instability allows for the formation of cumuliform clouds that can produce showers if the air is sufficiently moist.[11] On moderately rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.[12]
Frontal and cyclonic lift occur when stable air is forced aloft at weather fronts and around centers of low pressure by a process called convergence.[13] Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds are usually embedded in the main precipitating cloud layer.[14] Cold fronts are usually faster moving and generate a narrower line of clouds, which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm airmass just ahead of the front.[15]
A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift).[9] If the air is generally stable, nothing more than lenticular cap clouds form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.[16]
Along with adiabatic cooling that requires a lifting agent, three major nonadiabatic mechanisms exist for lowering the temperature of the air to its dew point. Conductive, radiational, and evaporative cooling require no lifting mechanism and can cause condensation at surface level resulting in the formation of fog.[17][18][19]
Water vapor can be added to the air from several main sources as a way of achieving saturation without any cooling process: water or moist ground,[20][21][22] precipitation or virga,[23] and transpiration from plants.[24]
Tropospheric classification is based on a hierarchy of categories with physical forms and altitude levels at the top.[25][26] These are cross-classified into a total of ten genus types, most of which can be divided into species and further subdivided into varieties which are at the bottom of the hierarchy.[27]
Clouds in the troposphere assume five physical forms based on structure and process of formation. These forms are commonly used for the purpose of satellite analysis.[25] They are given below in approximate ascending order of instability or convective activity.[28]
Nonconvective stratiform clouds appear in stable airmass conditions and, in general, have flat, sheet-like structures that can form at any altitude in the troposphere.[29] The stratiform group is divided by altitude range into the genera cirrostratus (high-level), altostratus (mid-level), stratus (low-level), and nimbostratus (multi-level).[26] Fog is commonly considered a surface-based cloud layer.[16] The fog may form at surface level in clear air or it may be the result of a very low stratus cloud subsiding to ground or sea level. Conversely, low stratiform clouds result when advection fog is lifted above surface level during breezy conditions.
Cirriform clouds in the troposphere are of the genus cirrus and have the appearance of detached or semimerged filaments. They form at high tropospheric altitudes in air that is mostly stable with little or no convective activity, although denser patches may occasionally show buildups caused by limited high-level convection where the air is partly unstable.[30] Clouds resembling cirrus can be found above the troposphere but are classified separately using common names.
Clouds of this structure have both cumuliform and stratiform characteristics in the form of rolls, ripples, or elements.[31] They generally form as a result of limited convection in an otherwise mostly stable airmass topped by an inversion layer.[32] If the inversion layer is absent or higher in the troposphere, increased airmass instability may cause the cloud layers to develop tops in the form of turrets consisting of embedded cumuliform buildups.[33] The stratocumuliform group is divided into cirrocumulus (high-level), altocumulus (mid-level), and stratocumulus (low-level).[31]
Cumuliform clouds generally appear in isolated heaps or tufts.[34][35] They are the product of localized but generally free-convective lift where no inversion layers are in the troposphere to limit vertical growth. In general, small cumuliform clouds tend to indicate comparatively weak instability. Larger cumuliform types are a sign of greater atmospheric instability and convective activity.[36] Depending on their vertical size, clouds of the cumulus genus type may be low-level or multi-level with moderate to towering vertical extent.[26]
The largest free-convective clouds comprise the genus cumulonimbus, which have towering vertical extent. They occur in highly unstable air[9] and often have fuzzy outlines at the upper parts of the clouds that sometimes include anvil tops.[31] These clouds are the product of very strong convection that can penetrate the lower stratosphere.
Tropospheric clouds form in any of three levels (formerly called étages) based on altitude range above the Earth's surface. The grouping of clouds into levels is commonly done for the purposes of cloud atlases, surface weather observations,[26] and weather maps.[37] The base-height range for each level varies depending on the latitudinal geographical zone.[26] Each altitude level comprises two or three genus-types differentiated mainly by physical form.[38][31]
The standard levels and genus-types are summarised below in approximate descending order of the altitude at which each is normally based.[39] Multi-level clouds with significant vertical extent are separately listed and summarized in approximate ascending order of instability or convective activity.[28]
High clouds form at altitudes of 3,000 to 7,600 m (10,000 to 25,000 ft) in the polar regions, 5,000 to 12,200 m (16,500 to 40,000 ft) in the temperate regions, and 6,100 to 18,300 m (20,000 to 60,000 ft) in the tropics.[26] All cirriform clouds are classified as high and thus constitute a single genus, cirrus (Ci). Stratocumuliform and stratiform clouds in the high altitude range carry the prefix cirro-, yielding the respective genus names cirrocumulus (Cc) and cirrostratus (Cs). When limited-resolution satellite images of high clouds are analysed without supporting data from direct human observations, distinguishing between individual forms or genus types becomes impossible, and they are then collectively identified as high-type (or informally as cirrus-type, though not all high clouds are of the cirrus form or genus).[40]
Nonvertical clouds in the middle level are prefixed by alto-, yielding the genus names altocumulus (Ac) for stratocumuliform types and altostratus (As) for stratiform types. These clouds can form as low as 2,000 m (6,500 ft) above surface at any latitude, but may be based as high as 4,000 m (13,000 ft) near the poles, 7,000 m (23,000 ft) at midlatitudes, and 7,600 m (25,000 ft) in the tropics.[26] As with high clouds, the main genus types are easily identified by the human eye, but distinguishing between them using satellite photography is not possible. Without the support of human observations, these clouds are usually collectively identified as middle-type on satellite images.[40]
Low clouds are found from near the surface up to 2,000 m (6,500 ft).[26] Genus types in this level either have no prefix or carry one that refers to a characteristic other than altitude. Clouds that form in the low level of the troposphere are generally of larger structure than those that form in the middle and high levels, so they can usually be identified by their forms and genus types using satellite photography alone.[40]
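To make the level boundaries concrete, the following toy Python helper (an illustration only, not an official classification tool) assigns one of the three fixed levels from a cloud-base height using the approximate figures quoted above; its simple thresholds ignore the overlap between the middle and high ranges and treat anything based below 2,000 m as low.

HIGH_BASE_M = {"polar": 3000, "temperate": 5000, "tropical": 6100}  # lower bounds quoted for high clouds

def level_for_base(base_m, zone="temperate"):
    # Very simplified: real classification also depends on physical form and vertical extent
    if base_m >= HIGH_BASE_M[zone]:
        return "high (cirro- prefix: cirrus, cirrocumulus, cirrostratus)"
    if base_m >= 2000:
        return "middle (alto- prefix: altocumulus, altostratus)"
    return "low (no height prefix: stratus, stratocumulus, cumulus)"

print(level_for_base(7000, "temperate"))  # high
print(level_for_base(3000, "tropical"))   # middle
print(level_for_base(500))                # low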
These clouds have low- to mid-level bases that form anywhere from near the surface to about 2,400 m (8,000 ft) and tops that can extend into the mid-altitude range and sometimes higher in the case of nimbostratus.
This is a diffuse, dark grey, multi-level stratiform layer with great horizontal extent and usually moderate to deep vertical development. It lacks towering structure and looks feebly illuminated from the inside.[58] Nimbostratus normally forms from mid-level altostratus, and develops at least moderate vertical extent[59][60] when the base subsides into the low level during precipitation that can reach moderate to heavy intensity. It achieves even greater vertical development when it simultaneously grows upward into the high level due to large-scale frontal or cyclonic lift.[61] The nimbo- prefix refers to its ability to produce continuous rain or snow over a wide area, especially ahead of a warm front.[62] This thick cloud layer may be accompanied by embedded towering cumuliform or cumulonimbiform types.[60][63] Meteorologists affiliated with the World Meteorological Organization (WMO) officially classify nimbostratus as mid-level for synoptic purposes while informally characterizing it as multi-level.[26] Independent meteorologists and educators appear split between those who largely follow the WMO model[59][60] and those who classify nimbostratus as low-level, despite its considerable vertical extent and its usual initial formation in the middle altitude range.[64][65]
These very large cumuliform and cumulonimbiform types have low- to mid-level cloud bases similar to those of the multi-level and moderate vertical types, and tops that nearly always extend into the high levels. They are required to be identified by their standard names or abbreviations in all aviation observations (METARs) and forecasts (TAFs) to warn pilots of possible severe weather and turbulence.[66]
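As a small illustration of how such convective types are flagged in practice, a METAR cloud group such as SCT030TCU encodes coverage (FEW/SCT/BKN/OVC), base height in hundreds of feet, and an optional CB or TCU suffix; the Python sketch below (not an official decoder, and the report string with station placeholder XXXX is invented for the example) picks out the convective groups.

import re

CLOUD_GROUP = re.compile(r"\b(FEW|SCT|BKN|OVC)(\d{3})(CB|TCU)?\b")

def convective_groups(metar_text):
    # Return (coverage, base in feet, convective suffix) for CB/TCU groups only
    hits = []
    for cover, height, kind in CLOUD_GROUP.findall(metar_text):
        if kind:
            hits.append((cover, int(height) * 100, kind))
    return hits

sample = "METAR XXXX 121200Z 18012KT 9999 SCT030TCU BKN045CB 26/19 Q1012"
print(convective_groups(sample))  # [('SCT', 3000, 'TCU'), ('BKN', 4500, 'CB')]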
Genus types are commonly divided into subtypes called species that indicate specific structural details which can vary according to the stability and windshear characteristics of the atmosphere at any given time and location. Despite this hierarchy, a particular species may be a subtype of more than one genus, especially if the genera are of the same physical form and are differentiated from each other mainly by altitude or level. There are a few species, each of which can be associated with genera of more than one physical form.[72] The species types are grouped below according to the physical forms and genera with which each is normally associated. The forms, genera, and species are listed in approximate ascending order of instability or convective activity.[28]
Of the stratiform group, high-level cirrostratus comprises two species. Cirrostratus nebulosus has a rather diffuse appearance lacking in structural detail.[73] Cirrostratus fibratus is a species made of semi-merged filaments that are transitional to or from cirrus.[74] Mid-level altostratus and multi-level nimbostratus always have a flat or diffuse appearance and are therefore not subdivided into species. Low stratus is of the species nebulosus[73] except when broken up into ragged sheets of stratus fractus (see below).[59][72][75]
Cirriform clouds have three non-convective species that can form in mostly stable airmass conditions. Cirrus fibratus comprise filaments that may be straight, wavy, or occasionally twisted by non-convective wind shear.[74] The species uncinus is similar but has upturned hooks at the ends. Cirrus spissatus appear as opaque patches that can show light grey shading.[72]
Stratocumuliform genus-types (cirrocumulus, altocumulus, and stratocumulus) that appear in mostly stable air have two species each. The stratiformis species normally occur in extensive sheets or in smaller patches where there is only minimal convective activity.[76] Clouds of the lenticularis species tend to have lens-like shapes tapered at the ends. They are most commonly seen as orographic mountain-wave clouds, but can occur anywhere in the troposphere where there is strong wind shear combined with sufficient airmass stability to maintain a generally flat cloud structure. These two species can be found in the high, middle, or low levels of the troposphere depending on the stratocumuliform genus or genera present at any given time.[59][72][75]
The species fractus shows variable instability because it can be a subdivision of genus-types of different physical forms that have different stability characteristics. This subtype can be in the form of ragged but mostly stable stratiform sheets (stratus fractus) or small ragged cumuliform heaps with somewhat greater instability (cumulus fractus).[72][75][77] When clouds of this species are associated with precipitating cloud systems of considerable vertical and sometimes horizontal extent, they are also classified as accessory clouds under the name pannus (see section on supplementary features).[78]
These species are subdivisions of genus types that can occur in partly unstable air. The species castellanus appears when a mostly stable stratocumuliform or cirriform layer becomes disturbed by localized areas of airmass instability, usually in the morning or afternoon. This results in the formation of cumuliform buildups of limited convection arising from a common stratiform base.[79] Castellanus resembles the turrets of a castle when viewed from the side, and can be found with stratocumuliform genera at any tropospheric altitude level and with limited-convective patches of high-level cirrus.[80] Tufted clouds of the more detached floccus species are subdivisions of genus-types which may be cirriform or stratocumuliform in overall structure. They are sometimes seen with cirrus, cirrocumulus, altocumulus, and stratocumulus.[81]
A newly recognized species of stratocumulus or altocumulus has been given the name volutus, a roll cloud that can occur ahead of a cumulonimbus formation.[82] There are some volutus clouds that form as a consequence of interactions with specific geographical features rather than with a parent cloud. Perhaps the strangest geographically specific cloud of this type is the Morning Glory, a rolling cylindrical cloud that appears unpredictably over the Gulf of Carpentaria in Northern Australia. Associated with a powerful "ripple" in the atmosphere, the cloud may be "surfed" in glider aircraft.[83]
More general airmass instability in the troposphere tends to produce clouds of the more freely convective cumulus genus type, whose species are mainly indicators of degrees of atmospheric instability and resultant vertical development of the clouds. A cumulus cloud initially forms in the low level of the troposphere as a cloudlet of the species humilis that shows only slight vertical development. If the air becomes more unstable, the cloud tends to grow vertically into the species mediocris, then congestus, the tallest cumulus species[72] which is the same type that the International Civil Aviation Organization refers to as 'towering cumulus'.[66]
With highly unstable atmospheric conditions, large cumulus may continue to grow into cumulonimbus calvus (essentially a very tall congestus cloud that produces thunder), then ultimately into the species capillatus when supercooled water droplets at the top of the cloud turn into ice crystals giving it a cirriform appearance.[72][75]
Genus and species types are further subdivided into varieties whose names can appear after the species name to provide a fuller description of a cloud. Some cloud varieties are not restricted to a specific altitude level or form, and can therefore be common to more than one genus or species.[84]
All cloud varieties fall into one of two main groups. One group identifies the opacities of particular low and mid-level cloud structures and comprises the varieties translucidus (thin translucent), perlucidus (thick opaque with translucent or very small clear breaks), and opacus (thick opaque). These varieties are always identifiable for cloud genera and species with variable opacity. All three are associated with the stratiformis species of altocumulus and stratocumulus. However, only two varieties are seen with altostratus and stratus nebulosus whose uniform structures prevent the formation of a perlucidus variety. Opacity-based varieties are not applied to high clouds because they are always translucent, or in the case of cirrus spissatus, always opaque.[84][85]
A second group describes the occasional arrangements of cloud structures into particular patterns that are discernible by a surface-based observer (cloud fields usually being visible only from a significant altitude above the formations). These varieties are not always present with the genera and species with which they are otherwise associated, but only appear when atmospheric conditions favor their formation. Intortus and vertebratus varieties occur on occasion with cirrus fibratus. They are respectively filaments twisted into irregular shapes, and those that are arranged in fishbone patterns, usually by uneven wind currents that favor the formation of these varieties. The variety radiatus is associated with cloud rows of a particular type that appear to converge at the horizon. It is sometimes seen with the fibratus and uncinus species of cirrus, the stratiformis species of altocumulus and stratocumulus, the mediocris and sometimes humilis species of cumulus,[87][88] and with the genus altostratus.[89]
Another variety, duplicatus (closely spaced layers of the same type, one above the other), is sometimes found with cirrus of both the fibratus and uncinus species, and with altocumulus and stratocumulus of the species stratiformis and lenticularis. The variety undulatus (having a wavy undulating base) can occur with any clouds of the species stratiformis or lenticularis, and with altostratus. It is only rarely observed with stratus nebulosus. The variety lacunosus is caused by localized downdrafts that create circular holes in the form of a honeycomb or net. It is occasionally seen with cirrocumulus and altocumulus of the species stratiformis, castellanus, and floccus, and with stratocumulus of the species stratiformis and castellanus.[84][85]
It is possible for some species to show combined varieties at one time, especially if one variety is opacity-based and the other is pattern-based. An example of this would be a layer of altocumulus stratiformis arranged in seemingly converging rows separated by small breaks. The full technical name of a cloud in this configuration would be altocumulus stratiformis radiatus perlucidus, which would identify respectively its genus, species, and two combined varieties.[75][84][85]
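A trivial Python sketch (not a WMO utility) of how such a full name is assembled, with the genus first, then the species, then any combined varieties:

def cloud_name(genus, species=None, varieties=()):
    # Order: genus, species, then opacity- and/or pattern-based varieties
    parts = [genus]
    if species:
        parts.append(species)
    parts.extend(varieties)
    return " ".join(parts)

print(cloud_name("altocumulus", "stratiformis", ["radiatus", "perlucidus"]))
# altocumulus stratiformis radiatus perlucidus, the example given in the text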
Supplementary features and accessory clouds are not further subdivisions of cloud types below the species and variety level. Rather, they are either hydrometeors or special cloud types with their own Latin names that form in association with certain cloud genera, species, and varieties.[75][85] Supplementary features, whether in the form of clouds or precipitation, are directly attached to the main genus-cloud. Accessory clouds, by contrast, are generally detached from the main cloud.[90]
One group of supplementary features are not actual cloud formations, but precipitation that falls when water droplets or ice crystals that make up visible clouds have grown too heavy to remain aloft. Virga is a feature seen with clouds producing precipitation that evaporates before reaching the ground, these being of the genera cirrocumulus, altocumulus, altostratus, nimbostratus, stratocumulus, cumulus, and cumulonimbus.[90]
When the precipitation reaches the ground without completely evaporating, it is designated as the feature praecipitatio.[91] This normally occurs with altostratus opacus, which can produce widespread but usually light precipitation, and with thicker clouds that show significant vertical development. Of the latter, upward-growing cumulus mediocris produces only isolated light showers, while downward-growing nimbostratus is capable of heavier, more extensive precipitation. Towering vertical clouds have the greatest ability to produce intense precipitation events, but these tend to be localized unless organized along fast-moving cold fronts. Showers of moderate to heavy intensity can fall from cumulus congestus clouds. Cumulonimbus, the largest of all cloud genera, has the capacity to produce very heavy showers. Low stratus clouds usually produce only light precipitation, but this always occurs as the feature praecipitatio because this cloud genus lies too close to the ground to allow for the formation of virga.[75][85][90]
Incus is the most type-specific supplementary feature, seen only with cumulonimbus of the species capillatus. A cumulonimbus incus cloud top is one that has spread out into a clear anvil shape as a result of rising air currents hitting the stability layer at the tropopause where the air no longer continues to get colder with increasing altitude.[92]
The mamma feature forms on the bases of clouds as downward-facing bubble-like protuberances caused by localized downdrafts within the cloud. It is also sometimes called mammatus, an earlier version of the term used before a standardization of Latin nomenclature brought about by the World Meteorological Organization during the 20th century. The best-known is cumulonimbus with mammatus, but the mamma feature is also seen occasionally with cirrus, cirrocumulus, altocumulus, altostratus, and stratocumulus.[90]
A tuba feature is a cloud column that may hang from the bottom of a cumulus or cumulonimbus. A newly formed or poorly organized column might be comparatively benign, but can quickly intensify into a funnel cloud or tornado.[90][93][94]
An arcus feature is a roll cloud with ragged edges attached to the lower front part of cumulus congestus or cumulonimbus that forms along the leading edge of a squall line or thunderstorm outflow.[95] A large arcus formation can have the appearance of a dark menacing arch.[90]
Several new supplementary features have been formally recognized by the World Meteorological Organization (WMO). The feature fluctus can form under conditions of strong atmospheric wind shear when a stratocumulus, altocumulus, or cirrus cloud breaks into regularly spaced crests. This variant is sometimes known informally as a Kelvin–Helmholtz (wave) cloud. This phenomenon has also been observed in cloud formations over other planets and even in the sun's atmosphere.[96] Another highly disturbed but more chaotic wave-like cloud feature associated with stratocumulus or altocumulus cloud has been given the Latin name asperitas. The supplementary feature cavum is a circular fall-streak hole that occasionally forms in a thin layer of supercooled altocumulus or cirrocumulus. Fall streaks consisting of virga or wisps of cirrus are usually seen beneath the hole as ice crystals fall out to a lower altitude. This type of hole is usually larger than typical lacunosus holes. A murus feature is a cumulonimbus wall cloud with a lowering, rotating cloud base that can lead to the development of tornadoes. A cauda feature is a tail cloud that extends horizontally away from the murus cloud and is the result of air feeding into the storm.[82]
Supplementary cloud formations detached from the main cloud are known as accessory clouds.[75][85][90] The heavier precipitating clouds – nimbostratus, towering cumulus (cumulus congestus), and cumulonimbus – typically see the formation, within their precipitation, of the pannus feature: low, ragged clouds of the genera and species cumulus fractus or stratus fractus.[78]
A group of accessory clouds comprises formations that are associated mainly with upward-growing cumuliform and cumulonimbiform clouds of free convection. Pileus is a cap cloud that can form over a cumulonimbus or large cumulus cloud,[97] whereas a velum feature is a thin horizontal sheet that sometimes forms like an apron around the middle or in front of the parent cloud.[90] An accessory cloud recently officially recognized by the World Meteorological Organization is the flumen, also known more informally as the beaver's tail. It is formed by the warm, humid inflow of a supercell thunderstorm, and can be mistaken for a tornado. Although the flumen can indicate a tornado risk, it is similar in appearance to pannus or scud clouds and does not rotate.[82]
Clouds initially form in clear air or become clouds when fog rises above surface level. The genus of a newly formed cloud is determined mainly by air mass characteristics such as stability and moisture content. If these characteristics change over time, the genus tends to change accordingly. When this happens, the original genus is called a mother cloud. If the mother cloud retains much of its original form after the appearance of the new genus, it is termed a genitus cloud. One example of this is stratocumulus cumulogenitus, a stratocumulus cloud formed by the partial spreading of a cumulus type when there is a loss of convective lift. If the mother cloud undergoes a complete change in genus, it is considered to be a mutatus cloud.[98]
The genitus and mutatus categories have been expanded to include certain types that do not originate from pre-existing clouds. The term flammagenitus (Latin for 'fire-made') applies to cumulus congestus or cumulonimbus that are formed by large scale fires or volcanic eruptions. Smaller low-level "pyrocumulus" or "fumulus" clouds formed by contained industrial activity are now classified as cumulus homogenitus (Latin for 'man-made'). Contrails formed from the exhaust of aircraft flying in the upper level of the troposphere can persist and spread into formations resembling cirrus which are designated cirrus homogenitus. If a cirrus homogenitus cloud changes fully to any of the high-level genera, they are termed cirrus, cirrostratus, or cirrocumulus homomutatus. Stratus cataractagenitus (Latin for 'cataract-made') are generated by the spray from waterfalls. Silvagenitus (Latin for 'forest-made') is a stratus cloud that forms as water vapor is added to the air above a forest canopy.[98]
Stratocumulus clouds can be organized into "fields" that take on certain specially classified shapes and characteristics. In general, these fields are more discernible from high altitudes than from ground level. They can often be found in the following forms:
These patterns are formed from a phenomenon known as a Kármán vortex, which is named after the engineer and fluid dynamicist Theodore von Kármán.[101] Wind-driven clouds can form into parallel rows that follow the wind direction. When the wind and clouds encounter high elevation land features such as vertically prominent islands, they can form eddies around the high land masses that give the clouds a twisted appearance.[102]
Although the local distribution of clouds can be significantly influenced by topography, the global prevalence of cloud cover in the troposphere tends to vary more by latitude. It is most prevalent in and along low pressure zones of surface tropospheric convergence which encircle the Earth close to the equator and near the 50th parallels of latitude in the northern and southern hemispheres.[105] The adiabatic cooling processes that lead to the creation of clouds by way of lifting agents are all associated with convergence; a process that involves the horizontal inflow and accumulation of air at a given location, as well as the rate at which this happens.[106] Near the equator, increased cloudiness is due to the presence of the low-pressure Intertropical Convergence Zone (ITCZ) where very warm and unstable air promotes mostly cumuliform and cumulonimbiform clouds.[107] Clouds of virtually any type can form along the mid-latitude convergence zones depending on the stability and moisture content of the air. These extratropical convergence zones are occupied by the polar fronts where air masses of polar origin meet and clash with those of tropical or subtropical origin.[108] This leads to the formation of weather-making extratropical cyclones composed of cloud systems that may be stable or unstable to varying degrees according to the stability characteristics of the various airmasses that are in conflict.[109]
Divergence is the opposite of convergence. In the Earth's troposphere, it involves the horizontal outflow of air from the upper part of a rising column of air, or from the lower part of a subsiding column often associated with an area or ridge of high pressure.[106] Cloudiness tends to be least prevalent near the poles and in the subtropics close to the 30th parallels, north and south. The latter are sometimes referred to as the horse latitudes. The presence of a large-scale high-pressure subtropical ridge on each side of the equator reduces cloudiness at these low latitudes.[110] Similar patterns also occur at higher latitudes in both hemispheres.[111]
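In formal terms – a standard definition added here for clarity rather than taken from the cited sources – convergence and divergence refer to the sign of the horizontal divergence of the wind field, where u and v are the eastward and northward wind components:

\delta = \nabla_H \cdot \vec{V} = \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}

Negative values of \delta (convergence) imply net horizontal inflow and, by mass continuity, rising air that favors cloud formation, while positive values (divergence) imply subsiding air and suppressed cloudiness.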
The luminance or brightness of a cloud is determined by how light is reflected, scattered, and transmitted by the cloud's particles. Its brightness may also be affected by the presence of haze or photometeors such as halos and rainbows.[112] In the troposphere, dense, deep clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top.[113] Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the cloud. As a result, the cloud base can vary from very light to very dark grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. High thin tropospheric clouds reflect less light because of the comparatively low concentration of constituent ice crystals or supercooled water droplets which results in a slightly off-white appearance. However, a thick dense ice-crystal cloud appears brilliant white with pronounced grey shading because of its greater reflectivity.[112]
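As a first-order illustration of why the light dims with depth – a simplified Beer–Lambert picture that neglects the multiple scattering which actually dominates inside a cloud, so it is only a rough guide – the directly transmitted intensity falls off roughly exponentially with optical depth \tau:

I(\tau) \approx I_0 \, e^{-\tau}

so a cloud only a few optical depths thick already passes very little direct sunlight to its base, which is consistent with thick cloud bases looking dark grey while the sunlit tops remain brilliant white.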
As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets. If the droplets become too large and heavy to be kept aloft by the air circulation, they will fall from the cloud as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, a percentage of the light that enters the cloud is not reflected back out but is absorbed giving the cloud a darker look. A simple example of this is one's being able to see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black.[114]
Striking cloud colorations can be seen at any altitude, with the color of a cloud usually being the same as the incident light.[115] During daytime when the sun is relatively high in the sky, tropospheric clouds generally appear bright white on top with varying shades of grey underneath. Thin clouds may look white or appear to have acquired the color of their environment or background. Red, orange, and pink clouds occur almost entirely at sunrise/sunset and are the result of the scattering of sunlight by the atmosphere. When the sun is just below the horizon, low-level clouds are gray, middle clouds appear rose-colored, and high clouds are white or off-white. Clouds at night are black or dark grey in a moonless sky, or whitish when illuminated by the moon. They may also reflect the colors of large fires, city lights, or auroras that might be present.[115]
A cumulonimbus cloud that appears to have a greenish or bluish tint is a sign that it contains extremely large amounts of water; the hail or rain scatters light in a way that gives the cloud a blue color. A green colorization occurs mostly late in the day when the sun is comparatively low in the sky and the incident sunlight has a reddish tinge that appears green when illuminating a very tall bluish cloud. Supercell type storms are more likely to be characterized by this, but any storm can appear this way. Coloration such as this does not directly indicate that a thunderstorm is severe; it only confirms the potential. Since a green/blue tint signifies copious amounts of water, it also implies a strong updraft to support that water, high winds from the storm raining out, and wet hail, all elements that improve the chance for the storm to become severe. In addition, the stronger the updraft is, the more likely the storm is to undergo tornadogenesis and to produce large hail and high winds.[116]
Yellowish clouds may be seen in the troposphere in the late spring through early fall months during forest fire season; the yellow color is due to the presence of pollutants in the smoke. Yellowish clouds caused by the presence of nitrogen dioxide are sometimes seen in urban areas with high air pollution levels.[117]
Stratocumulus stratiformis and small castellanus made orange by the sun rising
An occurrence of cloud iridescence with altocumulus volutus and cirrocumulus stratiformis
Sunset reflecting shades of pink onto grey stratocumulus stratiformis translucidus (becoming perlucidus in the background)
Stratocumulus stratiformis perlucidus before sunset. Bangalore, India.
Late-summer rainstorm in Denmark. Nearly black color of base indicates main cloud in foreground probably cumulonimbus.
Particles in the atmosphere and the sun's angle enhance colors of stratocumulus cumulogenitus at evening twilight
Tropospheric clouds exert numerous influences on Earth's troposphere and climate. First and foremost, they are the source of precipitation, thereby greatly influencing the distribution and amount of precipitation. Because of their differential buoyancy relative to surrounding cloud-free air, clouds can be associated with vertical motions of the air that may be convective, frontal, or cyclonic. The motion is upward if the clouds are less dense than the surrounding air, because condensation of water vapor releases heat, warming the air and thereby decreasing its density. This can lead to downward motion because lifting of the air results in cooling that increases its density. All of these effects are subtly dependent on the vertical temperature and moisture structure of the atmosphere and result in a major redistribution of heat that affects the Earth's climate.[118]
The complexity and diversity of clouds in the troposphere is a major reason for difficulty in quantifying the effects of clouds on climate and climate change. On the one hand, white cloud tops promote cooling of Earth's surface by reflecting shortwave radiation (visible and near infrared) from the sun, diminishing the amount of solar radiation that is absorbed at the surface, enhancing the Earth's albedo. Most of the sunlight that reaches the ground is absorbed, warming the surface, which emits radiation upward at longer, infrared, wavelengths. At these wavelengths, however, water in the clouds acts as an efficient absorber. The water reacts by radiating, also in the infrared, both upward and downward, and the downward longwave radiation results in increased warming at the surface. This is analogous to the greenhouse effect of greenhouse gases and water vapor.[118]
High-level genus-types particularly show this duality with both short-wave albedo cooling and long-wave greenhouse warming effects. On the whole, ice-crystal clouds in the upper troposphere (cirrus) tend to favor net warming.[119][120] However, the cooling effect is dominant with mid-level and low clouds, especially when they form in extensive sheets.[119] Measurements by NASA indicate that on the whole, the effects of low and mid-level clouds that tend to promote cooling outweigh the warming effects of high layers and the variable outcomes associated with vertically developed clouds.[119]
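A standard way to quantify this balance – the definition below is conventional in climate science and is added here for clarity, not taken from the cited measurements – is the cloud radiative effect (CRE), the difference between all-sky and clear-sky net downward radiative flux at the top of the atmosphere, split into shortwave and longwave parts:

\mathrm{CRE} = \left(F^{\mathrm{net}}_{\mathrm{SW,\,all}} - F^{\mathrm{net}}_{\mathrm{SW,\,clear}}\right) + \left(F^{\mathrm{net}}_{\mathrm{LW,\,all}} - F^{\mathrm{net}}_{\mathrm{LW,\,clear}}\right)

The shortwave term is negative (reflected sunlight cools the surface–atmosphere system) and the longwave term is positive (trapped thermal radiation warms it); the NASA result cited above corresponds to the shortwave term outweighing the longwave term in the global mean.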
As difficult as it is to evaluate the influences of current clouds on current climate, it is even more problematic to predict changes in cloud patterns and properties in a future, warmer climate, and the resultant cloud influences on future climate. In a warmer climate more water would enter the atmosphere by evaporation at the surface; as clouds are formed from water vapor, cloudiness would be expected to increase. But in a warmer climate, higher temperatures would tend to evaporate clouds.[121] Both of these statements are considered accurate, and both phenomena, known as cloud feedbacks, are found in climate model calculations. Broadly speaking, if clouds, especially low clouds, increase in a warmer climate, the resultant cooling effect leads to a negative feedback in climate response to increased greenhouse gases. But if low clouds decrease, or if high clouds increase, the feedback is positive. Differing amounts of these feedbacks are the principal reason for differences in climate sensitivities of current global climate models. As a consequence, much research has focused on the response of low and vertical clouds to a changing climate. Leading global models produce quite different results, however, with some showing increasing low clouds and others showing decreases.[122][123] For these reasons the role of tropospheric clouds in regulating weather and climate remains a leading source of uncertainty in global warming projections.[124][125]
Polar stratospheric clouds (PSCs) form in the lowest part of the stratosphere during the winter, at the altitude and during the season that produces the coldest temperatures and therefore the best chances of triggering condensation caused by adiabatic cooling. Moisture is scarce in the stratosphere, so nacreous and non-nacreous clouds at this altitude range are restricted to polar regions in the winter where the air is coldest.[6]
PSCs show some variation in structure according to their chemical makeup and atmospheric conditions, but are limited to a single very high range of altitude of about 15,000–25,000 m (49,200–82,000 ft), so they are not classified into altitude levels, genus types, species, or varieties. There is no Latin nomenclature in the manner of tropospheric clouds, but rather descriptive names using common English.[6]
Supercooled nitric acid and water PSCs, sometimes known as type 1, typically have a stratiform appearance resembling cirrostratus or haze, but because they are not frozen into crystals, they do not show the pastel colors of the nacreous types. This type of PSC has been identified as a cause of ozone depletion in the stratosphere.[126] The frozen nacreous types are typically very thin with mother-of-pearl colorations and an undulating cirriform or lenticular (stratocumuliform) appearance. These are sometimes known as type 2.[127][128]
Polar mesospheric clouds form at an extreme-level altitude range of about 80 to 85 km (50 to 53 mi). They are given the Latin name noctilucent because of their illumination well after sunset and before sunrise. They typically have a bluish or silvery white coloration that can resemble brightly illuminated cirrus. Noctilucent clouds may occasionally take on more of a red or orange hue.[6] They are not common or widespread enough to have a significant effect on climate.[129] However, an increasing frequency of occurrence of noctilucent clouds since the 19th century may be the result of climate change.[130]
Noctilucent clouds are the highest in the atmosphere and form near the top of the mesosphere at about ten times the altitude of tropospheric high clouds.[131] From ground level, they can occasionally be seen illuminated by the sun during deep twilight. Ongoing research indicates that convective lift in the mesosphere is strong enough during the polar summer to cause adiabatic cooling of a small amount of water vapor to the point of saturation. This tends to produce the coldest temperatures in the entire atmosphere just below the mesopause. These conditions result in the best environment for the formation of polar mesospheric clouds.[129] There is also evidence that smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent cloud.[132]
Noctilucent clouds have four major types based on physical structure and appearance. Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus or poorly defined cirrus.[133] Type II bands are long streaks that often occur in groups arranged roughly parallel to each other. They are usually more widely spaced than the bands or elements seen with cirrocumulus clouds.[134] Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus.[135] Type IV whirls are partial or, more rarely, complete rings of cloud with dark centres.[136]
Distribution in the mesosphere is similar to the stratosphere except at much higher altitudes. Because of the need for maximum cooling of the water vapor to produce noctilucent clouds, their distribution tends to be restricted to polar regions of Earth. A major seasonal difference is that convective lift from below the mesosphere pushes very scarce water vapor to higher colder altitudes required for cloud formation during the respective summer seasons in the northern and southern hemispheres. Sightings are rare more than 45 degrees south of the north pole or north of the south pole.[6]
Cloud cover has been seen on most other planets in the Solar System. Venus's thick clouds are composed of sulfur dioxide (due to volcanic activity) and appear to be almost entirely stratiform.[137] They are arranged in three main layers at altitudes of 45 to 65 km that obscure the planet's surface and can produce virga. No embedded cumuliform types have been identified, but broken stratocumuliform wave formations are sometimes seen in the top layer that reveal more continuous layer clouds underneath.[138] On Mars, noctilucent, cirrus, cirrocumulus and stratocumulus composed of water-ice have been detected mostly near the poles.[139][140] Water-ice fogs have also been detected on Mars.[141]
Both Jupiter and Saturn have an outer cirriform cloud deck composed of ammonia,[142][143] an intermediate stratiform haze-cloud layer made of ammonium hydrosulfide, and an inner deck of cumulus water clouds.[144][145] Embedded cumulonimbus are known to exist near the Great Red Spot on Jupiter.[146][147] The same category-types can be found covering Uranus and Neptune, but are all composed of methane.[148][149][150][151][152][153] Saturn's moon Titan has cirrus clouds believed to be composed largely of methane.[154][155] The Cassini–Huygens Saturn mission uncovered evidence of polar stratospheric clouds[156] and a methane cycle on Titan, including lakes near the poles and fluvial channels on the surface of the moon.[157]
Some planets outside the Solar System are known to have atmospheric clouds. In October 2013, the detection of high altitude optically thick clouds in the atmosphere of exoplanet Kepler-7b was announced,[158][159] and, in December 2013, in the atmospheres of GJ 436 b and GJ 1214 b.[160][161][162][163]
Clouds play an important role in various cultures and religious traditions. The ancient Akkadians believed that the clouds were the breasts of the sky goddess Antu[165] and that rain was milk from her breasts.[165] In Exodus 13:21–22, Yahweh is described as guiding the Israelites through the desert in the form of a "pillar of cloud" by day and a "pillar of fire" by night.[164]
In the ancient Greek comedy The Clouds, written by Aristophanes and first performed at the City Dionysia in 423 BC, the philosopher Socrates declares that the Clouds are the only true deities[166] and tells the main character Strepsiades not to worship any deities other than the Clouds, but to pay homage to them alone.[166] In the play, the Clouds change shape to reveal the true nature of whoever is looking at them,[167][166][168] turning into centaurs at the sight of a long-haired politician, wolves at the sight of the embezzler Simon, deer at the sight of the coward Cleonymus, and mortal women at the sight of the effeminate informer Cleisthenes.[167][168][166] They are hailed the source of inspiration to comic poets and philosophers;[166] they are masters of rhetoric, regarding eloquence and sophistry alike as their "friends".[166]
In China, clouds are symbols of luck and happiness.[169] Overlapping clouds are thought to imply eternal happiness[169] and clouds of different colors are said to indicate "multiplied blessings".[169]
en/1151.html.txt
ADDED
In meteorology, a cloud is an aerosol consisting of a visible mass of minute liquid droplets, frozen crystals, or other particles suspended in the atmosphere of a planetary body or similar space.[1] Water or various other chemicals may compose the droplets and crystals. On Earth, clouds are formed as a result of saturation of the air when it is cooled to its dew point, or when it gains sufficient moisture (usually in the form of water vapor) from an adjacent source to raise the dew point to the ambient temperature.
They are seen in the Earth's homosphere, which includes the troposphere, stratosphere, and mesosphere. Nephology is the science of clouds, which is undertaken in the cloud physics branch of meteorology. There are two methods of naming clouds in their respective layers of the homosphere, Latin and common.
Genus types in the troposphere, the atmospheric layer closest to Earth's surface, have Latin names due to the universal adoption of Luke Howard's nomenclature that was formally proposed in 1802. It became the basis of a modern international system that divides clouds into five physical forms which can be divided or classified further into altitude levels to derive the ten basic genera. The main representative cloud types for each of these forms are stratus, cirrus, stratocumulus, cumulus, and cumulonimbus. Low-level stratiform and stratocumuliform genera do not have any altitude-related prefixes. However mid-level variants of the same physical forms are given the prefix alto- while high-level types carry the prefix cirro-. The other main forms never have prefixes indicating altitude level. Cirriform clouds are always high-level while cumuliform and cumulonimbiform clouds are classified formally as low-level. The latter are also more informally characterized as multi-level or vertical as indicated by the cumulo- prefix. Most of the ten genera derived by this method of classification can be subdivided into species and further subdivided into varieties. Very low stratiform clouds that extend down to the Earth's surface are given the common names fog and mist, but have no Latin names.
In the stratosphere and mesosphere, clouds have common names for their main types. They may have the appearance of stratiform veils or sheets, cirriform wisps, or stratocumuliform bands or ripples. They are seen infrequently, mostly in the polar regions of Earth. Clouds have been observed in the atmospheres of other planets and moons in the Solar System and beyond. However, due to their different temperature characteristics, they are often composed of other substances such as methane, ammonia, and sulfuric acid, as well as water.
Tropospheric clouds can have a direct effect on climate change on Earth. They may reflect incoming rays from the sun which can contribute to a cooling effect where and when these clouds occur, or trap longer wave radiation that reflects back up from the Earth's surface which can cause a warming effect. The altitude, form, and thickness of the clouds are the main factors that affect the local heating or cooling of Earth and the atmosphere. Clouds that form above the troposphere are too scarce and too thin to have any influence on climate change.
The tabular overview that follows is very broad in scope. It draws from several methods of cloud classification, both formal and informal, used in different levels of the Earth's homosphere by a number of cited authorities. Despite some differences in methodologies and terminologies, the classification schemes seen in this article can be harmonized by using an informal cross-classification of physical forms and altitude levels to derive the 10 tropospheric genera, the fog and mist that forms at surface level, and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size and structure which can affect both forms and levels. This table should not be seen as a strict or singular classification, but as an illustration of how various major cloud types are related to each other and defined through a full range of altitude levels from Earth's surface to the "edge of space".
The origin of the term "cloud" can be found in the Old English words clud or clod, meaning a hill or a mass of rock. Around the beginning of the 13th century, the word came to be used as a metaphor for rain clouds, because of the similarity in appearance between a mass of rock and cumulus heap cloud. Over time, the metaphoric usage of the word supplanted the Old English weolcan, which had been the literal term for clouds in general.[2][3]
Ancient cloud studies were not made in isolation, but were observed in combination with other weather elements and even other natural sciences. Around 340 BC, Greek philosopher Aristotle wrote Meteorologica, a work which represented the sum of knowledge of the time about natural science, including weather and climate. For the first time, precipitation and the clouds from which precipitation fell were called meteors, which originate from the Greek word meteoros, meaning 'high in the sky'. From that word came the modern term meteorology, the study of clouds and weather. Meteorologica was based on intuition and simple observation, but not on what is now considered the scientific method. Nevertheless, it was the first known work that attempted to treat a broad range of meteorological topics in a systematic way, especially the hydrological cycle.[4]
After centuries of speculative theories about the formation and behavior of clouds, the first truly scientific studies were undertaken by Luke Howard in England and Jean-Baptiste Lamarck in France. Howard was a methodical observer with a strong grounding in the Latin language, and used his background to classify the various tropospheric cloud types during 1802. He believed that the changing cloud forms in the sky could hold the key to weather forecasting. Lamarck had worked independently on cloud classification the same year and had come up with a different naming scheme that failed to make an impression even in his home country of France because it used unusual French names for cloud types. His system of nomenclature included 12 categories of clouds, with such names as (translated from French) hazy clouds, dappled clouds, and broom-like clouds. By contrast, Howard used universally accepted Latin, which caught on quickly after it was published in 1803.[5] As a sign of the popularity of the naming scheme, German dramatist and poet Johann Wolfgang von Goethe composed four poems about clouds, dedicating them to Howard. An elaboration of Howard's system was eventually formally adopted by the International Meteorological Conference in 1891.[5] This system covered only the tropospheric cloud types, but the discovery of clouds above the troposphere during the late 19th century eventually led to the creation of separate classification schemes using common names for these very high clouds, which were still broadly similar to some cloud forms identified in the troposphere.[6]
Terrestrial clouds can be found throughout most of the homosphere, which includes the troposphere, stratosphere, and mesosphere. Within these layers of the atmosphere, air can become saturated as a result of being cooled to its dew point or by having moisture added from an adjacent source.[7] In the latter case, saturation occurs when the dew point is raised to the ambient air temperature.
Adiabatic cooling occurs when one or more of three possible lifting agents – convective, cyclonic/frontal, or orographic – cause a parcel of air containing invisible water vapor to rise and cool to its dew point, the temperature at which the air becomes saturated; the cooling is adiabatic because the rising parcel expands as the surrounding pressure decreases.[8] As the air is cooled to its dew point and becomes saturated, water vapor normally condenses to form cloud drops. This condensation normally occurs on cloud condensation nuclei such as salt or dust particles that are small enough to be held aloft by normal circulation of the air.[9][10]
One agent is the convective upward motion of air caused by daytime solar heating at surface level.[9] Airmass instability allows for the formation of cumuliform clouds that can produce showers if the air is sufficiently moist.[11] On moderately rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.[12]
Frontal and cyclonic lift occur when stable air is forced aloft at weather fronts and around centers of low pressure by a process called convergence.[13] Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds are usually embedded in the main precipitating cloud layer.[14] Cold fronts are usually faster moving and generate a narrower line of clouds, which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm airmass just ahead of the front.[15]
A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift).[9] If the air is generally stable, nothing more than lenticular cap clouds form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.[16]
Along with adiabatic cooling that requires a lifting agent, three major nonadiabatic mechanisms exist for lowering the temperature of the air to its dew point. Conductive, radiational, and evaporative cooling require no lifting mechanism and can cause condensation at surface level resulting in the formation of fog.[17][18][19]
Water vapor can be added to the air from several main sources as a way of achieving saturation without any cooling process: water or moist ground,[20][21][22] precipitation or virga,[23] and transpiration from plants.[24]
Tropospheric classification is based on a hierarchy of categories with physical forms and altitude levels at the top.[25][26] These are cross-classified into a total of ten genus types, most of which can be divided into species and further subdivided into varieties which are at the bottom of the hierarchy.[27]
Clouds in the troposphere assume five physical forms based on structure and process of formation. These forms are commonly used for the purpose of satellite analysis.[25] They are given below in approximate ascending order of instability or convective activity.[28]
Nonconvective stratiform clouds appear in stable airmass conditions and, in general, have flat, sheet-like structures that can form at any altitude in the troposphere.[29] The stratiform group is divided by altitude range into the genera cirrostratus (high-level), altostratus (mid-level), stratus (low-level), and nimbostratus (multi-level).[26] Fog is commonly considered a surface-based cloud layer.[16] The fog may form at surface level in clear air or it may be the result of a very low stratus cloud subsiding to ground or sea level. Conversely, low stratiform clouds result when advection fog is lifted above surface level during breezy conditions.
Cirriform clouds in the troposphere are of the genus cirrus and have the appearance of detached or semimerged filaments. They form at high tropospheric altitudes in air that is mostly stable with little or no convective activity, although denser patches may occasionally show buildups caused by limited high-level convection where the air is partly unstable.[30] Clouds resembling cirrus can be found above the troposphere but are classified separately using common names.
Clouds of this structure have both cumuliform and stratiform characteristics in the form of rolls, ripples, or elements.[31] They generally form as a result of limited convection in an otherwise mostly stable airmass topped by an inversion layer.[32] If the inversion layer is absent or higher in the troposphere, increased airmass instability may cause the cloud layers to develop tops in the form of turrets consisting of embedded cumuliform buildups.[33] The stratocumuliform group is divided into cirrocumulus (high-level), altocumulus (mid-level), and stratocumulus (low-level).[31]
Cumuliform clouds generally appear in isolated heaps or tufts.[34][35] They are the product of localized but generally free-convective lift where no inversion layers are in the troposphere to limit vertical growth. In general, small cumuliform clouds tend to indicate comparatively weak instability. Larger cumuliform types are a sign of greater atmospheric instability and convective activity.[36] Depending on their vertical size, clouds of the cumulus genus type may be low-level or multi-level with moderate to towering vertical extent.[26]
The largest free-convective clouds comprise the genus cumulonimbus, which have towering vertical extent. They occur in highly unstable air[9] and often have fuzzy outlines at the upper parts of the clouds that sometimes include anvil tops.[31] These clouds are the product of very strong convection that can penetrate the lower stratosphere.
Tropospheric clouds form in any of three levels (formerly called étages) based on altitude range above the Earth's surface. The grouping of clouds into levels is commonly done for the purposes of cloud atlases, surface weather observations,[26] and weather maps.[37] The base-height range for each level varies depending on the latitudinal geographical zone.[26] Each altitude level comprises two or three genus-types differentiated mainly by physical form.[38][31]
The standard levels and genus-types are summarised below in approximate descending order of the altitude at which each is normally based.[39] Multi-level clouds with significant vertical extent are separately listed and summarized in approximate ascending order of instability or convective activity.[28]
High clouds form at altitudes of 3,000 to 7,600 m (10,000 to 25,000 ft) in the polar regions, 5,000 to 12,200 m (16,500 to 40,000 ft) in the temperate regions, and 6,100 to 18,300 m (20,000 to 60,000 ft) in the tropics.[26] All cirriform clouds are classified as high and thus constitute a single genus, cirrus (Ci). Stratocumuliform and stratiform clouds in the high altitude range carry the prefix cirro-, yielding the respective genus names cirrocumulus (Cc) and cirrostratus (Cs). When limited-resolution satellite images of high clouds are analysed without supporting data from direct human observations, distinguishing between individual forms or genus types becomes impossible, and they are then collectively identified as high-type (or informally as cirrus-type, though not all high clouds are of the cirrus form or genus).[40]
Nonvertical clouds in the middle level are prefixed by alto-, yielding the genus names altocumulus (Ac) for stratocumuliform types and altostratus (As) for stratiform types. These clouds can form as low as 2,000 m (6,500 ft) above surface at any latitude, but may be based as high as 4,000 m (13,000 ft) near the poles, 7,000 m (23,000 ft) at midlatitudes, and 7,600 m (25,000 ft) in the tropics.[26] As with high clouds, the main genus types are easily identified by the human eye, but distinguishing between them using satellite photography is not possible. Without the support of human observations, these clouds are usually collectively identified as middle-type on satellite images.[40]
Low clouds are found from near the surface up to 2,000 m (6,500 ft).[26] Genus types in this level either have no prefix or carry one that refers to a characteristic other than altitude. Clouds that form in the low level of the troposphere are generally of larger structure than those that form in the middle and high levels, so they can usually be identified by their forms and genus types using satellite photography alone.[40]
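A minimal sketch of how the base-height ranges quoted above could be turned into a lookup table (the figures are the ones given in this section; the zone labels and the choice of returning every matching level are illustrative, since the published ranges overlap):

# Base-height ranges in metres for each level, per latitudinal zone, as quoted above.
LEVEL_RANGES_M = {
    "polar":     {"high": (3000, 7600),  "middle": (2000, 4000), "low": (0, 2000)},
    "temperate": {"high": (5000, 12200), "middle": (2000, 7000), "low": (0, 2000)},
    "tropical":  {"high": (6100, 18300), "middle": (2000, 7600), "low": (0, 2000)},
}

def possible_levels(base_height_m, zone="temperate"):
    # Return every level whose quoted base range contains the given height;
    # the ranges overlap, so more than one level can match.
    return [level for level, (lo, hi) in LEVEL_RANGES_M[zone].items()
            if lo <= base_height_m <= hi]

print(possible_levels(6000, "temperate"))  # ['high', 'middle']
print(possible_levels(1500, "polar"))      # ['low']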
These clouds have low- to mid-level bases that form anywhere from near the surface to about 2,400 m (8,000 ft) and tops that can extend into the mid-altitude range and sometimes higher in the case of nimbostratus.
This is a diffuse, dark grey, multi-level stratiform layer with great horizontal extent and usually moderate to deep vertical development. It lacks towering structure and looks feebly illuminated from the inside.[58] Nimbostratus normally forms from mid-level altostratus, and develops at least moderate vertical extent[59][60] when the base subsides into the low level during precipitation that can reach moderate to heavy intensity. It achieves even greater vertical development when it simultaneously grows upward into the high level due to large-scale frontal or cyclonic lift.[61] The nimbo- prefix refers to its ability to produce continuous rain or snow over a wide area, especially ahead of a warm front.[62] This thick cloud layer may be accompanied by embedded towering cumuliform or cumulonimbiform types.[60][63] Meteorologists affiliated with the World Meteorological Organization (WMO) officially classify nimbostratus as mid-level for synoptic purposes while informally characterizing it as multi-level.[26] Independent meteorologists and educators appear split between those who largely follow the WMO model[59][60] and those who classify nimbostratus as low-level, despite its considerable vertical extent and its usual initial formation in the middle altitude range.[64][65]
These very large cumuliform and cumulonimbiform types have low- to mid-level cloud bases similar to those of the multi-level and moderate vertical types, and tops that nearly always extend into the high levels. They are required to be identified by their standard names or abbreviations in all aviation observations (METARs) and forecasts (TAFs) to warn pilots of possible severe weather and turbulence.[66]
Genus types are commonly divided into subtypes called species that indicate specific structural details which can vary according to the stability and windshear characteristics of the atmosphere at any given time and location. Despite this hierarchy, a particular species may be a subtype of more than one genus, especially if the genera are of the same physical form and are differentiated from each other mainly by altitude or level. There are a few species, each of which can be associated with genera of more than one physical form.[72] The species types are grouped below according to the physical forms and genera with which each is normally associated. The forms, genera, and species are listed in approximate ascending order of instability or convective activity.[28]
Of the stratiform group, high-level cirrostratus comprises two species. Cirrostratus nebulosus has a rather diffuse appearance lacking in structural detail.[73] Cirrostratus fibratus is a species made of semi-merged filaments that are transitional to or from cirrus.[74] Mid-level altostratus and multi-level nimbostratus always have a flat or diffuse appearance and are therefore not subdivided into species. Low stratus is of the species nebulosus[73] except when broken up into ragged sheets of stratus fractus (see below).[59][72][75]
Cirriform clouds have three non-convective species that can form in mostly stable airmass conditions. Cirrus fibratus comprise filaments that may be straight, wavy, or occasionally twisted by non-convective wind shear.[74] The species uncinus is similar but has upturned hooks at the ends. Cirrus spissatus appear as opaque patches that can show light grey shading.[72]
Stratocumuliform genus-types (cirrocumulus, altocumulus, and stratocumulus) that appear in mostly stable air have two species each. The stratiformis species normally occur in extensive sheets or in smaller patches where there is only minimal convective activity.[76] Clouds of the lenticularis species tend to have lens-like shapes tapered at the ends. They are most commonly seen as orographic mountain-wave clouds, but can occur anywhere in the troposphere where there is strong wind shear combined with sufficient airmass stability to maintain a generally flat cloud structure. These two species can be found in the high, middle, or low levels of the troposphere depending on the stratocumuliform genus or genera present at any given time.[59][72][75]
The species fractus shows variable instability because it can be a subdivision of genus-types of different physical forms that have different stability characteristics. This subtype can be in the form of ragged but mostly stable stratiform sheets (stratus fractus) or small ragged cumuliform heaps with somewhat greater instability (cumulus fractus).[72][75][77] When clouds of this species are associated with precipitating cloud systems of considerable vertical and sometimes horizontal extent, they are also classified as accessory clouds under the name pannus (see section on supplementary features).[78]
These species are subdivisions of genus types that can occur in partly unstable air. The species castellanus appears when a mostly stable stratocumuliform or cirriform layer becomes disturbed by localized areas of airmass instability, usually in the morning or afternoon. This results in the formation of cumuliform buildups of limited convection arising from a common stratiform base.[79] Castellanus resembles the turrets of a castle when viewed from the side, and can be found with stratocumuliform genera at any tropospheric altitude level and with limited-convective patches of high-level cirrus.[80] Tufted clouds of the more detached floccus species are subdivisions of genus-types which may be cirriform or stratocumuliform in overall structure. They are sometimes seen with cirrus, cirrocumulus, altocumulus, and stratocumulus.[81]
A newly recognized species of stratocumulus or altocumulus has been given the name volutus, a roll cloud that can occur ahead of a cumulonimbus formation.[82] There are some volutus clouds that form as a consequence of interactions with specific geographical features rather than with a parent cloud. Perhaps the strangest geographically specific cloud of this type is the Morning Glory, a rolling cylindrical cloud that appears unpredictably over the Gulf of Carpentaria in Northern Australia. Associated with a powerful "ripple" in the atmosphere, the cloud may be "surfed" in glider aircraft.[83]
More general airmass instability in the troposphere tends to produce clouds of the more freely convective cumulus genus type, whose species are mainly indicators of degrees of atmospheric instability and resultant vertical development of the clouds. A cumulus cloud initially forms in the low level of the troposphere as a cloudlet of the species humilis that shows only slight vertical development. If the air becomes more unstable, the cloud tends to grow vertically into the species mediocris, then congestus, the tallest cumulus species[72] which is the same type that the International Civil Aviation Organization refers to as 'towering cumulus'.[66]
With highly unstable atmospheric conditions, large cumulus may continue to grow into cumulonimbus calvus (essentially a very tall congestus cloud that produces thunder), then ultimately into the species capillatus when supercooled water droplets at the top of the cloud turn into ice crystals giving it a cirriform appearance.[72][75]
Genus and species types are further subdivided into varieties whose names can appear after the species name to provide a fuller description of a cloud. Some cloud varieties are not restricted to a specific altitude level or form, and can therefore be common to more than one genus or species.[84]
All cloud varieties fall into one of two main groups. One group identifies the opacities of particular low and mid-level cloud structures and comprises the varieties translucidus (thin translucent), perlucidus (thick opaque with translucent or very small clear breaks), and opacus (thick opaque). These varieties are always identifiable for cloud genera and species with variable opacity. All three are associated with the stratiformis species of altocumulus and stratocumulus. However, only two varieties are seen with altostratus and stratus nebulosus whose uniform structures prevent the formation of a perlucidus variety. Opacity-based varieties are not applied to high clouds because they are always translucent, or in the case of cirrus spissatus, always opaque.[84][85]
A second group describes the occasional arrangements of cloud structures into particular patterns that are discernible by a surface-based observer (cloud fields usually being visible only from a significant altitude above the formations). These varieties are not always present with the genera and species with which they are otherwise associated, but only appear when atmospheric conditions favor their formation. Intortus and vertebratus varieties occur on occasion with cirrus fibratus. They are respectively filaments twisted into irregular shapes, and those that are arranged in fishbone patterns, usually by uneven wind currents that favor the formation of these varieties. The variety radiatus is associated with cloud rows of a particular type that appear to converge at the horizon. It is sometimes seen with the fibratus and uncinus species of cirrus, the stratiformis species of altocumulus and stratocumulus, the mediocris and sometimes humilis species of cumulus,[87][88] and with the genus altostratus.[89]
Another variety, duplicatus (closely spaced layers of the same type, one above the other), is sometimes found with cirrus of both the fibratus and uncinus species, and with altocumulus and stratocumulus of the species stratiformis and lenticularis. The variety undulatus (having a wavy undulating base) can occur with any clouds of the species stratiformis or lenticularis, and with altostratus. It is only rarely observed with stratus nebulosus. The variety lacunosus is caused by localized downdrafts that create circular holes in the form of a honeycomb or net. It is occasionally seen with cirrocumulus and altocumulus of the species stratiformis, castellanus, and floccus, and with stratocumulus of the species stratiformis and castellanus.[84][85]
It is possible for some species to show combined varieties at one time, especially if one variety is opacity-based and the other is pattern-based. An example of this would be a layer of altocumulus stratiformis arranged in seemingly converging rows separated by small breaks. The full technical name of a cloud in this configuration would be altocumulus stratiformis radiatus perlucidus, which would identify respectively its genus, species, and two combined varieties.[75][84][85]
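As a toy illustration of how such a full technical name is assembled from its parts (a formatting sketch only; it does not check which genus, species, and variety combinations are meteorologically valid):

def full_cloud_name(genus, species=None, varieties=()):
    # Assemble a technical name in the order genus, species, varieties,
    # e.g. 'altocumulus stratiformis radiatus perlucidus'.
    parts = [genus]
    if species:
        parts.append(species)
    parts.extend(varieties)
    return " ".join(parts)

print(full_cloud_name("altocumulus", "stratiformis", ["radiatus", "perlucidus"]))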
Supplementary features and accessory clouds are not further subdivisions of cloud types below the species and variety level. Rather, they are either hydrometeors or special cloud types with their own Latin names that form in association with certain cloud genera, species, and varieties.[75][85] Supplementary features, whether in the form of clouds or precipitation, are directly attached to the main genus-cloud. Accessory clouds, by contrast, are generally detached from the main cloud.[90]
One group of supplementary features are not actual cloud formations, but precipitation that falls when water droplets or ice crystals that make up visible clouds have grown too heavy to remain aloft. Virga is a feature seen with clouds producing precipitation that evaporates before reaching the ground, these being of the genera cirrocumulus, altocumulus, altostratus, nimbostratus, stratocumulus, cumulus, and cumulonimbus.[90]
When the precipitation reaches the ground without completely evaporating, it is designated as the feature praecipitatio.[91] This normally occurs with altostratus opacus, which can produce widespread but usually light precipitation, and with thicker clouds that show significant vertical development. Of the latter, upward-growing cumulus mediocris produces only isolated light showers, while downward-growing nimbostratus is capable of heavier, more extensive precipitation. Towering vertical clouds have the greatest ability to produce intense precipitation events, but these tend to be localized unless organized along fast-moving cold fronts. Showers of moderate to heavy intensity can fall from cumulus congestus clouds. Cumulonimbus, the largest of all cloud genera, has the capacity to produce very heavy showers. Low stratus clouds usually produce only light precipitation, but this always occurs as the feature praecipitatio because this cloud genus lies too close to the ground to allow for the formation of virga.[75][85][90]
Incus is the most type-specific supplementary feature, seen only with cumulonimbus of the species capillatus. A cumulonimbus incus cloud top is one that has spread out into a clear anvil shape as a result of rising air currents hitting the stability layer at the tropopause where the air no longer continues to get colder with increasing altitude.[92]
The mamma feature forms on the bases of clouds as downward-facing bubble-like protuberances caused by localized downdrafts within the cloud. It is also sometimes called mammatus, an earlier version of the term used before a standardization of Latin nomenclature brought about by the World Meteorological Organization during the 20th century. The best-known is cumulonimbus with mammatus, but the mamma feature is also seen occasionally with cirrus, cirrocumulus, altocumulus, altostratus, and stratocumulus.[90]
A tuba feature is a cloud column that may hang from the bottom of a cumulus or cumulonimbus. A newly formed or poorly organized column might be comparatively benign, but can quickly intensify into a funnel cloud or tornado.[90][93][94]
An arcus feature is a roll cloud with ragged edges attached to the lower front part of cumulus congestus or cumulonimbus that forms along the leading edge of a squall line or thunderstorm outflow.[95] A large arcus formation can have the appearance of a dark menacing arch.[90]
Several new supplementary features have been formally recognized by the World Meteorological Organization (WMO). The feature fluctus can form under conditions of strong atmospheric wind shear when a stratocumulus, altocumulus, or cirrus cloud breaks into regularly spaced crests. This variant is sometimes known informally as a Kelvin–Helmholtz (wave) cloud. This phenomenon has also been observed in cloud formations over other planets and even in the sun's atmosphere.[96] Another highly disturbed but more chaotic wave-like cloud feature associated with stratocumulus or altocumulus cloud has been given the Latin name asperitas. The supplementary feature cavum is a circular fall-streak hole that occasionally forms in a thin layer of supercooled altocumulus or cirrocumulus. Fall streaks consisting of virga or wisps of cirrus are usually seen beneath the hole as ice crystals fall out to a lower altitude. This type of hole is usually larger than typical lacunosus holes. A murus feature is a cumulonimbus wall cloud with a lowering, rotating cloud base that can lead to the development of tornadoes. A cauda feature is a tail cloud that extends horizontally away from the murus cloud and is the result of air feeding into the storm.[82]
Supplementary cloud formations detached from the main cloud are known as accessory clouds.[75][85][90] The heavier precipitating clouds, nimbostratus, towering cumulus (cumulus congestus), and cumulonimbus typically see the formation in precipitation of the pannus feature, low ragged clouds of the genera and species cumulus fractus or stratus fractus.[78]
A group of accessory clouds comprise formations that are associated mainly with upward-growing cumuliform and cumulonimbiform clouds of free convection. Pileus is a cap cloud that can form over a cumulonimbus or large cumulus cloud,[97] whereas a velum feature is a thin horizontal sheet that sometimes forms like an apron around the middle or in front of the parent cloud.[90] An accessory cloud recently recognized officially by the World Meteorological Organization is the flumen, also known more informally as the beaver's tail. It is formed by the warm, humid inflow of a super-cell thunderstorm, and can be mistaken for a tornado. Although the flumen can indicate a tornado risk, it is similar in appearance to pannus or scud clouds and does not rotate.[82]
Clouds initially form in clear air or become clouds when fog rises above surface level. The genus of a newly formed cloud is determined mainly by air mass characteristics such as stability and moisture content. If these characteristics change over time, the genus tends to change accordingly. When this happens, the original genus is called a mother cloud. If the mother cloud retains much of its original form after the appearance of the new genus, it is termed a genitus cloud. One example of this is stratocumulus cumulogenitus, a stratocumulus cloud formed by the partial spreading of a cumulus type when there is a loss of convective lift. If the mother cloud undergoes a complete change in genus, it is considered to be a mutatus cloud.[98]
The genitus and mutatus categories have been expanded to include certain types that do not originate from pre-existing clouds. The term flammagenitus (Latin for 'fire-made') applies to cumulus congestus or cumulonimbus that are formed by large scale fires or volcanic eruptions. Smaller low-level "pyrocumulus" or "fumulus" clouds formed by contained industrial activity are now classified as cumulus homogenitus (Latin for 'man-made'). Contrails formed from the exhaust of aircraft flying in the upper level of the troposphere can persist and spread into formations resembling cirrus which are designated cirrus homogenitus. If a cirrus homogenitus cloud changes fully to any of the high-level genera, they are termed cirrus, cirrostratus, or cirrocumulus homomutatus. Stratus cataractagenitus (Latin for 'cataract-made') are generated by the spray from waterfalls. Silvagenitus (Latin for 'forest-made') is a stratus cloud that forms as water vapor is added to the air above a forest canopy.[98]
Stratocumulus clouds can be organized into "fields" that take on certain specially classified shapes and characteristics. In general, these fields are more discernible from high altitudes than from ground level. They can often be found in the following forms:
These patterns are formed from a phenomenon known as a Kármán vortex, which is named after the engineer and fluid dynamicist Theodore von Kármán.[101] Wind-driven clouds can form into parallel rows that follow the wind direction. When the wind and clouds encounter high-elevation land features such as vertically prominent islands, they can form eddies around the high land masses that give the clouds a twisted appearance.[102]
Although the local distribution of clouds can be significantly influenced by topography, the global prevalence of cloud cover in the troposphere tends to vary more by latitude. It is most prevalent in and along low pressure zones of surface tropospheric convergence which encircle the Earth close to the equator and near the 50th parallels of latitude in the northern and southern hemispheres.[105] The adiabatic cooling processes that lead to the creation of clouds by way of lifting agents are all associated with convergence; a process that involves the horizontal inflow and accumulation of air at a given location, as well as the rate at which this happens.[106] Near the equator, increased cloudiness is due to the presence of the low-pressure Intertropical Convergence Zone (ITCZ) where very warm and unstable air promotes mostly cumuliform and cumulonimbiform clouds.[107] Clouds of virtually any type can form along the mid-latitude convergence zones depending on the stability and moisture content of the air. These extratropical convergence zones are occupied by the polar fronts where air masses of polar origin meet and clash with those of tropical or subtropical origin.[108] This leads to the formation of weather-making extratropical cyclones composed of cloud systems that may be stable or unstable to varying degrees according to the stability characteristics of the various airmasses that are in conflict.[109]
Divergence is the opposite of convergence. In the Earth's troposphere, it involves the horizontal outflow of air from the upper part of a rising column of air, or from the lower part of a subsiding column often associated with an area or ridge of high pressure.[106] Cloudiness tends to be least prevalent near the poles and in the subtropics close to the 30th parallels, north and south. The latter are sometimes referred to as the horse latitudes. The presence of a large-scale high-pressure subtropical ridge on each side of the equator reduces cloudiness at these low latitudes.[110] Similar patterns also occur at higher latitudes in both hemispheres.[111]
The luminance or brightness of a cloud is determined by how light is reflected, scattered, and transmitted by the cloud's particles. Its brightness may also be affected by the presence of haze or photometeors such as halos and rainbows.[112] In the troposphere, dense, deep clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top.[113] Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the cloud. As a result, the cloud base can vary from very light to very dark grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. High thin tropospheric clouds reflect less light because of the comparatively low concentration of constituent ice crystals or supercooled water droplets which results in a slightly off-white appearance. However, a thick dense ice-crystal cloud appears brilliant white with pronounced grey shading because of its greater reflectivity.[112]
As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets. If the droplets become too large and heavy to be kept aloft by the air circulation, they will fall from the cloud as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, a percentage of the light that enters the cloud is not reflected back out but is absorbed, giving the cloud a darker look. A simple example of this is that one can see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black.[114]
Striking cloud colorations can be seen at any altitude, with the color of a cloud usually being the same as the incident light.[115] During daytime when the sun is relatively high in the sky, tropospheric clouds generally appear bright white on top with varying shades of grey underneath. Thin clouds may look white or appear to have acquired the color of their environment or background. Red, orange, and pink clouds occur almost entirely at sunrise/sunset and are the result of the scattering of sunlight by the atmosphere. When the sun is just below the horizon, low-level clouds are gray, middle clouds appear rose-colored, and high clouds are white or off-white. Clouds at night are black or dark grey in a moonless sky, or whitish when illuminated by the moon. They may also reflect the colors of large fires, city lights, or auroras that might be present.[115]
A cumulonimbus cloud that appears to have a greenish or bluish tint is a sign that it contains extremely high amounts of water: hail or rain that scatter light in a way that gives the cloud a blue color. A green colorization occurs mostly late in the day when the sun is comparatively low in the sky and the incident sunlight has a reddish tinge that appears green when illuminating a very tall bluish cloud. Supercell-type storms are more likely to be characterized by this, but any storm can appear this way. Such coloration does not directly indicate a severe thunderstorm; it only confirms the potential for one, since a green or blue tint signifies copious amounts of water, a strong updraft to support it, high winds from the storm raining out, and wet hail, all elements that improve the chance of the storm becoming severe. In addition, the stronger the updraft is, the more likely the storm is to undergo tornadogenesis and to produce large hail and high winds.[116]
Yellowish clouds may be seen in the troposphere in the late spring through early fall months during forest fire season, the yellow color being due to the presence of pollutants in the smoke. Yellowish clouds caused by the presence of nitrogen dioxide are also sometimes seen in urban areas with high air pollution levels.[117]
Stratocumulus stratiformis and small castellanus made orange by the sun rising
An occurrence of cloud iridescence with altocumulus volutus and cirrocumulus stratiformis
Sunset reflecting shades of pink onto grey stratocumulus stratiformis translucidus (becoming perlucidus in the background)
Stratocumulus stratiformis perlucidus before sunset. Bangalore, India.
Late-summer rainstorm in Denmark. Nearly black color of base indicates main cloud in foreground probably cumulonimbus.
Particles in the atmosphere and the sun's angle enhance colors of stratocumulus cumulogenitus at evening twilight
Tropospheric clouds exert numerous influences on Earth's troposphere and climate. First and foremost, they are the source of precipitation, thereby greatly influencing the distribution and amount of precipitation. Because of their differential buoyancy relative to surrounding cloud-free air, clouds can be associated with vertical motions of the air that may be convective, frontal, or cyclonic. The motion is upward if the clouds are less dense because condensation of water vapor releases heat, warming the air and thereby decreasing its density. This can lead to downward motion because lifting of the air results in cooling that increases its density. All of these effects are subtly dependent on the vertical temperature and moisture structure of the atmosphere and result in major redistribution of heat that affect the Earth's climate.[118]
The complexity and diversity of clouds in the troposphere is a major reason for difficulty in quantifying the effects of clouds on climate and climate change. On the one hand, white cloud tops promote cooling of Earth's surface by reflecting shortwave radiation (visible and near infrared) from the sun, diminishing the amount of solar radiation that is absorbed at the surface, enhancing the Earth's albedo. Most of the sunlight that reaches the ground is absorbed, warming the surface, which emits radiation upward at longer, infrared, wavelengths. At these wavelengths, however, water in the clouds acts as an efficient absorber. The water reacts by radiating, also in the infrared, both upward and downward, and the downward longwave radiation results in increased warming at the surface. This is analogous to the greenhouse effect of greenhouse gases and water vapor.[118]
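The two competing effects can be reduced to a simple sign convention, sketched below with placeholder numbers rather than measured values; it only illustrates the bookkeeping, not actual cloud radiative forcing data.

def net_cloud_radiative_effect_wm2(shortwave_cooling_wm2, longwave_warming_wm2):
    # Net effect of a cloud layer on the surface energy budget in W/m^2:
    # reflected shortwave radiation is entered as a positive cooling term and
    # downward longwave re-emission as a positive warming term.
    # A negative result means the cloud cools on balance; positive means it warms.
    return longwave_warming_wm2 - shortwave_cooling_wm2

# Placeholder example: a hypothetical bright low cloud that reflects strongly
# relative to the extra longwave radiation it sends back down.
print(net_cloud_radiative_effect_wm2(shortwave_cooling_wm2=50.0, longwave_warming_wm2=30.0))  # -20.0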
High-level genus-types particularly show this duality with both short-wave albedo cooling and long-wave greenhouse warming effects. On the whole, ice-crystal clouds in the upper troposphere (cirrus) tend to favor net warming.[119][120] However, the cooling effect is dominant with mid-level and low clouds, especially when they form in extensive sheets.[119] Measurements by NASA indicate that on the whole, the effects of low and mid-level clouds that tend to promote cooling outweigh the warming effects of high layers and the variable outcomes associated with vertically developed clouds.[119]
As difficult as it is to evaluate the influences of current clouds on current climate, it is even more problematic to predict changes in cloud patterns and properties in a future, warmer climate, and the resultant cloud influences on future climate. In a warmer climate more water would enter the atmosphere by evaporation at the surface; as clouds are formed from water vapor, cloudiness would be expected to increase. But in a warmer climate, higher temperatures would tend to evaporate clouds.[121] Both of these statements are considered accurate, and both phenomena, known as cloud feedbacks, are found in climate model calculations. Broadly speaking, if clouds, especially low clouds, increase in a warmer climate, the resultant cooling effect leads to a negative feedback in climate response to increased greenhouse gases. But if low clouds decrease, or if high clouds increase, the feedback is positive. Differing amounts of these feedbacks are the principal reason for differences in climate sensitivities of current global climate models. As a consequence, much research has focused on the response of low and vertical clouds to a changing climate. Leading global models produce quite different results, however, with some showing increasing low clouds and others showing decreases.[122][123] For these reasons the role of tropospheric clouds in regulating weather and climate remains a leading source of uncertainty in global warming projections.[124][125]
Polar stratospheric clouds (PSCs) form in the lowest part of the stratosphere during the winter, at the altitude and during the season that produces the coldest temperatures and therefore the best chances of triggering condensation caused by adiabatic cooling. Moisture is scarce in the stratosphere, so nacreous and non-nacreous cloud at this altitude range is restricted to polar regions in the winter where the air is coldest.[6]
PSCs show some variation in structure according to their chemical makeup and atmospheric conditions, but are limited to a single very high range of altitude of about 15,000–25,000 m (49,200–82,000 ft), so they are not classified into altitude levels, genus types, species, or varieties. There is no Latin nomenclature in the manner of tropospheric clouds, but rather descriptive names using common English.[6]
Supercooled nitric acid and water PSCs, sometimes known as type 1, typically have a stratiform appearance resembling cirrostratus or haze, but because they are not frozen into crystals, do not show the pastel colours of the nacreous types. This type of PSC has been identified as a cause of ozone depletion in the stratosphere.[126] The frozen nacreous types are typically very thin with mother-of-pearl colorations and an undulating cirriform or lenticular (stratocumuliform) appearance. These are sometimes known as type 2.[127][128]
Polar mesospheric clouds form at an extreme-level altitude range of about 80 to 85 km (50 to 53 mi). They are given the Latin name noctilucent because of their illumination well after sunset and before sunrise. They typically have a bluish or silvery white coloration that can resemble brightly illuminated cirrus. Noctilucent clouds may occasionally take on more of a red or orange hue.[6] They are not common or widespread enough to have a significant effect on climate.[129] However, an increasing frequency of occurrence of noctilucent clouds since the 19th century may be the result of climate change.[130]
Noctilucent clouds are the highest in the atmosphere and form near the top of the mesosphere at about ten times the altitude of tropospheric high clouds.[131] From ground level, they can occasionally be seen illuminated by the sun during deep twilight. Ongoing research indicates that convective lift in the mesosphere is strong enough during the polar summer to cause adiabatic cooling of a small amount of water vapour to the point of saturation. This tends to produce the coldest temperatures in the entire atmosphere just below the mesopause. These conditions result in the best environment for the formation of polar mesospheric clouds.[129] There is also evidence that smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent cloud.[132]
Noctilucent clouds have four major types based on physical structure and appearance. Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus or poorly defined cirrus.[133] Type II bands are long streaks that often occur in groups arranged roughly parallel to each other. They are usually more widely spaced than the bands or elements seen with cirrocumulus clouds.[134] Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus.[135] Type IV whirls are partial or, more rarely, complete rings of cloud with dark centres.[136]
Distribution in the mesosphere is similar to the stratosphere except at much higher altitudes. Because of the need for maximum cooling of the water vapor to produce noctilucent clouds, their distribution tends to be restricted to polar regions of Earth. A major seasonal difference is that convective lift from below the mesosphere pushes very scarce water vapor to higher colder altitudes required for cloud formation during the respective summer seasons in the northern and southern hemispheres. Sightings are rare more than 45 degrees south of the north pole or north of the south pole.[6]
Cloud cover has been seen on most other planets in the Solar System. Venus's thick clouds are composed of sulfur dioxide (due to volcanic activity) and appear to be almost entirely stratiform.[137] They are arranged in three main layers at altitudes of 45 to 65 km that obscure the planet's surface and can produce virga. No embedded cumuliform types have been identified, but broken stratocumuliform wave formations are sometimes seen in the top layer that reveal more continuous layer clouds underneath.[138] On Mars, noctilucent, cirrus, cirrocumulus and stratocumulus composed of water-ice have been detected mostly near the poles.[139][140] Water-ice fogs have also been detected on Mars.[141]
Both Jupiter and Saturn have an outer cirriform cloud deck composed of ammonia,[142][143] an intermediate stratiform haze-cloud layer made of ammonium hydrosulfide, and an inner deck of cumulus water clouds.[144][145] Embedded cumulonimbus are known to exist near the Great Red Spot on Jupiter.[146][147] The same category-types can be found covering Uranus and Neptune, but are all composed of methane.[148][149][150][151][152][153] Saturn's moon Titan has cirrus clouds believed to be composed largely of methane.[154][155] The Cassini–Huygens Saturn mission uncovered evidence of polar stratospheric clouds[156] and a methane cycle on Titan, including lakes near the poles and fluvial channels on the surface of the moon.[157]
Some planets outside the Solar System are known to have atmospheric clouds. In October 2013, the detection of high altitude optically thick clouds in the atmosphere of exoplanet Kepler-7b was announced,[158][159] and, in December 2013, in the atmospheres of GJ 436 b and GJ 1214 b.[160][161][162][163]
Clouds play an important role in various cultures and religious traditions. The ancient Akkadians believed that the clouds were the breasts of the sky goddess Antu[165] and that rain was milk from her breasts.[165] In Exodus 13:21–22, Yahweh is described as guiding the Israelites through the desert in the form of a "pillar of cloud" by day and a "pillar of fire" by night.[164]
In the ancient Greek comedy The Clouds, written by Aristophanes and first performed at the City Dionysia in 423 BC, the philosopher Socrates declares that the Clouds are the only true deities[166] and tells the main character Strepsiades not to worship any deities other than the Clouds, but to pay homage to them alone.[166] In the play, the Clouds change shape to reveal the true nature of whoever is looking at them,[167][166][168] turning into centaurs at the sight of a long-haired politician, wolves at the sight of the embezzler Simon, deer at the sight of the coward Cleonymus, and mortal women at the sight of the effeminate informer Cleisthenes.[167][168][166] They are hailed the source of inspiration to comic poets and philosophers;[166] they are masters of rhetoric, regarding eloquence and sophistry alike as their "friends".[166]
In China, clouds are symbols of luck and happiness.[169] Overlapping clouds are thought to imply eternal happiness[169] and clouds of different colors are said to indicate "multiplied blessings".[169]
en/1152.html.txt
ADDED
@@ -0,0 +1,190 @@
Vatican City /ˈvætɪkən/ (listen), officially the Vatican City State (Italian: Stato della Città del Vaticano;[g] Latin: Status Civitatis Vaticanae),[h][i] is the Holy See's independent city-state enclaved within Rome, Italy.[12] Vatican City became independent from Italy with the Lateran Treaty (1929), and it is a distinct territory under "full ownership, exclusive dominion, and sovereign authority and jurisdiction" of the Holy See, itself a sovereign entity of international law, which maintains the city state's temporal, diplomatic, and spiritual independence.[j][13] With an area of 49 hectares (121 acres)[b] and a population of about 805,[c] it is the smallest sovereign state in the world by both area and population.[14]
As governed by the Holy See, the Vatican City is an ecclesiastical or sacerdotal-monarchical state (a type of theocracy) ruled by the pope, who is the bishop of Rome and head of the Catholic Church.[3][15] The highest state functionaries are all Catholic clergy of various national origins. Except for the Avignon Papacy (1309–1437), the popes have generally resided at the Apostolic Palace within what is now Vatican City, although at times residing instead in the Quirinal Palace in Rome or elsewhere.
The Holy See dates back to Early Christianity and is the principal episcopal see of the Catholic Church, with approximately 1.313 billion baptised Catholic Christians in the world as of 2017[update] in the Latin Church and 23 Eastern Catholic Churches.[16] The independent Vatican City-state, on the other hand, came into existence on 11 February 1929 by the Lateran Treaty between the Holy See and Italy, which spoke of it as a new creation,[17] not as a vestige of the much larger Papal States (756–1870), which had previously encompassed much of central Italy.
Within the Vatican City are religious and cultural sites such as St. Peter's Basilica, the Sistine Chapel, and the Vatican Museums. They feature some of the world's most famous paintings and sculptures. The unique economy of Vatican City is supported financially by the sale of postage stamps and souvenirs, fees for admission to museums, and sales of publications.
The name Vatican City was first used in the Lateran Treaty, signed on 11 February 1929, which established the modern city-state named after Vatican Hill, the geographic location of the state. "Vatican" is derived from the name of an Etruscan settlement, Vatica or Vaticum located in the general area the Romans called Ager Vaticanus, "Vatican territory".[18]
The official Italian name of the city is Città del Vaticano or, more formally, Stato della Città del Vaticano, meaning "Vatican City State". Although the Holy See (which is distinct from the Vatican City) and the Catholic Church use Ecclesiastical Latin in official documents, the Vatican City uses Italian.[citation needed] The Latin name is Status Civitatis Vaticanae;[19][20] this is used in official documents by the Holy See, the Church and the Pope.
The name "Vatican" was already in use in the time of the Roman Republic for the Ager Vaticanus, a marshy area on the west bank of the Tiber across from the city of Rome, located between the Janiculum, the Vatican Hill and Monte Mario, down to the Aventine Hill and up to the confluence of the Cremera creek.[21]
Because of its proximity to the Romans' arch-rival, the Etruscan city of Veii (another name for the Ager Vaticanus was Ripa Veientana or Ripa Etrusca), and because it was subject to flooding by the Tiber, the Romans considered this originally uninhabited part of Rome unsalubrious and ominous.[22]
The particularly low quality of Vatican wine, even after the reclamation of the area, was commented on by the poet Martial (40 – between 102 and 104 AD).[23] Tacitus wrote that in AD 69, the Year of the Four Emperors, when the northern army that brought Vitellius to power arrived in Rome, "a large proportion camped in the unhealthy districts of the Vatican, which resulted in many deaths among the common soldiery; and the Tiber being close by, the inability of the Gauls and Germans to bear the heat and the consequent greed with which they drank from the stream weakened their bodies, which were already an easy prey to disease".[24]
The toponym Ager Vaticanus is attested until the 1st century AD: afterwards, another toponym appeared, Vaticanus, denoting an area much more restricted: the Vatican hill, today's St. Peter's Square, and possibly today's Via della Conciliazione.[21]
Under the Roman Empire, many villas were constructed there, after Agrippina the Elder (14 BC–18 October AD 33) drained the area and laid out her gardens in the early 1st century AD. In AD 40, her son, Emperor Caligula (31 August AD 12–24 January AD 41; r. 37–41) built in her gardens a circus for charioteers (AD 40) that was later completed by Nero, the Circus Gaii et Neronis,[25] usually called, simply, the Circus of Nero.[26]
The Vatican Obelisk was originally taken by Caligula from Heliopolis in Egypt to decorate the spina of his circus and is thus its last visible remnant.[27] This area became the site of martyrdom of many Christians after the Great Fire of Rome in AD 64. Ancient tradition holds that it was in this circus that Saint Peter was crucified upside-down.[28]
Opposite the circus was a cemetery separated by the Via Cornelia. Funeral monuments and mausoleums, and small tombs, as well as altars to pagan gods of all kinds of polytheistic religions, were constructed lasting until before the construction of the Constantinian Basilica of St. Peter in the first half of the 4th century. A shrine dedicated to the Phrygian goddess Cybele and her consort Attis remained active long after the ancient Basilica of St. Peter was built nearby.[29]
Remains of this ancient necropolis were brought to light sporadically during renovations by various popes throughout the centuries, increasing in frequency during the Renaissance until it was systematically excavated by orders of Pope Pius XII from 1939 to 1941. The Constantinian basilica was built in 326 over what was believed to be the tomb of Saint Peter, buried in that cemetery.[30]
From then on, the area became more populated in connection with activity at the basilica. A palace was constructed nearby as early as the 5th century during the pontificate of Pope Symmachus (reigned 498–514).[31]
Popes gradually came to have a secular role as governors of regions near Rome. They ruled the Papal States, which covered a large portion of the Italian peninsula, for more than a thousand years until the mid-19th century, when all the territory belonging to the papacy was seized by the newly created Kingdom of Italy.
For most of this time the popes did not live at the Vatican. The Lateran Palace, on the opposite side of Rome, was their habitual residence for about a thousand years. From 1309 to 1377, they lived at Avignon in France. On their return to Rome they chose to live at the Vatican. They moved to the Quirinal Palace in 1583, after work on it was completed under Pope Paul V (1605–1621), but on the capture of Rome in 1870 retired to the Vatican, and what had been their residence became that of the King of Italy.
In 1870, the Pope's holdings were left in an uncertain situation when Rome itself was annexed by the Piedmont-led forces which had united the rest of Italy, after a nominal resistance by the papal forces. Between 1861 and 1929 the status of the Pope was referred to as the "Roman Question".
Italy made no attempt to interfere with the Holy See within the Vatican walls. However, it confiscated church property in many places. In 1871, the Quirinal Palace was confiscated by the King of Italy and became the royal palace. Thereafter, the popes resided undisturbed within the Vatican walls, and certain papal prerogatives were recognized by the Law of Guarantees, including the right to send and receive ambassadors. But the Popes did not recognise the Italian king's right to rule in Rome, and they refused to leave the Vatican compound until the dispute was resolved in 1929; Pope Pius IX (1846–1878), the last ruler of the Papal States, was referred to as a "prisoner in the Vatican". Forced to give up secular power, the popes focused on spiritual issues.[32]
This situation was resolved on 11 February 1929, when the Lateran Treaty between the Holy See and the Kingdom of Italy was signed by Prime Minister and Head of Government Benito Mussolini on behalf of King Victor Emmanuel III and by Cardinal Secretary of State Pietro Gasparri for Pope Pius XI.[17][13][33] The treaty, which became effective on 7 June 1929, established the independent state of Vatican City and reaffirmed the special status of Catholic Christianity in Italy.[34]
The Holy See, which ruled Vatican City, pursued a policy of neutrality during World War II, under the leadership of Pope Pius XII. Although German troops occupied the city of Rome after the September 1943 Armistice of Cassibile, and the Allies from 1944, they respected Vatican City as neutral territory.[35] One of the main diplomatic priorities of the bishop of Rome was to prevent the bombing of the city; so sensitive was the pontiff that he protested even the British air dropping of pamphlets over Rome, claiming that the few landing within the city-state violated the Vatican's neutrality.[36] The British policy, as expressed in the minutes of a Cabinet meeting, was: "that we should on no account molest the Vatican City, but that our action as regards the rest of Rome would depend upon how far the Italian government observed the rules of war".[36]
After the US entered into the war, the US opposed such a bombing, fearful of offending Catholic members of its military forces, but said that "they could not stop the British from bombing Rome if the British so decided". The US military even exempted Catholic pilots and crew from air raids on Rome and other Church holdings, unless voluntarily agreed upon. Notably, with the exception of Rome, and presumably the possibility of the Vatican, no Catholic US pilot or air crew refused a mission within German-held Italy. The British uncompromisingly said "they would bomb Rome whenever the needs of the war demanded".[37] In December 1942, the UK's envoy suggested to the Holy See that Rome be declared an "open city", a suggestion that the Holy See took more seriously than was probably meant by the UK, who did not want Rome to be an open city, but Mussolini rejected the suggestion when the Holy See put it to him. In connection with the Allied invasion of Sicily, 500 US aircraft bombed Rome on 19 July 1943, aiming particularly at the railway hub. Some 1,500 people were killed; Pius XII himself, who had been described in the previous month as "worried sick" about the possible bombing, viewed the aftermath. Another raid took place on 13 August 1943, after Mussolini had been ousted from power.[38] On the following day, the new government declared Rome an open city, after consulting the Holy See on the wording of the declaration, but the UK had decided that they would never recognize Rome as an open city.[39]
Pius XII had refrained from creating cardinals during the war. By the end of World War II, there were several prominent vacancies: Cardinal Secretary of State, Camerlengo, Chancellor, and Prefect for the Congregation for the Religious among them.[40] Pius XII created 32 cardinals in early 1946, having announced his intentions to do so in his preceding Christmas message.
The Pontifical Military Corps, except for the Swiss Guard, was disbanded by will of Paul VI, as expressed in a letter of 14 September 1970.[41] The Gendarmerie Corps was transformed into a civilian police and security force.
In 1984, a new concordat between the Holy See and Italy modified certain provisions of the earlier treaty, including the position of Catholic Christianity as the Italian state religion, a position given to it by a statute of the Kingdom of Sardinia of 1848.[34]
Construction in 1995 of a new guest house, Domus Sanctae Marthae, adjacent to St Peter's Basilica was criticized by Italian environmental groups, backed by Italian politicians. They claimed the new building would block views of the Basilica from nearby Italian apartments.[42] For a short while the plans strained the relations between the Vatican and the Italian government. The head of the Vatican's Department of Technical Services robustly rejected challenges to the Vatican State's right to build within its borders.[42]
The territory of Vatican City is part of the Vatican Hill, and of the adjacent former Vatican Fields. It is in this territory that St. Peter's Basilica, the Apostolic Palace, the Sistine Chapel, and museums were built, along with various other buildings. The area was part of the Roman rione of Borgo until 1929. Being separated from the city, on the west bank of the river Tiber, the area was an outcrop of the city that was protected by being included within the walls of Leo IV (847–855), and later expanded by the current fortification walls, built under Paul III (1534–1549), Pius IV (1559–1565), and Urban VIII (1623–1644).
When the Lateran Treaty of 1929 that gave the state its form was being prepared, the boundaries of the proposed territory were influenced by the fact that much of it was all but enclosed by this loop of walls. For some tracts of the frontier there was no wall, but the line of certain buildings supplied part of the boundary, and for a small part of the frontier a modern wall was constructed.
The territory includes St. Peter's Square, distinguished from the territory of Italy only by a white line along the limit of the square, where it touches Piazza Pio XII. St. Peter's Square is reached through the Via della Conciliazione which runs from close to the Tiber to St. Peter's. This grand approach was constructed by Benito Mussolini after the conclusion of the Lateran Treaty.
According to the Lateran Treaty, certain properties of the Holy See that are located in Italian territory, most notably the Papal Palace of Castel Gandolfo and the major basilicas, enjoy extraterritorial status similar to that of foreign embassies.[43][44] These properties, scattered all over Rome and Italy, house essential offices and institutions necessary to the character and mission of the Holy See.[44]
Castel Gandolfo and the named basilicas are patrolled internally by police agents of Vatican City State and not by Italian police. According to the Lateran Treaty (Art. 3) St. Peter's Square, up to but not including the steps leading to the basilica, is normally patrolled by the Italian police.[43]
There are no passport controls for visitors entering Vatican City from the surrounding Italian territory. There is free public access to Saint Peter's Square and Basilica and, on the occasion of papal general audiences, to the hall in which they are held. For these audiences and for major ceremonies in Saint Peter's Basilica and Square, tickets free of charge must be obtained beforehand. The Vatican Museums, incorporating the Sistine Chapel, usually charge an entrance fee. There is no general public access to the gardens, but guided tours for small groups can be arranged to the gardens and excavations under the basilica. Other places are open to only those individuals who have business to transact there.
Vatican City's climate is the same as Rome's: a temperate Mediterranean climate (Köppen Csa) with mild, rainy winters from October to mid-May and hot, dry summers from May to September. Some minor local features, principally mists and dews, are caused by the anomalous bulk of St Peter's Basilica, the elevation, the fountains, and the size of the large paved square.
In July 2007, the Vatican accepted a proposal by two firms based respectively in San Francisco and Budapest,[47] whereby it would become the first carbon neutral state by offsetting its carbon dioxide emissions with the creation of a Vatican Climate Forest in Hungary,[48] as a purely symbolic gesture[49] to encourage Catholics to do more to safeguard the planet.[50] Nothing came of the project.[51][52]
On 26 November 2008, the Vatican itself put into effect a plan announced in May 2007 to cover the roof of the Paul VI Audience Hall with solar panels.[53][54]
Within the territory of Vatican City are the Vatican Gardens (Italian: Giardini Vaticani),[55] which account for about half of this territory. The gardens, established during the Renaissance and Baroque era, are decorated with fountains and sculptures.
The gardens cover approximately 23 hectares (57 acres). The highest point is 60 metres (197 ft) above mean sea level. Stone walls bound the area in the north, south and west.
The gardens date back to medieval times when orchards and vineyards extended to the north of the Papal Apostolic Palace.[56] In 1279, Pope Nicholas III (Giovanni Gaetano Orsini, 1277–1280) moved his residence back to the Vatican from the Lateran Palace and enclosed this area with walls.[57] He planted an orchard (pomerium), a lawn (pratellum), and a garden (viridarium).[57]
The politics of Vatican City takes place within an absolute elective monarchy, in which the head of the Catholic Church holds power. The pope exercises principal legislative, executive, and judicial power over the State of Vatican City (an entity distinct from the Holy See), which is a rare case of a non-hereditary monarchy.[58]
Vatican City is one of the few widely recognized independent states that has not become a member of the United Nations.[59] The Holy See, which is distinct from Vatican City State, has permanent observer status with all the rights of a full member except for a vote in the UN General Assembly.
The government of Vatican City has a unique structure. The pope is the sovereign of the state. Legislative authority is vested in the Pontifical Commission for Vatican City State, a body of cardinals appointed by the pope for five-year periods. Executive power is in the hands of the president of that commission, assisted by the general secretary and deputy general secretary. The state's foreign relations are entrusted to the Holy See's Secretariat of State and diplomatic service. Nevertheless, the pope has absolute power in the executive, legislative, and judicial branches over Vatican City. He is the only absolute monarch in Europe.
There are departments that deal with health, security, telecommunications, etc.[60]
The Cardinal Camerlengo presides over the Apostolic Camera to which is entrusted the administration of the property and protection of other papal temporal powers and rights of the Holy See during the period of the empty throne or sede vacante (papal vacancy). Those of the Vatican State remain under the control of the Pontifical Commission for the State of Vatican City. Acting with three other cardinals chosen by lot every three days, one from each order of cardinals (cardinal bishop, cardinal priest, and cardinal deacon), he in a sense performs during that period the functions of head of state of Vatican City.[citation needed] All the decisions these four cardinals take must be approved by the College of Cardinals as a whole.
The nobility that was closely associated with the Holy See at the time of the Papal States continued to be associated with the Papal Court after the loss of these territories, generally with merely nominal duties (see Papal Master of the Horse, Prefecture of the Pontifical Household, Hereditary officers of the Roman Curia, Black Nobility). They also formed the ceremonial Noble Guard. In the first decades of the existence of the Vatican City State, executive functions were entrusted to some of them, including that of delegate for the State of Vatican City (now denominated president of the Commission for Vatican City). But with the motu proprio Pontificalis Domus of 28 March 1968,[61] Pope Paul VI abolished the honorary positions that had continued to exist until then, such as Quartermaster general and Master of the Horse.[62]
Vatican City State, created in 1929 by the Lateran Pacts, provides the Holy See with a temporal jurisdiction and independence within a small territory. It is distinct from the Holy See. The state can thus be deemed a significant but not essential instrument of the Holy See. The Holy See itself has existed continuously as a juridical entity since Roman Imperial times and has been internationally recognized as a powerful and independent sovereign entity from Late Antiquity to the present, without interruption even at times when it was deprived of territory (e.g. 1870 to 1929). The Holy See has the oldest active continuous diplomatic service in the world, dating back to at least AD 325 with its legation to the Council of Nicaea.[63]
The Pope has been ex officio head of state[64] of Vatican City since its creation in 1929, a function dependent on his primordial function as bishop of the diocese of Rome. The term "Holy See" refers not to the Vatican state but to the Pope's spiritual and pastoral governance, largely exercised through the Roman Curia.[65] His official title with regard to Vatican City is Sovereign of the State of the Vatican City.
Pope Francis, born Jorge Mario Bergoglio in Buenos Aires, Argentina, was elected on 13 March 2013. His principal subordinate government official for Vatican City as well as the country's head of government is the President of the Pontifical Commission for Vatican City State, who since 1952 exercises the functions previously belonging to the Governor of Vatican City. Since 2001, the president of the Pontifical Commission for Vatican City State also has the title of president of the Governorate of the State of Vatican City. The president is Italian Cardinal Giuseppe Bertello, who was appointed on 1 October 2011.
Legislative functions are delegated to the unicameral Pontifical Commission for Vatican City State, led by the President of the Pontifical Commission for Vatican City State. Its seven members are cardinals appointed by the Pope for terms of five years. Acts of the commission must be approved by the Pope, through the Holy See's Secretariat of State, and before taking effect must be published in a special appendix of the Acta Apostolicae Sedis. Most of the content of this appendix consists of routine executive decrees, such as approval for a new set of postage stamps.
Executive authority is delegated to the Governorate of Vatican City. The Governorate consists of the President of the Pontifical Commission—using the title "President of the Governorate of Vatican City"—a general secretary, and a Vice general secretary, each appointed by the Pope for five-year terms. Important actions of the Governorate must be confirmed by the Pontifical Commission and by the Pope through the Secretariat of State.
The Governorate oversees the central governmental functions through several departments and offices. The directors and officials of these offices are appointed by the Pope for five-year terms. These organs concentrate on material questions concerning the state's territory, including local security, records, transportation, and finances. The Governorate oversees a modern security and police corps, the Corpo della Gendarmeria dello Stato della Città del Vaticano.
Judicial functions are delegated to a supreme court, an appellate court, a tribunal (Tribunal of Vatican City State), and a trial judge. At the Vatican's request, sentences imposed can be served in Italy (see the section on crime, below).
The international postal country code prefix is SCV, and the only postal code is 00120 – altogether SCV-00120.[66]
As the Vatican City is an enclave within Italy, its military defence is provided by the Italian Armed Forces. However, there is no formal defence treaty with Italy, as the Vatican City is a neutral state. Vatican City has no armed forces of its own, although the Swiss Guard is a military corps of the Holy See responsible for the personal security of the Pope, and residents in the state. Soldiers of the Swiss Guard are entitled to hold Vatican City State passports and nationality. Swiss mercenaries were historically recruited by Popes as part of an army for the Papal States, and the Pontifical Swiss Guard was founded by Pope Julius II on 22 January 1506 as the pope's personal bodyguard and continues to fulfill that function. It is listed in the Annuario Pontificio under "Holy See", not under "State of Vatican City". At the end of 2005, the Guard had 134 members. Recruitment is arranged by a special agreement between the Holy See and Switzerland. All recruits must be Catholic, unmarried males with Swiss citizenship who have completed their basic training with the Swiss Armed Forces with certificates of good conduct, be between the ages of 19 and 30, and be at least 174 cm (5 ft 9 in) in height. Members are equipped with small arms and the traditional halberd (also called the Swiss voulge), and trained in bodyguarding tactics. The Palatine Guard and the Noble Guard, the last armed forces of the Vatican City State, were disbanded by Pope Paul VI in 1970.[41] As Vatican City has listed every building in its territory on the International Register of Cultural Property under Special Protection, the Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict theoretically renders it immune to armed attack.[67]
Civil defence is the responsibility of the Corps of Firefighters of the Vatican City State, the national fire brigade. Dating its origins to the early nineteenth century, the Corps in its present form was established in 1941. It is responsible for fire fighting, as well as a range of civil defence scenarios including flood, natural disaster, and mass casualty management. The Corps is governmentally supervised through the Directorate for Security Services and Civil Defence, which is also responsible for the Gendarmerie (see below).
The Gendarmerie Corps (Corpo della Gendarmeria) is the gendarmerie, or police and security force, of Vatican City and the extraterritorial properties of the Holy See.[68] The corps is responsible for security, public order, border control, traffic control, criminal investigation, and other general police duties in Vatican City including providing security for the Pope outside of Vatican City. The corps has 130 personnel and is a part of the Directorate for Security Services and Civil Defence (which also includes the Vatican Fire Brigade), an organ of the Governorate of Vatican City.[69][70]
Vatican City State is a recognized national territory under international law, but it is the Holy See that conducts diplomatic relations on its behalf, in addition to the Holy See's own diplomacy, entering into international agreements in its regard. Vatican City thus has no diplomatic service of its own.
Because of space limitations, Vatican City is one of the few countries in the world that is unable to host embassies. Foreign embassies to the Holy See are located in the city of Rome; only during the Second World War were the staff of some embassies accredited to the Holy See given what hospitality was possible within the narrow confines of Vatican City—embassies such as that of the United Kingdom while Rome was held by the Axis Powers and Germany's when the Allies controlled Rome.
The size of Vatican City is thus unrelated to the large global reach exercised by the Holy See as an entity quite distinct from the state.[71]
However, Vatican City State itself participates in some international organizations whose functions relate to the state as a geographical entity, distinct from the non-territorial legal persona of the Holy See. These organizations are much less numerous than those in which the Holy See participates either as a member or with observer status. They include the following eight, in each of which Vatican City State holds membership:[72][73]
It also participates in:[72]
The Vatican City State is not a member of the International Criminal Court (ICC). In Europe only Belarus is also a non-party, non-signatory state.
Further, the Vatican City State is not a member of the European Court of Human Rights. Again, only Belarus is also not a member in Europe.
The Vatican has also not signed the OECD's "Common Reporting Standard" (CRS), which aims to prevent tax evasion and money laundering.[75][76][77] The Vatican City State has been criticized for its money-laundering practices in past decades.[78][79][80] The only other country in Europe that has not agreed to sign the CRS is Belarus.
The Vatican City State is also one of few countries in the world that does not provide any publicly available financial data to the IMF.[81]
The Vatican City State budget includes the Vatican Museums and post office and is supported financially by the sale of stamps, coins, medals and tourist mementos; by fees for admission to museums; and by publications sales.[k] The incomes and living standards of lay workers are comparable to those of counterparts who work in the city of Rome.[82] Other industries include printing, the production of mosaics, and the manufacture of staff uniforms. There is a Vatican Pharmacy.
The Institute for Works of Religion (IOR, Istituto per le Opere di Religione), also known as the Vatican Bank, is a financial agency situated in the Vatican that conducts worldwide financial activities. It has multilingual ATMs with instructions in Latin, possibly the only ATMs in the world with this feature.[83]
Vatican City issues its own coins and stamps. It has used the euro as its currency since 1 January 1999, owing to a special agreement with the European Union (council decision 1999/98). Euro coins and notes were introduced on 1 January 2002—the Vatican does not issue euro banknotes. Issuance of euro-denominated coins is strictly limited by treaty, though somewhat more than usual is allowed in a year in which there is a change in the papacy.[84] Because of their rarity, Vatican euro coins are highly sought by collectors.[85] Until the adoption of the Euro, Vatican coinage and stamps were denominated in their own Vatican lira currency, which was on par with the Italian lira.
Vatican City State, which employs nearly 2,000 people, had a surplus of 6.7 million euros in 2007 but ran a deficit in 2008 of over 15 million euros.[86]
In 2012, the US Department of State's International Narcotics Control Strategy Report listed Vatican City for the first time among the nations of concern for money-laundering, placing it in the middle category, which includes countries such as Ireland, but not among the most vulnerable countries, which include the United States itself, Germany, Italy, and Russia.[87]
On 24 February 2014 the Vatican announced it was establishing a secretariat for the economy, to be responsible for all economic, financial and administrative activities of the Holy See and the Vatican City State, headed by Cardinal George Pell. This followed the charging of two senior clerics including a monsignor with money laundering offences. Pope Francis also appointed an auditor-general authorized to carry out random audits of any agency at any time, and engaged a US financial services company to review the Vatican's 19,000 accounts to ensure compliance with international money laundering practices. The pontiff also ordered that the Administration of the Patrimony of the Apostolic See would be the Vatican's central bank, with responsibilities similar to other central banks around the world.[88]
As of 2019, Vatican City had a total population of 825, including 453 residents (regardless of citizenship) and 372 Vatican citizens residing elsewhere (diplomats of the Holy See to other countries and cardinals residing in Rome).[9][89] The population is composed of clergy, other religious members, and lay people serving the state (such as the Swiss Guard) and their family members.[90] All citizens, residents and places of worship in the city are Catholic. The city also receives thousands of tourists and workers every day.
Vatican City has no formally enacted official language, but, unlike the Holy See which most often uses Latin for the authoritative version of its official documents, Vatican City uses only Italian in its legislation and official communications.[91] Italian is also the everyday language used by most of those who work in the state. In the Swiss Guard, Swiss German is the language used for giving commands, but the individual guards take their oath of loyalty in their own languages: German, French, Italian or Romansh. The official websites of the Holy See[92] and of Vatican City[93] are primarily in Italian, with versions of their pages in a large number of languages to varying extents.
Unlike citizenship of other states, which is based either on jus sanguinis (birth from a citizen, even outside the state's territory) or on jus soli (birth within the territory of the state), citizenship of Vatican City is granted jus officii, namely on the grounds of appointment to work in a certain capacity in the service of the Holy See. It usually ceases upon cessation of the appointment. Citizenship is also extended to the spouse and children of a citizen, provided they are living together in the city.[89] Some individuals are also authorized to reside in the city but do not qualify or choose not to request citizenship.[89] Anyone who loses Vatican citizenship and does not possess other citizenship automatically becomes an Italian citizen as provided in the Lateran Treaty.[43]
The Holy See, not being a country, issues only diplomatic and service passports, whereas Vatican City issues normal passports for its citizens.
In statistics comparing countries on various per capita or per area metrics, Vatican City is often an outlier; these anomalies can stem from the state's small size and ecclesiastical function.[94] For example, as most of the roles which would confer citizenship are reserved for men, the gender ratio of the citizenry is several men per woman.[95] Further oddities are petty crimes against tourists resulting in a very high per-capita crime rate,[96] and the city-state leading the world in per-capita wine consumption.[94] A jocular illustration of these anomalies is sometimes made by calculating a "Popes per km2" statistic, which is greater than two because the country is less than half a square kilometre in area.[97]
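A back-of-the-envelope check of that last figure, as a minimal sketch: the area value of roughly 0.44 km2 and the single reigning pope are assumed, illustrative inputs.

# Rough illustration of the "Popes per km2" anomaly.
# Assumed, illustrative values: one reigning pope, area of about 0.44 km2.
popes = 1
area_km2 = 0.44
print(popes / area_km2)  # roughly 2.3, i.e. more than two popes per square kilometre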
Vatican City is home to some of the most famous art in the world. St. Peter's Basilica, whose successive architects include Bramante, Michelangelo, Giacomo della Porta, Maderno and Bernini, is a renowned work of Renaissance architecture. The Sistine Chapel is famous for its frescos, which include works by Perugino, Domenico Ghirlandaio and Botticelli as well as the ceiling and Last Judgment by Michelangelo. Artists who decorated the interiors of the Vatican include Raphael and Fra Angelico.
The Vatican Apostolic Library and the collections of the Vatican Museums are of the highest historical, scientific and cultural importance. In 1984, the Vatican was added by UNESCO to the List of World Heritage Sites; it is the only one to consist of an entire state.[98] Furthermore, it is the only site to date registered with the UNESCO as a centre containing monuments in the "International Register of Cultural Property under Special Protection" according to the 1954 Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict.[98]
Michelangelo's Pietà, in the Basilica, is one of the Vatican's best known artworks
Michelangelo's frescos on the Sistine Chapel ceiling, "an artistic vision without precedent"[99]
The elaborately decorated Sistine Hall in the Vatican Library
Main courtyard of the Vatican Museums
There is a football championship, called the Vatican City Championship, with eight teams, including, for example, the Swiss Guard's FC Guardia and police and museum guard teams.[100]
Vatican City has a reasonably well-developed transport network considering its size (consisting mostly of a piazza and walkways). As a state that is 1.05 kilometres (0.65 miles) long and 0.85 kilometres (0.53 miles) wide,[101] it has a small transportation system with no airports or highways. The only aviation facility in Vatican City is the Vatican City Heliport. Vatican City is one of the few independent countries without an airport, and is served by the airports that serve the city of Rome, Leonardo da Vinci-Fiumicino Airport and to a lesser extent Ciampino Airport.[102]
There is a standard gauge railway, mainly used to transport freight, connected to Italy's network at Rome's Saint Peter's station by an 852-metre-long (932 yd) spur, 300 metres (330 yd) of which is within Vatican territory.[102] Pope John XXIII was the first Pope to make use of the railway; Pope John Paul II rarely used it.[102]
The closest metro station is Ottaviano – San Pietro – Musei Vaticani.[103]
The City is served by an independent, modern telephone system named the Vatican Telephone Service,[104] and a postal system (Poste Vaticane) that started operating on 13 February 1929. On 1 August of that year, the state started to release its own postal stamps, under the authority of the Philatelic and Numismatic Office of the Vatican City State.[105] The City's postal service is sometimes said to be "the best in the world",[106] and faster than the postal service in Rome.[106]
The Vatican also controls its own Internet top-level domain, which is registered as .va. Broadband service is widely provided within Vatican City. Vatican City has also been given a radio ITU prefix, HV, which is sometimes used by amateur radio operators.
Vatican Radio, which was organized by Guglielmo Marconi, broadcasts on short-wave, medium-wave and FM frequencies and on the Internet.[107] Its main transmission antennae are located in Italian territory, and their emissions exceed Italian environmental protection limits; for this reason Vatican Radio has been sued. Television services are provided through another entity, the Vatican Television Center.[108]
L'Osservatore Romano is the multilingual semi-official newspaper of the Holy See. It is published by a private corporation under the direction of Catholic laymen, but reports on official information. However, the official texts of documents are in the Acta Apostolicae Sedis, the official gazette of the Holy See, which has an appendix for documents of the Vatican City State.
Vatican Radio, the Vatican Television Center, and L'Osservatore Romano are organs not of the Vatican State but of the Holy See, and are listed as such in the Annuario Pontificio, which places them in the section "Institutions linked with the Holy See", ahead of the sections on the Holy See's diplomatic service abroad and the diplomatic corps accredited to the Holy See, after which is placed the section on the State of Vatican City.
In 2008, the Vatican established an "ecological island" for renewable waste and has continued the initiative throughout the papacy of Francis. These innovations included, for example, the installation of a solar power system on the roof of the Paul VI Audience Hall. In July 2019, it was announced that Vatican City would ban the use and sale of single-use plastics as soon as its supply was depleted, well before the 2021 deadline established by the European Union. It is estimated that 50–55% of Vatican City's municipal solid waste is properly sorted and recycled, with the goal of reaching the EU standard of 70–75%.[109]
Crime in Vatican City consists largely of purse snatching, pickpocketing and shoplifting by outsiders.[110] The tourist foot-traffic in St. Peter's Square is one of the main locations for pickpockets in Vatican City.[111] If crimes are committed in Saint Peter's Square, the perpetrators may be arrested and tried by the Italian authorities, since that area is normally patrolled by Italian police.[112]
Under the terms of article 22 of the Lateran Treaty,[113] Italy will, at the request of the Holy See, punish individuals for crimes committed within Vatican City and will itself proceed against the person who committed the offence, if that person takes refuge in Italian territory. Persons accused of crimes recognized as such both in Italy and in Vatican City that are committed in Italian territory will be handed over to the Italian authorities if they take refuge in Vatican City or in buildings that enjoy immunity under the treaty.[113][114]
Vatican City has no prison system, apart from a few detention cells for pre-trial detention.[115] People convicted of committing crimes in the Vatican serve terms in Italian prisons, run by the Polizia Penitenziaria, with costs covered by the Vatican.[116]
en/1153.html.txt
ADDED
@@ -0,0 +1,76 @@
The lemon, Citrus limon, is a species of small evergreen tree in the flowering plant family Rutaceae, native to South Asia, primarily northeastern India. Its fruits are ellipsoidal and yellow.
The tree's ellipsoidal yellow fruit is used for culinary and non-culinary purposes throughout the world, primarily for its juice, which has both culinary and cleaning uses.[2] The pulp and rind are also used in cooking and baking. The juice of the lemon is about 5% to 6% citric acid, with a pH of around 2.2, giving it a sour taste. The distinctive sour taste of lemon juice makes it a key ingredient in drinks and foods such as lemonade and lemon meringue pie.
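As a rough sanity check of how the quoted acid content squares with the quoted pH, here is a minimal sketch that treats citric acid as a simple monoprotic weak acid; the concentration (5% taken as 50 g per litre), molar mass, and Ka1 value are assumed, illustrative inputs, and real juice, buffered by citrate salts, ends up slightly less acidic than this idealized estimate.

import math

# Idealized weak-acid estimate of lemon juice pH from its citric acid content.
# Assumed values: 5% citric acid ~ 50 g/L, molar mass ~192 g/mol,
# Ka1 = 7.4e-4 (pKa1 ~ 3.13), and only the first dissociation considered.
concentration = 50 / 192              # mol/L, about 0.26 M
ka1 = 7.4e-4
h_plus = math.sqrt(ka1 * concentration)
print(round(-math.log10(h_plus), 1))  # about 1.9, the same ballpark as the observed ~2.2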
The origin of the lemon is unknown, though lemons are thought to have first grown in Assam (a region in northeast India), northern Burma or China.[2] A genomic study of the lemon indicated it was a hybrid between bitter orange (sour orange) and citron.[3][4]
Lemons entered Europe near southern Italy no later than the second century AD, during the time of Ancient Rome.[2] However, they were not widely cultivated. They were later introduced to Persia and then to Iraq and Egypt around 700 AD.[2] The lemon was first recorded in literature in a 10th-century Arabic treatise on farming, and was also used as an ornamental plant in early Islamic gardens.[2] It was distributed widely throughout the Arab world and the Mediterranean region between 1000 and 1150.[2]
The first substantial cultivation of lemons in Europe began in Genoa in the middle of the 15th century. The lemon was later introduced to the Americas in 1493 when Christopher Columbus brought lemon seeds to Hispaniola on his voyages. Spanish conquest throughout the New World helped spread lemon seeds. It was mainly used as an ornamental plant and for medicine.[2] In the 19th century, lemons were increasingly planted in Florida and California.[2]
In 1747, James Lind's experiments on seamen suffering from scurvy involved adding lemon juice to their diets, though vitamin C was not yet known as an important dietary ingredient.[2][5]
The origin of the word lemon may be Middle Eastern.[2] The word draws from the Old French limon, then Italian limone, from the Arabic laymūn or līmūn, and from the Persian līmūn, a generic term for citrus fruit, which is a cognate of Sanskrit (nimbū, “lime”).[6]
The 'Bonnie Brae' is oblong, smooth, thin-skinned and seedless.[7] These are mostly grown in San Diego County, USA.[8]
The 'Eureka' grows year-round and abundantly. This is the common supermarket lemon,[9] also known as 'Four Seasons' (Quatre Saisons) because of its ability to produce fruit and flowers together throughout the year. This variety is also available as a plant to domestic customers.[10] There is also a pink-fleshed Eureka lemon, with a green and yellow variegated outer skin.[11]
The 'Femminello St. Teresa', or 'Sorrento'[12] is native to Italy. This fruit's zest is high in lemon oils. It is the variety traditionally used in the making of limoncello.
The 'Yen Ben' is an Australasian cultivar.[13]
Lemon is a rich source of vitamin C, providing 64% of the Daily Value in a 100 g reference amount (table). Other essential nutrients are low in content.
Lemons contain numerous phytochemicals, including polyphenols, terpenes, and tannins.[14] Lemon juice contains slightly more citric acid than lime juice (about 47 g/l), nearly twice the citric acid of grapefruit juice, and about five times the amount of citric acid found in orange juice.[15]
Lemon juice, rind, and peel are used in a wide variety of foods and drinks. The whole lemon is used to make marmalade, lemon curd and lemon liqueur. Lemon slices and lemon rind are used as a garnish for food and drinks. Lemon zest, the grated outer rind of the fruit, is used to add flavor to baked goods, puddings, rice, and other dishes.
Lemon juice is used to make lemonade, soft drinks, and cocktails. It is used in marinades for fish, where its acid neutralizes amines in fish by converting them into nonvolatile ammonium salts. In meat, the acid partially hydrolyzes tough collagen fibers, tenderizing it.[16] In the United Kingdom, lemon juice is frequently added to pancakes, especially on Shrove Tuesday.
Lemon juice is also used as a short-term preservative on certain foods that tend to oxidize and turn brown after being sliced (enzymatic browning), such as apples, bananas, and avocados, where its acid denatures the enzymes.
In Morocco, lemons are preserved in jars or barrels of salt. The salt penetrates the peel and rind, softening them, and curing them so that they last almost indefinitely.[17] The preserved lemon is used in a wide variety of dishes. Preserved lemons can also be found in Sicilian, Italian, Greek, and French dishes.
The peel can be used in the manufacture of pectin, a polysaccharide used as a gelling agent and stabilizer in food and other products.[18]
Lemon oil is extracted from oil-containing cells in the skin. A machine breaks up the cells, and uses a water spray to flush off the oil. The oil/water mixture is then filtered and separated by centrifugation.[19]
The leaves of the lemon tree are used to make a tea and for preparing cooked meats and seafoods.
Lemons were the primary commercial source of citric acid before the development of fermentation-based processes.[20]
Lemon oil may be used in aromatherapy. Lemon oil aroma does not influence the human immune system,[21] but may contribute to relaxation.[22]
One educational science experiment involves attaching electrodes to a lemon and using it as a battery to produce electricity. Although very low power, several lemon batteries can power a small digital watch.[23] These experiments also work with other fruits and vegetables.
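A minimal sketch of why several lemons are wired in series rather than one being used alone; the per-cell voltage (around 0.9 V with zinc and copper electrodes) and the watch's requirement (around 1.5 V) are assumed, illustrative figures.

import math

# Lemon cells in series: add cells until the combined voltage
# reaches what the device needs.
# Assumed, illustrative values: ~0.9 V per zinc-copper lemon cell, ~1.5 V for a small digital watch.
volts_per_lemon = 0.9
volts_needed = 1.5
cells_needed = math.ceil(volts_needed / volts_per_lemon)
print(cells_needed)  # 2, i.e. at least two lemons in series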
Lemon juice may be used as a simple invisible ink, developed by heat.[24]
Lemons need a minimum temperature of around 7 °C (45 °F), so they are not hardy year-round in temperate climates, but become hardier as they mature.[25] Citrus require minimal pruning by trimming overcrowded branches, with the tallest branch cut back to encourage bushy growth.[25] Throughout summer, pinching back tips of the most vigorous growth assures more abundant canopy development. As mature plants may produce unwanted, fast-growing shoots (called "water shoots"), these are removed from the main branches at the bottom or middle of the plant.[25]
The tradition of urinating near a lemon tree[26][27][28] may result from color-based sympathetic magic.
In cultivation in the UK, the cultivars "Meyer"[29] and "Variegata"[30] have gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017).[31]
In 2018, world production of lemons (combined with limes for reporting) was 19.4 million tonnes.[32] The top producers – India, Mexico, China, Argentina, Brazil, and Turkey – collectively accounted for 65% of global production (table).[32]
Many plants taste or smell similar to lemons.
Flower
Lemon seedling
Mature lemons
Full-sized tree
Variegated pink lemon
en/1154.html.txt
ADDED
@@ -0,0 +1,109 @@
The Etruscan civilization (/ɪˈtrʌskən/) was a civilization of ancient Italy whose territory covered roughly what is now Tuscany, western Umbria, and northern Lazio,[1][2] as well as parts of what are now the Po Valley, Emilia-Romagna, south-eastern Lombardy, southern Veneto, and Campania.[3]
The earliest evidence of a culture that is identifiably Etruscan dates from about 900 BC. This is the period of the Iron Age Villanovan culture, considered to be the earliest phase of Etruscan civilization,[4][5][6][7][8] which itself developed from the previous late Bronze Age Proto-Villanovan culture in the same region.[9] Etruscan civilization endured until it was assimilated into Roman society. Assimilation began in the late 4th century BC as a result of the Roman–Etruscan Wars;[10] it accelerated with the grant of Roman citizenship in 90 BC, and became complete in 27 BC, when the Etruscans’ territory was incorporated into the newly established Roman Empire.[11]
Etruscan culture was influenced by Ancient Greek culture, beginning around 750 BC (during the last phase of the Villanovan period), when the Greeks, who were at this time in their Archaic Orientalizing period, started founding colonies in southern Italy. Greek influence also occurred in the 4th and 5th centuries BC during Greece’s Classical period.
The territorial extent of Etruscan civilization reached its maximum around 750 BC, during the foundational period of the Roman Kingdom. Its culture flourished in three confederacies of cities: that of Etruria (Tuscany, Latium and Umbria), that of the Po Valley with the eastern Alps, and that of Campania.[12][13] The league in northern Italy is mentioned in Livy.[14][15][16] The reduction in Etruscan territory was gradual, but after 500 BC the political balance of power on the Italian peninsula shifted away from the Etruscans in favor of the rising Roman Republic.[17]
The earliest known examples of Etruscan writing are inscriptions found in southern Etruria that date to around 700 BC.[10][18] The Etruscans developed a system of writing which uses symbols borrowed from Euboean Greek script, but the Etruscan language remains only partly understood, making modern understanding of their society and culture heavily dependent on much later and generally disapproving Roman and Greek sources. In the Etruscan political system, authority resided in its individual small cities, and probably in its prominent individual families. At the height of Etruscan power, elite Etruscan families grew very rich through trade with the Celtic world to the north and the Greeks to the south, and they filled their large family tombs with imported luxuries. Judging from archaeological remains, Archaic Greece had a huge influence on their art and architecture, and Greek mythology was evidently very familiar to them.
The Etruscans called themselves Rasenna, which was shortened to Rasna or Raśna (etymology unknown).[19][20][21]
In Attic Greek, the Etruscans were known as Tyrrhenians (Τυρρηνοί, Turrhēnoi, earlier Τυρσηνοί Tursēnoi), from which the Romans derived the names Tyrrhēnī, Tyrrhēnia (Etruria), and Mare Tyrrhēnum (Tyrrhenian Sea),[22][full citation needed] prompting some to associate them with the Teresh (one of the Sea Peoples named by the Egyptians).
The ancient Romans referred to the Etruscans as the Tuscī or Etruscī (singular Tuscus).[23][24] Their Roman name is the origin of the terms "Toscana", which refers to their heartland, and "Etruria", which can refer to their wider region. The term Tusci is thought by linguists to have been the Umbrian word for "Etruscan", based on an inscription on an ancient bronze tablet from a nearby region.[25] The inscription contains the phrase turskum ... nomen, literally "the Tuscan name". Based on a knowledge of Umbrian grammar, linguists can infer that the base form of the word turskum is *Tursci,[26] which would, through metathesis and a word-initial epenthesis, be likely to lead to the form, E-trus-ci.[27]
As for the original meaning of the root, *Turs-, a widely cited hypothesis is that it, like the word Latin turris, means "tower", and comes from the Greek word for tower: τύρσις.[28] On this hypothesis, the Tusci were called the "people who build towers"[28] or "the tower builders".[29] This proposed etymology is made the more plausible because the Etruscans preferred to build their towns on high precipices reinforced by walls. Alternatively, Giuliano and Larissa Bonfante have speculated that Etruscan houses may have seemed like towers to the simple Latins.[30] The proposed etymology has a long history, Dionysius of Halicarnassus having observed long ago, "[T]here is no reason that the Greeks should not have called [the Etruscans] by this name, both from their living in towers and from the name of one of their rulers."[31]
Literary and historical texts in the Etruscan language have not survived, and the language itself is only partially understood by modern scholars. As previously noted, this makes modern understanding of their society and culture heavily dependent on much later and generally disapproving Roman and Greek sources. These ancient writers differed in their theories about the origin of the Etruscan people. Some suggested they were Pelasgians who had migrated there from Greece. Others maintained that they were indigenous to central Italy and were not from Greece.
The first Greek author to mention the Etruscans, whom the Ancient Greeks called Tyrrhenians, was the 8th-century BC poet Hesiod, in his work, the Theogony. He merely described them as residing in central Italy alongside the Latins.[32] The 7th-century BC Homeric Hymn to Dionysus[33] referred to them merely as pirates.[34] Unlike later Greek authors, these authors did not suggest that Etruscans had migrated to Italy from the east, and did not associate them with the Pelasgians.
It was only in the 5th century BC, when the Etruscan civilization had been established for several centuries, that Greek writers started associating the name “Tyrrhenians” with the “Pelasgians,” and even then, some did so in a way that suggests they were meant only as generic, descriptive labels for “non-Greek” and “indigenous ancestors of Greeks,” respectively.[35]
The 5th-century BC historians Thucydides[36] and Herodotus,[37] and the 1st-century BC historian Strabo[38][full citation needed], did seem to suggest that the Tyrrhenians were originally Pelasgians who migrated to Italy from Lydia by way of the Greek island of Lemnos. They all described Lemnos as having been settled by Pelasgians, whom Thucydides identified as "belonging to the Tyrrhenians" (τὸ δὲ πλεῖστον Πελασγικόν, τῶν καὶ Λῆμνόν ποτε καὶ Ἀθήνας Τυρσηνῶν). As Strabo and Herodotus told it,[39] the migration to Lemnos was led by Tyrrhenus / Tyrsenos, the son of Atys (who was king of Lydia). Strabo[38] added that the Pelasgians of Lemnos and Imbros then followed Tyrrhenus to the Italian Peninsula. And, according to the logographer Hellanicus of Lesbos, there was a Pelasgian migration from Thessaly in Greece to the Italian peninsula, as part of which the Pelasgians colonized the area he called Tyrrhenia, and they then came to be called Tyrrhenians.[40]
There is some evidence suggesting a link between the island of Lemnos and the Tyrrhenians. The Lemnos Stele bears inscriptions in a language with strong structural resemblances to the language of the Etruscans.[41] The discovery of these inscriptions in modern times has led to the suggestion of a "Tyrrhenian language group" comprising Etruscan, Lemnian, and the Raetic spoken in the Alps.
However, the 1st-century BC historian Dionysius of Halicarnassus, a Greek living in Rome, dismissed many of the ancient theories of other Greek historians and postulated that the Etruscans were indigenous people who had always lived in Etruria and were different from both the Pelasgians and the Lydians.[42] Dionysius noted that the 5th-century historian Xanthus of Lydia, who was originally from Sardis and was regarded as an important source and authority for the history of Lydia, never suggested a Lydian origin of the Etruscans and never named Tyrrhenus as a ruler of the Lydians.[42]
For this reason, therefore, I am persuaded that the Pelasgians are a different people from the Tyrrhenians. And I do not believe, either, that the Tyrrhenians were a colony of the Lydians; for they do not use the same language as the latter, nor can it be alleged that, though they no longer speak a similar tongue, they still retain some other indications of their mother country. For they neither worship the same gods as the Lydians nor make use of similar laws or institutions, but in these very respects they differ more from the Lydians than from the Pelasgians. Indeed, those probably come nearest to the truth who declare that the nation migrated from nowhere else, but was native to the country, since it is found to be a very ancient nation and to agree with no other either in its language or in its manner of living.
The credibility of Dionysius of Halicarnassus is arguably bolstered by the fact that he was the first ancient writer to report the endonym of the Etruscans: Rasenna.
The Romans, however, give them other names: from the country they once inhabited, named Etruria, they call them Etruscans, and from their knowledge of the ceremonies relating to divine worship, in which they excel others, they now call them, rather inaccurately, Tusci, but formerly, with the same accuracy as the Greeks, they called them Thyoscoï [an earlier form of Tusci]. Their own name for themselves, however, is the same as that of one of their leaders, Rasenna.
Similarly, the 1st-century BC historian Livy, in his Ab Urbe Condita Libri, said that the Rhaetians were Etruscans who had been driven into the mountains by the invading Gauls; and he asserted that the inhabitants of Raetia were of Etruscan origin.[43]
The Alpine tribes have also, no doubt, the same origin (of the Etruscans), especially the Raetians; who have been rendered so savage by the very nature of the country as to retain nothing of their ancient character save the sound of their speech, and even that is corrupted.
First-century historian Pliny the Elder also put the Etruscans in the context of the Rhaetian people to the north, and wrote in his Natural History (AD 79):[44]
Adjoining these the (Alpine) Noricans are the Raeti and Vindelici. All are divided into a number of states. The Raeti are believed to be people of Tuscan race driven out by the Gauls, their leader was named Raetus.
The question of Etruscan origins has long been a subject of interest and debate among historians. In modern times, all the evidence gathered so far by etruscologists points to an indigenous origin of the Etruscans.[45][46] Archaeologically there is no evidence for a migration of the Lydians or the Pelasgians into Etruria.[45][46] Modern etruscologists and archeologists, such as Massimo Pallottino (1947), have shown that early historians’ assumptions and assertions on the subject were groundless.[47] In 2000, the etruscologist Dominique Briquel explained in detail why he believes that ancient Greek historians’ writings on Etruscan origins should not even count as historical documents.[48] He argues that the ancient story of the Etruscans’ 'Lydian origins' was a deliberate, politically motivated fabrication, and that ancient Greeks inferred a connection between the Tyrrhenians and the Pelasgians solely on the basis of certain Greek and local traditions and on the mere fact that there had been trade between the Etruscans and Greeks.[49][50] He noted that, even if these stories include historical facts suggesting contact, such contact is more plausibly traceable to cultural exchange than to migration.[51]
Several archaeologists who have analyzed Bronze Age and Iron Age remains that were excavated in the territory of historical Etruria have pointed out that no evidence has been found, related either to material culture or to social practices, that can support a migration theory.[52] The most marked and radical change that has been archaeologically attested in the area is the adoption, starting in about the 12th century BC, of the funeral rite of incineration in terracotta urns, which is a Continental European practice, derived from the Urnfield culture; there is nothing about it that suggests an ethnic contribution from Asia Minor or the Near East.[52]
A 2012 survey of the previous 30 years’ archaeological findings, based on excavations of the major Etruscan cities, showed a continuity of culture from the last phase of the Bronze Age (11th–10th century BC) to the Iron Age (9th–8th century BC). This is evidence that the Etruscan civilization, which emerged around 900 BC, was built by people whose ancestors had inhabited that region for at least the previous 200 years.[53] Based on this cultural continuity, there is now a consensus among archeologists that Proto-Etruscan culture developed, during the last phase of the Bronze Age, from the indigenous Proto-Villanovan culture, and that the subsequent Iron Age Villanovan culture is most accurately described as an early phase of the Etruscan civilization.[9] It is possible that there were contacts between northern-central Italy and the Mycenaean world at the end of the Bronze Age. However contacts between the inhabitants of Etruria and inhabitants of Greece, Aegean Sea Islands, Asia Minor, and the Near East are attested only centuries later, when Etruscan civilization was already flourishing and Etruscan ethnogenesis was well established. The first of these attested contacts relate to the Greek colonies in Southern Italy and the consequent orientalizing period.[54]
A mtDNA study in 2004 stated that the Etruscans had no significant heterogeneity, and that all mitochondrial lineages observed among the Etruscan samples appear typically European or West Asian, but only a few haplotypes were shared with modern populations. Allele sharing between the Etruscans and modern populations is highest among Germans (seven haplotypes in common), the Cornish from South West England (five haplotypes in common), the Turks (four haplotypes in common), and the Tuscans (two haplotypes in common).[55]
A mitochondrial DNA study (2013) also concluded that the Etruscans were an indigenous population, showing that Etruscans' mtDNA appear to fall very close to a Neolithic population from Central Europe (Germany, Austria, Hungary) and to other Tuscan populations, strongly suggesting that the Etruscan civilization developed locally from the Villanovan culture, as already supported by archaeological evidence and anthropological research,[9][56] and that genetic links between Tuscany and western Anatolia date back to at least 5,000 years ago during the Neolithic and the "most likely separation time between Tuscany and Western Anatolia falls around 7,600 years ago", at the time of the migrations of Early European Farmers (EEF) from Anatolia to Europe in the early Neolithic. The ancient Etruscan samples had mitochondrial DNA haplogroups (mtDNA) JT (subclades of J and T) and U5, with a minority of mtDNA H1b.[57][58] According to British archeologist Phil Perkins, "there are indications that the evidence of DNA can support the theory that Etruscan people are autochthonous in central Italy".[59][60]
A 2019 genetic study published in the journal Science analyzed the remains of eleven Iron Age individuals from the areas around Rome, of which four were Etruscan individuals, one buried in Veio Grotta Gramiccia from the Villanovan era (900-800 BC) and three buried in La Mattonara Necropolis near Civitavecchia from the Orientalizing period (700-600 BC). The study concluded that Etruscans (900–600 BC) and the Latins (900–500 BC) from Latium vetus were genetically similar,[61] and that genetic differences between the examined Etruscans and Latins were insignificant.[62] The Etruscan individuals and contemporary Latins were distinguished from preceding populations of Italy by the presence of ca. 30-40% steppe ancestry.[63] Their DNA was a mixture of two-thirds Copper Age ancestry (EEF + WHG; Etruscans ~66–72%, Latins ~62–75%) and one-third Steppe-related ancestry (Etruscans ~27–33%, Latins ~24–37%).[61] The only sample of Y-DNA extracted belonged to haplogroup J-M12 (J2b-L283), found in an individual dated 700-600 BC, and carried exactly the M314 derived allele also found in a Middle Bronze Age individual from Croatia (1631-1531 calBCE). The four samples of mtDNA extracted belonged to haplogroups U5a1, H, T2b32, and K1a4.[64] The Etruscans therefore also had Steppe-related ancestry despite speaking a pre-Indo-European language.
The Etruscan civilization begins with the Villanovan culture, regarded as its oldest phase.[4][5][6][7][8] The Etruscans themselves dated the origin of their nation to a period corresponding to the 11th or 10th century BC.[5][65] The Villanovan culture emerged, through a process of regionalization, from the late Bronze Age culture called "Proto-Villanovan", part of the central European Urnfield culture system. In the last Villanovan phase, called the recent phase (about 770–730 BC), the Etruscans established fairly consistent relations with the first Greek immigrants in southern Italy (in Pithecusa and then in Cuma). They initially absorbed Greek techniques and figurative models, and soon broader cultural models as well, including writing, a new way of banqueting, and a heroic funerary ideology; in other words, a new aristocratic way of life that profoundly changed the character of Etruscan society.[65] Thus, thanks to the growing number of contacts with the Greeks, the Etruscans entered what is called the Orientalizing phase. In this phase Greece, most of Italy, and some areas of Spain were heavily influenced by the most advanced areas of the eastern Mediterranean and the ancient Near East.[66] Phoenician and other Near Eastern craftsmen, merchants and artists also contributed directly to the spread of Near Eastern cultural and artistic motifs in southern Europe. The last three phases of Etruscan civilization are called, respectively, Archaic, Classical and Hellenistic, roughly corresponding to the phases of the same names in ancient Greek civilization.
Etruscan expansion was focused both to the north beyond the Apennine Mountains and into Campania. Some small towns in the sixth century BC disappeared during this time, ostensibly subsumed by greater, more powerful neighbours. However, it is certain that the political structure of the Etruscan culture was similar to, albeit more aristocratic than, Magna Graecia in the south. The mining and commerce of metal, especially copper and iron, led to an enrichment of the Etruscans and to the expansion of their influence in the Italian peninsula and the western Mediterranean Sea. Here, their interests collided with those of the Greeks, especially in the sixth century BC, when Phocaeans of Italy founded colonies along the coast of Sardinia, Spain and Corsica. This led the Etruscans to ally themselves with Carthage, whose interests also collided with the Greeks.[68][69]
Around 540 BC, the Battle of Alalia led to a new distribution of power in the western Mediterranean. Though the battle had no clear winner, Carthage managed to expand its sphere of influence at the expense of the Greeks, and Etruria saw itself relegated to the northern Tyrrhenian Sea with full ownership of Corsica. From the first half of the 5th century BC, the new political situation meant the beginning of the Etruscan decline after losing their southern provinces. In 480 BC, Etruria's ally Carthage was defeated by a coalition of Magna Graecia cities led by Syracuse, Sicily. A few years later, in 474 BC, Syracuse's tyrant Hiero defeated the Etruscans at the Battle of Cumae. Etruria's influence over the cities of Latium and Campania weakened, and the area was taken over by Romans and Samnites.
In the 4th century BC, Etruria saw a Gallic invasion end its influence over the Po Valley and the Adriatic coast. Meanwhile, Rome had started annexing Etruscan cities. This led to the loss of the northern Etruscan provinces. During the Roman–Etruscan Wars, Etruria was conquered by Rome in the 3rd century BC.[68][69]
According to legend,[70] there was a period between 600 BC and 500 BC in which an alliance was formed among twelve Etruscan settlements, known today as the Etruscan League, Etruscan Federation, or Dodecapolis (in Greek Δωδεκάπολις). The league of twelve cities was said to have been founded by Tarchon and his brother Tyrrhenus. Tarchon lent his name to the city of Tarchna, or Tarquinii, as it was known to the Romans. Tyrrhenus gave his name to the Tyrrhenians, the alternative name for the Etruscans. Although there is no consensus on which cities were in the league, the following list may be close to the mark: Arretium, Caisra, Clevsin, Curtun, Perusna, Pupluna, Veii, Tarchna, Vetluna, Volterra, Velzna, and Velch. Some modern authors include Rusellae.[71] The league was mostly an economic and religious league, or a loose confederation, similar to the Greek states. During later imperial times, when Etruria was just one of many regions controlled by Rome, the number of cities in the league increased by three. This is noted on many later grave stones from the 2nd century BC onwards. According to Livy, the twelve city-states met once a year at the Fanum Voltumnae at Volsinii, where a leader was chosen to represent the league.[72]
There were two other Etruscan leagues ("Lega dei popoli"): that of Campania, the main city of which was Capua, and the Po Valley city-states in northern Italy, which included Bologna, Spina and Adria.
Those who subscribe to a Latin foundation of Rome followed by an Etruscan invasion typically speak of an Etruscan "influence" on Roman culture – that is, cultural objects which were adopted by Rome from neighbouring Etruria. The prevailing view is that Rome was founded by Latins who later merged with Etruscans. In this interpretation, Etruscan cultural objects are considered influences rather than part of a heritage.[73] Rome was probably a small settlement until the arrival of the Etruscans, who constructed the first elements of its urban infrastructure such as the drainage system.[74][75]
The main criterion for deciding whether an object originated at Rome and traveled by influence to the Etruscans, or descended to the Romans from the Etruscans, is date. Many, if not most, of the Etruscan cities were older than Rome. If one finds that a given feature was there first, it cannot have originated at Rome. A second criterion is the opinion of the ancient sources. These would indicate that certain institutions and customs came directly from the Etruscans. Rome is located on the edge of what was Etruscan territory. When Etruscan settlements turned up south of the border, it was presumed that the Etruscans spread there after the foundation of Rome, but the settlements are now known to have preceded Rome.
Etruscan settlements were frequently built on hills – the steeper the better – and surrounded by thick walls. According to Roman mythology, when Romulus and Remus founded Rome, they did so on the Palatine Hill according to Etruscan ritual; that is, they began with a pomerium or sacred ditch. Then, they proceeded to the walls. Romulus was required to kill Remus when the latter jumped over the wall, breaking its magic spell (see also under Pons Sublicius). The name of Rome is attested in Etruscan in the form Ruma-χ meaning 'Roman', a form that mirrors other attested ethnonyms in that language with the same suffix -χ: Velzna-χ '(someone) from Volsinii' and Sveama-χ '(someone) from Sovana'. This in itself, however, is not enough to prove Etruscan origin conclusively. If Tiberius is from θefarie, then Ruma would have been placed on the Thefar (Tiber) river. A heavily discussed topic among scholars is who was the founding population of Rome. In 390 BC, the city of Rome was attacked by the Gauls, and as a result may have lost many – though not all – of its earlier records. Certainly, the history of Rome before that date is not as secure as it later becomes, but enough material remains to give a good picture of the development of the city and its institutions.[citation needed]
Later history relates that some Etruscans lived in the Vicus Tuscus,[76] the "Etruscan quarter", and that there was an Etruscan line of kings (albeit ones descended from a Greek, Demaratus of Corinth) that succeeded kings of Latin and Sabine origin. Etruscophile historians would argue that this, together with evidence for institutions, religious elements and other cultural elements, proves that Rome was founded by Etruscans. The true picture is rather more complicated, not least because the Etruscan cities were separate entities which never came together to form a single Etruscan state. Furthermore, there were strong Latin and Italic elements to Roman culture, and later Romans proudly celebrated these multiple, 'multicultural' influences on the city.
Under Romulus and Numa Pompilius, the people were said to have been divided into thirty curiae and three tribes. Few Etruscan words entered Latin, but the names of at least two of the tribes – Ramnes and Luceres – seem to be Etruscan. The last kings may have borne the Etruscan title lucumo, while the regalia were traditionally considered of Etruscan origin – the golden crown, the sceptre, the toga palmata (a special robe), the sella curulis (curule chair), and above all the primary symbol of state power: the fasces. The latter was a bundle of whipping rods surrounding a double-bladed axe, carried by the king's lictors. An example of the fasces is provided by the remains of bronze rods and an axe from a tomb in Etruscan Vetulonia, which allowed archaeologists to identify the depiction of fasces on the grave stele of Avele Feluske, who is shown as a warrior wielding them. The most telling Etruscan feature is the word populus, which appears as an Etruscan deity, Fufluns. Populus seems to mean the people assembled in a military body, rather than the general populace.[citation needed]
The historical Etruscans had achieved a state system of society, with remnants of the chiefdom and tribal forms. In this, they were different from the surrounding Italics, who had chiefs and tribes.[citation needed] Rome was in a sense the first Italic state, but it began as an Etruscan one. It is believed that the Etruscan government style changed from total monarchy to oligarchic republic (as the Roman Republic) in the 6th century BC, although it is important to note this did not happen to all the city-states.[citation needed]
The government was viewed as being a central authority, ruling over all tribal and clan organizations. It retained the power of life and death; in fact, the gorgon, an ancient symbol of that power, appears as a motif in Etruscan decoration. The adherents to this state power were united by a common religion. Political unity in Etruscan society was the city-state, which was probably the referent of methlum, "district". Etruscan texts name quite a number of magistrates, without much of a hint as to their function: The camthi, the parnich, the purth, the tamera, the macstrev, and so on. The people were the mech. The chief ruler of a methlum was perhaps a zilach.[citation needed]
The princely tombs were not of individuals. The inscription evidence shows that families were interred there over long periods, marking the growth of the aristocratic family as a fixed institution, parallel to the gens at Rome and perhaps even its model. The Etruscans could have used any model of the eastern Mediterranean. That the growth of this class is related to the new acquisition of wealth through trade is unquestioned. The wealthiest cities were located near the coast. At the centre of the society was the married couple, tusurthir. The Etruscans were a monogamous society that emphasized pairing.
Similarly, the behaviour of some wealthy women is not uniquely Etruscan. The apparent promiscuous revelry has a spiritual explanation. Swaddling and Bonfante (among others) explain that depictions of the nude embrace, or symplegma, "had the power to ward off evil", as did baring the breast, which was adopted by western culture as an apotropaic device, appearing finally on the figureheads of sailing ships as a nude female upper torso. It is also possible that Greek and Roman attitudes to the Etruscans were based on a misunderstanding of the place of women within their society. In both Greece and the Earliest Republican Rome, respectable women were confined to the house and mixed-sex socialising did not occur. Thus, the freedom of women within Etruscan society could have been misunderstood as implying their sexual availability.[77] It is worth noting that a number of Etruscan tombs carry funerary inscriptions in the form "X son of (father) and (mother)", indicating the importance of the mother's side of the family.[77]
The Etruscans, like the contemporary cultures of Ancient Greece and Ancient Rome, had a significant military tradition. In addition to marking the rank and power of certain individuals, warfare brought considerable economic advantage to Etruscan civilization. Like many ancient societies, the Etruscans conducted campaigns during the summer months, raiding neighboring areas, attempting to gain territory and combating piracy as a means of acquiring valuable resources such as land, prestige, goods, and slaves. It is likely that individuals taken in battle would be ransomed back to their families and clans at high cost. Prisoners could also potentially be sacrificed on tombs as an honor to fallen leaders of Etruscan society, not unlike the sacrifices made by Achilles for Patroclus.[78][79][80]
The range of Etruscan civilization is marked by its cities. They were entirely assimilated by Italic, Celtic, or Roman ethnic groups, but the names survive from inscriptions and their ruins are of aesthetic and historic interest in most of the cities of central Italy. Etruscan cities flourished over most of Italy during the Roman Iron Age, marking the farthest extent of Etruscan civilization. They were gradually assimilated first by Italics in the south, then by Celts in the north and finally in Etruria itself by the growing Roman Republic.[78]
That many Roman cities were formerly Etruscan was well known to all the Roman authors. Some cities were founded by Etruscans in prehistoric times and bore entirely Etruscan names. Others were colonized by Etruscans, who Etruscanized their names, which were usually Italic.[79]
The Etruscan system of belief was an immanent polytheism; that is, all visible phenomena were considered to be a manifestation of divine power and that power was subdivided into deities that acted continually on the world of man and could be dissuaded or persuaded in favour of human affairs. How to understand the will of deities, and how to behave, had been revealed to the Etruscans by two initiators, Tages, a childlike figure born from tilled land and immediately gifted with prescience, and Vegoia, a female figure. Their teachings were kept in a series of sacred books. Three layers of deities are evident in the extensive Etruscan art motifs. One appears to be divinities of an indigenous nature: Catha and Usil, the sun; Tivr, the moon; Selvans, a civil god; Turan, the goddess of love; Laran, the god of war; Leinth, the goddess of death; Maris; Thalna; Turms; and the ever-popular Fufluns, whose name is related in some way to the city of Populonia and the populus Romanus, possibly, the god of the people.[81][82]
Ruling over this pantheon of lesser deities were higher ones that seem to reflect the Indo-European system: Tin or Tinia, the sky, Uni his wife (Juno), and Cel, the earth goddess. In addition, some Greek and Roman gods were taken into the Etruscan system: Aritimi (Artemis), Menrva (Minerva), Pacha (Dionysus). The Greek heroes taken from Homer also appear extensively in art motifs.[81][82]
Relatively little is known about the architecture of the ancient Etruscans. They adapted the native Italic styles with influence from the external appearance of Greek architecture. In turn, ancient Roman architecture began with Etruscan styles, and then accepted still further Greek influence. Roman temples show many of the same differences in form to Greek ones that Etruscan temples do, but like the Greeks, use stone, in which they closely copy Greek conventions. The houses of the wealthy were evidently often large and comfortable, but the burial chambers of tombs, often filled with grave-goods, are the nearest approach to them to survive. In the southern Etruscan area, tombs have large rock-cut chambers under a tumulus in large necropoleis, and these, together with some city walls, are the only Etruscan constructions to survive. Etruscan architecture is not generally considered as part of the body of Greco-Roman classical architecture.[83]
Etruscan art was produced by the Etruscan civilization between the 9th and 2nd centuries BC. Particularly strong in this tradition were figurative sculpture in terracotta (particularly lifesize on sarcophagi or temples), wall-painting and metalworking (especially engraved bronze mirrors). Etruscan sculpture in cast bronze was famous and widely exported, but few large examples have survived (the material was too valuable, and recycled later). In contrast to terracotta and bronze, there was apparently little Etruscan sculpture in stone, despite the Etruscans controlling fine sources of marble, including Carrara marble, which seems not to have been exploited until the Romans. Most surviving Etruscan art comes from tombs, including all the fresco wall-paintings, which show scenes of feasting and some narrative mythological subjects.[citation needed]
Bucchero wares in black were the early and native styles of fine Etruscan pottery. There was also a tradition of elaborate Etruscan vase painting, which sprang from its Greek equivalent; the Etruscans were the main export market for Greek vases. Etruscan temples were heavily decorated with colourfully painted terracotta antefixes and other fittings, which survive in large numbers where the wooden superstructure has vanished. Etruscan art was strongly connected to religion; the afterlife was of major importance in Etruscan art.[84]
The Etruscan musical instruments seen in frescoes and bas-reliefs are different types of pipes, such as the plagiaulos (the pipes of Pan or Syrinx), the alabaster pipe and the famous double pipes, accompanied on percussion instruments such as the tintinnabulum, tympanum and crotales, and later by stringed instruments like the lyre and kithara.
The Etruscans left around 13,000 inscriptions that have been found so far, only a small minority of which are of significant length. Etruscan is attested from 700 BC to AD 50, and its relation to other languages has been a source of long-running speculation and study. The Etruscans are believed to have spoken a pre–Indo-European language,[85][86][87] and the majority consensus is that Etruscan is related only to other members of what is called the Tyrsenian language family, which is itself an isolate family, that is, unrelated directly to other known language groups. Since Rix (1998), it has been widely accepted that Raetic and Lemnian, the other members of the Tyrsenian family, are related to Etruscan.[10]
Etruscan texts, written over a span of seven centuries, use a form of the Greek alphabet adopted through close contact between the Etruscans and the Greek colonies at Pithecusae and Cumae in the 8th century BC; the script remained in use until the beginning of the 1st century AD, when Etruscan inscriptions disappeared from Chiusi, Perugia and Arezzo. Only a few fragments survive, religious and especially funerary texts, most of which are late (from the 4th century BC). In addition to the original texts that have survived to this day, there are a large number of quotations and allusions in classical authors. In the 1st century BC, Diodorus Siculus wrote that literary culture was one of the great achievements of the Etruscans. Little is known of it, and even what is known of their language is due largely to the repetition of the same few words in the many inscriptions found (much like modern epitaphs), contrasted in bilingual or trilingual texts with Latin and Punic. Of this literature, just one author, Volnio (Volnius), is mentioned by name in classical sources.[88] With a few exceptions, such as the Liber Linteus, the only written records in the Etruscan language that remain are inscriptions, mainly funerary. The language is written in the Etruscan alphabet, a script related to the early Euboean Greek alphabet.[89] Many thousands of inscriptions in Etruscan are known, mostly epitaphs, and a few very short texts have survived, which are mainly religious. Etruscan imaginative literature is attested only in references by later Roman authors, but it is evident from their visual art that the Greek myths were well known.[90]
en/1155.html.txt
ADDED
@@ -0,0 +1,168 @@
Ancient Greece (Greek: Ἑλλάς, romanized: Hellás) was a civilization belonging to a period of Greek history from the Greek Dark Ages of the 12th–9th centuries BC to the end of antiquity (c. AD 600). Immediately following this period was the beginning of the Early Middle Ages and the Byzantine era.[1] Roughly three centuries after the Late Bronze Age collapse of Mycenaean Greece, Greek urban poleis began to form in the 8th century BC, ushering in the Archaic period and the colonization of the Mediterranean Basin. This was followed by the period of Classical Greece, an era that began with the Greco-Persian Wars, lasting from the 5th to the 4th centuries BC. Due to the conquests by Alexander the Great of Macedon, Hellenistic civilization flourished from Central Asia to the western end of the Mediterranean Sea. The Hellenistic period came to an end with the conquests and annexations of the eastern Mediterranean world by the Roman Republic, which established the Roman province of Macedonia in Roman Greece, and later the province of Achaea during the Roman Empire.
Classical Greek culture, especially philosophy, had a powerful influence on ancient Rome, which carried a version of it to many parts of the Mediterranean Basin and Europe. For this reason, Classical Greece is generally considered to be the seminal culture which provided the foundation of modern Western culture and is considered the cradle of Western civilization.[2][3][4]
Classical antiquity in the Mediterranean region is commonly considered to have begun in the 8th century BC[5] (around the time of the earliest recorded poetry of Homer) and ended in the 6th century AD.
Classical antiquity in Greece was preceded by the Greek Dark Ages (c. 1200 – c. 800 BC), archaeologically characterised by the protogeometric and geometric styles of designs on pottery. Following the Dark Ages was the Archaic Period, beginning around the 8th century BC. The Archaic Period saw early developments in Greek culture and society which formed the basis for the Classical Period.[6] After the Archaic Period, the Classical Period in Greece is conventionally considered to have lasted from the Persian invasion of Greece in 480 until the death of Alexander the Great in 323.[7] The period is characterized by a style which was considered by later observers to be exemplary, i.e., "classical", as shown in the Parthenon, for instance. Politically, the Classical Period was dominated by Athens and the Delian League during the 5th century, but displaced by Spartan hegemony during the early 4th century BC, before power shifted to Thebes and the Boeotian League and finally to the League of Corinth led by Macedon. This period saw the Greco-Persian Wars and the Rise of Macedon.
Following the Classical period was the Hellenistic period (323–146 BC), during which Greek culture and power expanded into the Near and Middle East. This period begins with the death of Alexander and ends with the Roman conquest. Roman Greece is usually considered to be the period between Roman victory over the Corinthians at the Battle of Corinth in 146 BC and the establishment of Byzantium by Constantine as the capital of the Roman Empire in AD 330. Finally, Late Antiquity refers to the period of Christianization during the later 4th to early 6th centuries AD, sometimes taken to be complete with the closure of the Academy of Athens by Justinian I in 529.[8]
The historical period of ancient Greece is unique in world history as the first period attested directly in proper historiography, while earlier ancient history or proto-history is known by much more circumstantial evidence, such as annals or king lists, and pragmatic epigraphy.
Herodotus is widely known as the "father of history": his Histories are eponymous of the entire field. Written between the 450s and 420s BC, Herodotus' work reaches about a century into the past, discussing 6th century historical figures such as Darius I of Persia, Cambyses II and Psamtik III, and alluding to some 8th century ones such as Candaules.
Herodotus was succeeded by authors such as Thucydides, Xenophon, Demosthenes, Plato and Aristotle. Most of these authors were either Athenian or pro-Athenian, which is why far more is known about the history and politics of Athens than those of many other cities.
Their scope is further limited by a focus on political, military and diplomatic history, ignoring economic and social history.[9]
In the 8th century BC, Greece began to emerge from the Dark Ages which followed the fall of the Mycenaean civilization. Literacy had been lost and Mycenaean script forgotten, but the Greeks adopted the Phoenician alphabet, modifying it to create the Greek alphabet. Objects with Phoenician writing on them may have been available in Greece from the 9th century BC, but the earliest evidence of Greek writing comes from graffiti on Greek pottery from the mid-8th century.[10] Greece was divided into many small self-governing communities, a pattern largely dictated by Greek geography: every island, valley and plain is cut off from its neighbors by the sea or mountain ranges.[11]
The Lelantine War (c. 710 – c. 650 BC) is the earliest documented war of the ancient Greek period. It was fought between the important poleis (city-states) of Chalcis and Eretria over the fertile Lelantine plain of Euboea. Both cities seem to have suffered a decline as result of the long war, though Chalcis was the nominal victor.
A mercantile class arose in the first half of the 7th century BC, shown by the introduction of coinage in about 680 BC.[12] This seems to have introduced tension to many city-states. The aristocratic regimes which generally governed the poleis were threatened by the new-found wealth of merchants, who in turn desired political power. From 650 BC onwards, the aristocracies had to fight not to be overthrown and replaced by populist tyrants.[a]
A growing population and a shortage of land also seem to have created internal strife between the poor and the rich in many city-states. In Sparta, the Messenian Wars resulted in the conquest of Messenia and enserfment of the Messenians, beginning in the latter half of the 8th century BC, an act without precedent in ancient Greece. This practice allowed a social revolution to occur.[15] The subjugated population, thenceforth known as helots, farmed and labored for Sparta, whilst every Spartan male citizen became a soldier of the Spartan Army in a permanently militarized state. Even the elite were obliged to live and train as soldiers; this commonality between rich and poor citizens served to defuse the social conflict. These reforms, attributed to Lycurgus of Sparta, were probably complete by 650 BC.
Athens suffered a land and agrarian crisis in the late 7th century BC, again resulting in civil strife. The Archon (chief magistrate) Draco made severe reforms to the law code in 621 BC (hence "draconian"), but these failed to quell the conflict. Eventually the moderate reforms of Solon (594 BC), improving the lot of the poor but firmly entrenching the aristocracy in power, gave Athens some stability.
By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well.
Rapidly increasing population in the 8th and 7th centuries BC had resulted in emigration of many Greeks to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield. The emigration effectively ceased in the 6th century BC by which time the Greek world had, culturally and linguistically, become much larger than the area of present-day Greece. Greek colonies were not politically controlled by their founding cities, although they often retained religious and commercial links with them.
The emigration process also determined a long series of conflicts between the Greek cities of Sicily, especially Syracuse, and the Carthaginians. These conflicts lasted from 600 BC to 265 BC when the Roman Republic entered into an alliance with the Mamertines to fend off the hostilities by the new tyrant of Syracuse, Hiero II and then the Carthaginians. This way Rome became the new dominant power against the fading strength of the Sicilian Greek cities and the Carthaginian supremacy in the region. One year later the First Punic War erupted.
In this period, there was huge economic development in Greece, and also in its overseas colonies which experienced a growth in commerce and manufacturing. There was a great improvement in the living standards of the population. Some studies estimate that the average size of the Greek household, in the period from 800 BC to 300 BC, increased five times, which indicates[citation needed] a large increase in the average income of the population.
In the second half of the 6th century BC, Athens fell under the tyranny of Peisistratos and then of his sons Hippias and Hipparchos. However, in 510 BC, at the instigation of the Athenian aristocrat Cleisthenes, the Spartan king Cleomenes I helped the Athenians overthrow the tyranny. Afterwards, Sparta and Athens promptly turned on each other, at which point Cleomenes I installed Isagoras as a pro-Spartan archon. Eager to prevent Athens from becoming a Spartan puppet, Cleisthenes responded by proposing to his fellow citizens that Athens undergo a revolution: that all citizens share in political power, regardless of status: that Athens become a "democracy". So enthusiastically did the Athenians take to this idea that, having overthrown Isagoras and implemented Cleisthenes's reforms, they were easily able to repel a Spartan-led three-pronged invasion aimed at restoring Isagoras.[16] The advent of the democracy cured many of the ills of Athens and led to a 'golden age' for the Athenians.
In 499 BC, the Ionian city states under Persian rule rebelled against the Persian-supported tyrants that ruled them.[17] Supported by troops sent from Athens and Eretria, they advanced as far as Sardis and burnt the city down, before being driven back by a Persian counterattack.[18] The revolt continued until 494, when the rebelling Ionians were defeated.[19] Darius did not forget that the Athenians had assisted the Ionian revolt, however, and in 490 he assembled an armada to conquer Athens.[20] Despite being heavily outnumbered, the Athenians—supported by their Plataean allies—defeated the Persian forces at the Battle of Marathon, and the Persian fleet withdrew.[21]
Ten years later, a second invasion was launched by Darius' son Xerxes.[22] The city-states of northern and central Greece submitted to the Persian forces without resistance, but a coalition of 31 Greek city states, including Athens and Sparta, determined to resist the Persian invaders.[23] At the same time, Greek Sicily was invaded by a Carthaginian force.[24] In 480 BC, the first major battle of the invasion was fought at Thermopylae, where a small force of Greeks, led by three hundred Spartans, held a crucial pass into the heart of Greece for several days; at the same time Gelon, tyrant of Syracuse, defeated the Carthaginian invasion at the Battle of Himera.[25]
The Persians were defeated by a primarily Athenian naval force at the Battle of Salamis, and in 479 defeated on land at the Battle of Plataea.[26] The alliance against Persia continued, initially led by the Spartan Pausanias but from 477 by Athens,[27] and by 460 Persia had been driven out of the Aegean.[28] During this period of campaigning, the Delian league gradually transformed from a defensive alliance of Greek states into an Athenian empire, as Athens' growing naval power enabled it to compel other league states to comply with its policies.[29] Athens ended its campaigns against Persia in 450 BC, after a disastrous defeat in Egypt in 454 BC, and the death of Cimon in action against the Persians on Cyprus in 450.[30]
While Athenian activity against the Persian empire was ending, however, conflict between Sparta and Athens was increasing. Sparta was suspicious of the increasing Athenian power funded by the Delian League, and tensions rose when Sparta offered aid to reluctant members of the League to rebel against Athenian domination. These tensions were exacerbated in 462, when Athens sent a force to aid Sparta in overcoming a helot revolt, but their aid was rejected by the Spartans.[31] In the 450s, Athens took control of Boeotia, and won victories over Aegina and Corinth.[32] However, Athens failed to win a decisive victory, and in 447 lost Boeotia again.[33] Athens and Sparta signed the Thirty Years' Peace in the winter of 446/5, ending the conflict.[34]
Despite the peace of 446/5, Athenian relations with Sparta declined again in the 430s, and in 431 war broke out once again.[35] The first phase of the war is traditionally seen as a series of annual invasions of Attica by Sparta, which made little progress, while Athens was successful against the Corinthian empire in the north-west of Greece, and in defending its own empire, despite suffering from plague and Spartan invasion.[36] The turning point of this phase of the war is usually seen as the Athenian victories at Pylos and Sphakteria.[37] Sparta sued for peace, but the Athenians rejected the proposal.[38] The Athenian failure to regain control of Boeotia at Delium and Brasidas' successes in the north of Greece in 424 improved Sparta's position after Sphakteria.[39] After the deaths of Cleon and Brasidas, the strongest objectors to peace on the Athenian and Spartan sides respectively, a peace treaty was agreed in 421.[40]
The peace did not last, however. In 418 an alliance between Athens and Argos was defeated by Sparta at Mantinea.[41] In 415 Athens launched a naval expedition against Sicily;[42] the expedition ended in disaster with almost the entire army killed.[43] Soon after the Athenian defeat in Syracuse, Athens' Ionian allies began to rebel against the Delian league, while at the same time Persia began to once again involve itself in Greek affairs on the Spartan side.[44] Initially the Athenian position continued to be relatively strong, winning important battles such as those at Cyzicus in 410 and Arginusae in 406.[45] However, in 405 the Spartans defeated Athens in the Battle of Aegospotami, and began to blockade Athens' harbour;[46] with no grain supply and in danger of starvation, Athens sued for peace, agreeing to surrender their fleet and join the Spartan-led Peloponnesian League.[47]
Greece thus entered the 4th century BC under a Spartan hegemony, but it was clear from the start that this was weak. A demographic crisis meant Sparta was overstretched, and by 395 BC Athens, Argos, Thebes, and Corinth felt able to challenge Spartan dominance, resulting in the Corinthian War (395–387 BC). Another war of stalemates, it ended with the status quo restored, after the threat of Persian intervention on behalf of the Spartans.
The Spartan hegemony lasted another 16 years, until, when attempting to impose their will on the Thebans, the Spartans were defeated at Leuctra in 371 BC. The Theban general Epaminondas then led Theban troops into the Peloponnese, whereupon other city-states defected from the Spartan cause. The Thebans were thus able to march into Messenia and free the population.
Deprived of land and its serfs, Sparta declined to a second-rank power. The Theban hegemony thus established was short-lived; at the Battle of Mantinea in 362 BC, Thebes lost its key leader, Epaminondas, and much of its manpower, even though they were victorious in battle. In fact such were the losses to all the great city-states at Mantinea that none could establish dominance in the aftermath.
The weakened state of the heartland of Greece coincided with the Rise of Macedon, led by Philip II. In twenty years, Philip had unified his kingdom, expanded it north and west at the expense of Illyrian tribes, and then conquered Thessaly and Thrace. His success stemmed from his innovative reforms to the Macedonian army. Philip intervened repeatedly in the affairs of the southern city-states, culminating in his invasion of 338 BC.
Decisively defeating an allied army of Thebes and Athens at the Battle of Chaeronea (338 BC), he became de facto hegemon of all of Greece, except Sparta. He compelled the majority of the city-states to join the League of Corinth, allying them to him, and preventing them from warring with each other. Philip then entered into war against the Achaemenid Empire but was assassinated by Pausanias of Orestis early on in the conflict.
Alexander the Great, son and successor of Philip, continued the war. Alexander defeated Darius III of Persia and completely destroyed the Achaemenid Empire, annexing it to Macedon and earning himself the epithet 'the Great'. When Alexander died in 323 BC, Greek power and influence was at its zenith. However, there had been a fundamental shift away from the fierce independence and classical culture of the poleis—and instead towards the developing Hellenistic culture.
The Hellenistic period lasted from 323 BC, which marked the end of the wars of Alexander the Great, to the annexation of Greece by the Roman Republic in 146 BC. Although the establishment of Roman rule did not break the continuity of Hellenistic society and culture, which remained essentially unchanged until the advent of Christianity, it did mark the end of Greek political independence.
After the death of Alexander, his empire was, after quite some conflict, divided among his generals, resulting in the Ptolemaic Kingdom (Egypt and adjoining North Africa), the Seleucid Empire (the Levant, Mesopotamia and Persia) and the Antigonid dynasty (Macedonia). In the intervening period, the poleis of Greece were able to wrest back some of their freedom, although still nominally subject to the Macedonian Kingdom.
During the Hellenistic period, the importance of "Greece proper" (that is, the territory of modern Greece) within the Greek-speaking world declined sharply. The great centers of Hellenistic culture were Alexandria and Antioch, capitals of the Ptolemaic Kingdom and the Seleucid Empire, respectively.
The conquests of Alexander had numerous consequences for the Greek city-states. They greatly widened the horizons of the Greeks and led to a steady emigration, particularly of the young and ambitious, to the new Greek empires in the east.[48] Many Greeks migrated to Alexandria, Antioch and the many other new Hellenistic cities founded in Alexander's wake, as far away as what are now Afghanistan and Pakistan, where the Greco-Bactrian Kingdom and the Indo-Greek Kingdom survived until the end of the first century BC.
The city-states within Greece formed themselves into two leagues; the Achaean League (including Thebes, Corinth and Argos) and the Aetolian League (including Sparta and Athens). For much of the period until the Roman conquest, these leagues were usually at war with each other, and/or allied to different sides in the conflicts between the Diadochi (the successor states to Alexander's empire).
The Antigonid Kingdom became involved in a war with the Roman Republic in the late 3rd century. Although the First Macedonian War was inconclusive, the Romans, in typical fashion, continued to make war on Macedon until it was completely absorbed into the Roman Republic (by 149 BC). In the east the unwieldy Seleucid Empire gradually disintegrated, although a rump survived until 64 BC, whilst the Ptolemaic Kingdom continued in Egypt until 30 BC, when it too was conquered by the Romans. The Aetolian league grew wary of Roman involvement in Greece, and sided with the Seleucids in the Roman–Seleucid War; when the Romans were victorious, the league was effectively absorbed into the Republic. Although the Achaean league outlasted both the Aetolian league and Macedon, it was also soon defeated and absorbed by the Romans in 146 BC, bringing an end to the independence of all of Greece.
The Greek peninsula came under Roman rule during the 146 BC conquest of Greece after the Battle of Corinth. Macedonia became a Roman province while southern Greece came under the surveillance of Macedonia's prefect; however, some Greek poleis managed to maintain a partial independence and avoid taxation. The Aegean islands were added to this territory in 133 BC. Athens and other Greek cities revolted in 88 BC, and the peninsula was crushed by the Roman general Sulla. The Roman civil wars devastated the land even further, until Augustus organized the peninsula as the province of Achaea in 27 BC.
Greece was a key eastern province of the Roman Empire, as the Roman culture had long been in fact Greco-Roman. The Greek language served as a lingua franca in the East and in Italy, and many Greek intellectuals such as Galen would perform most of their work in Rome.
The territory of Greece is mountainous, and as a result, ancient Greece consisted of many smaller regions each with its own dialect, cultural peculiarities, and identity. Regionalism and regional conflicts were a prominent feature of ancient Greece. Cities tended to be located in valleys between mountains, or on coastal plains, and dominated a certain area around them.
In the south lay the Peloponnese, itself consisting of the regions of Laconia (southeast), Messenia (southwest), Elis (west), Achaia (north), Korinthia (northeast), Argolis (east), and Arcadia (center). These names survive to the present day as regional units of modern Greece, though with somewhat different boundaries. Mainland Greece to the north, nowadays known as Central Greece, consisted of Aetolia and Acarnania in the west, Locris, Doris, and Phocis in the center, while in the east lay Boeotia, Attica, and Megaris. Northeast lay Thessaly, while Epirus lay to the northwest. Epirus stretched from the Ambracian Gulf in the south to the Ceraunian mountains and the Aoos river in the north, and consisted of Chaonia (north), Molossia (center), and Thesprotia (south). In the northeast corner was Macedonia,[49] originally consisting of Lower Macedonia and its regions, such as Elimeia, Pieria, and Orestis. Around the time of Alexander I of Macedon, the Argead kings of Macedon started to expand into Upper Macedonia, lands inhabited by independent Macedonian tribes like the Lyncestae and the Elmiotae, and to the west, beyond the Axius river, into Eordaia, Bottiaea, Mygdonia, and Almopia, regions settled by Thracian tribes.[50] To the north of Macedonia lay various non-Greek peoples such as the Paeonians due north, the Thracians to the northeast, and the Illyrians, with whom the Macedonians were frequently in conflict, to the northwest. Chalcidice was settled early on by southern Greek colonists and was considered part of the Greek world, while from the late 2nd millennium BC substantial Greek settlement also occurred on the eastern shores of the Aegean, in Anatolia.
During the Archaic period, the population of Greece grew beyond the capacity of its limited arable land (according to one estimate, the population of ancient Greece increased by a factor larger than ten during the period from 800 BC to 400 BC, increasing from a population of 800,000 to a total estimated population of 10 to 13 million).[51]
From about 750 BC the Greeks began 250 years of expansion, settling colonies in all directions. To the east, the Aegean coast of Asia Minor was colonized first, followed by Cyprus and the coasts of Thrace, the Sea of Marmara and south coast of the Black Sea.
Eventually Greek colonization reached as far northeast as present day Ukraine and Russia (Taganrog). To the west the coasts of Illyria, Sicily and Southern Italy were settled, followed by Southern France, Corsica, and even northeastern Spain. Greek colonies were also founded in Egypt and Libya.
Modern Syracuse, Naples, Marseille and Istanbul had their beginnings as the Greek colonies Syracusae (Συράκουσαι), Neapolis (Νεάπολις), Massalia (Μασσαλία) and Byzantion (Βυζάντιον). These colonies played an important role in the spread of Greek influence throughout Europe and also aided in the establishment of long-distance trading networks between the Greek city-states, boosting the economy of ancient Greece.
Ancient Greece consisted of several hundred relatively independent city-states (poleis). This was a situation unlike that in most other contemporary societies, which were either tribal or kingdoms ruling over relatively large territories. Undoubtedly the geography of Greece—divided and sub-divided by hills, mountains, and rivers—contributed to the fragmentary nature of ancient Greece. On the one hand, the ancient Greeks had no doubt that they were "one people"; they had the same religion, same basic culture, and same language. Furthermore, the Greeks were very aware of their tribal origins; Herodotus was able to extensively categorise the city-states by tribe. Yet, although these higher-level relationships existed, they seem to have rarely had a major role in Greek politics. The independence of the poleis was fiercely defended; unification was something rarely contemplated by the ancient Greeks. Even when, during the second Persian invasion of Greece, a group of city-states allied themselves to defend Greece, the vast majority of poleis remained neutral, and after the Persian defeat, the allies quickly returned to infighting.[53]
Thus, the major peculiarities of the ancient Greek political system were its fragmentary nature (and that this does not particularly seem to have tribal origin), and the particular focus on urban centers within otherwise tiny states. The peculiarities of the Greek system are further evidenced by the colonies that they set up throughout the Mediterranean Sea, which, though they might count a certain Greek polis as their 'mother' (and remain sympathetic to her), were completely independent of the founding city.
Inevitably smaller poleis might be dominated by larger neighbors, but conquest or direct rule by another city-state appears to have been quite rare. Instead the poleis grouped themselves into leagues, membership of which was in a constant state of flux. Later in the Classical period, the leagues would become fewer and larger, be dominated by one city (particularly Athens, Sparta and Thebes); and often poleis would be compelled to join under threat of war (or as part of a peace treaty). Even after Philip II of Macedon "conquered" the heartlands of ancient Greece, he did not attempt to annex the territory, or unify it into a new province, but simply compelled most of the poleis to join his own Corinthian League.
Initially many Greek city-states seem to have been petty kingdoms; there was often a city official carrying some residual, ceremonial functions of the king (basileus), e.g., the archon basileus in Athens.[54] However, by the Archaic period and the first historical consciousness, most had already become aristocratic oligarchies. It is unclear exactly how this change occurred. For instance, in Athens, the kingship had been reduced to a hereditary, lifelong chief magistracy (archon) by c. 1050 BC; by 753 BC this had become a decennial, elected archonship; and finally by 683 BC an annually elected archonship. Through each stage more power would have been transferred to the aristocracy as a whole, and away from a single individual.
Inevitably, the domination of politics and concomitant aggregation of wealth by small groups of families was apt to cause social unrest in many poleis. In many cities a tyrant (not in the modern sense of a repressive autocrat) would at some point seize control and govern according to his own will; often a populist agenda would help sustain him in power. In a system wracked with class conflict, government by a 'strongman' was often the best solution.
Athens fell under a tyranny in the second half of the 6th century. When this tyranny was ended, the Athenians founded the world's first democracy as a radical solution to prevent the aristocracy regaining power. A citizens' assembly (the Ecclesia), for the discussion of city policy, had existed since the reforms of Draco in 621 BC; all citizens were permitted to attend after the reforms of Solon (early 6th century), but the poorest citizens could not address the assembly or run for office. With the establishment of the democracy, the assembly became the de jure mechanism of government; all citizens had equal privileges in the assembly. However, non-citizens, such as metics (foreigners living in Athens) or slaves, had no political rights at all.
After the rise of the democracy in Athens, other city-states founded democracies. However, many retained more traditional forms of government. As so often in other matters, Sparta was a notable exception to the rest of Greece, ruled through the whole period by not one, but two hereditary monarchs. This was a form of diarchy. The Kings of Sparta belonged to the Agiads and the Eurypontids, descendants respectively of Eurysthenes and Procles. Both dynasties' founders were believed to be twin sons of Aristodemus, a Heraclid ruler. However, the powers of these kings were held in check by both a council of elders (the Gerousia) and magistrates specifically appointed to watch over the kings (the Ephors).
Only free, land owning, native-born men could be citizens entitled to the full protection of the law in a city-state. In most city-states, unlike the situation in Rome, social prominence did not allow special rights. Sometimes families controlled public religious functions, but this ordinarily did not give any extra power in the government. In Athens, the population was divided into four social classes based on wealth. People could change classes if they made more money. In Sparta, all male citizens were called homoioi, meaning "peers". However, Spartan kings, who served as the city-state's dual military and religious leaders, came from two families.[citation needed]
Slaves had no power or status. They had the right to have a family and own property, subject to their master's goodwill and permission, but they had no political rights. By 600 BC chattel slavery had spread in Greece. By the 5th century BC slaves made up one-third of the total population in some city-states. Between forty and eighty per cent of the population of Classical Athens were slaves.[55] Slaves outside of Sparta almost never revolted because they were made up of too many nationalities and were too scattered to organize. However, unlike later Western culture, the Ancient Greeks did not think in terms of race.[56]
Most families owned slaves as household servants and laborers, and even poor families might have owned a few slaves. Owners were not allowed to beat or kill their slaves. Owners often promised to free slaves in the future to encourage slaves to work hard. Unlike in Rome, freedmen did not become citizens. Instead, they were mixed into the population of metics, which included people from foreign countries or other city-states who were officially allowed to live in the state.
City-states legally owned slaves. These public slaves had a larger measure of independence than slaves owned by families, living on their own and performing specialized tasks. In Athens, public slaves were trained to look out for counterfeit coinage, while temple slaves acted as servants of the temple's deity and Scythian slaves were employed in Athens as a police force corralling citizens to political functions.
Sparta had a special type of slaves called helots. Helots were Messenians enslaved by the state during the Messenian Wars and assigned to families, where they were forced to stay. Helots raised food and did household chores so that women could concentrate on raising strong children while men could devote their time to training as hoplites. Their masters treated them harshly, and helots revolted against their masters several times before finally winning their freedom in 370/369 BC.[57]
For most of Greek history, education was private, except in Sparta. During the Hellenistic period, some city-states established public schools. Only wealthy families could afford a teacher. Boys learned how to read, write and quote literature. They also learned to sing and play one musical instrument and were trained as athletes for military service. They studied not for a job but to become an effective citizen. Girls also learned to read, write and do simple arithmetic so they could manage the household. They almost never received education after childhood.[citation needed]
Boys went to school at the age of seven, or went to the barracks if they lived in Sparta. The three types of teachings were: grammatistes for arithmetic, kitharistes for music and dancing, and paedotribae for sports.
Boys from wealthy families attending the private school lessons were taken care of by a paidagogos, a household slave selected for this task who accompanied the boy during the day. Classes were held in teachers' private houses and included reading, writing, mathematics, singing, and playing the lyre and flute. When the boy became 12 years old the schooling started to include sports such as wrestling, running, and throwing discus and javelin. In Athens some older youths attended academy for the finer disciplines such as culture, sciences, music, and the arts. The schooling ended at age 18, followed by military training in the army usually for one or two years.[58]
Only a small number of boys continued their education after childhood, as in the Spartan agoge. A crucial part of a wealthy teenager's education was a mentorship with an elder, which in a few places and times may have included pederasty.[citation needed] The teenager learned by watching his mentor talking about politics in the agora, helping him perform his public duties, exercising with him in the gymnasium and attending symposia with him. The richest students continued their education by studying with famous teachers. Some of Athens' greatest such schools included the Lyceum (the so-called Peripatetic school founded by Aristotle of Stageira) and the Platonic Academy (founded by Plato of Athens). The education system of the wealthy ancient Greeks is also called Paideia.[citation needed]
At its economic height, in the 5th and 4th centuries BC, ancient Greece was the most advanced economy in the world. According to some economic historians, it was one of the most advanced pre-industrial economies. This is demonstrated by the average daily wage of the Greek worker which was, in terms of wheat, about 12 kg. This was more than 3 times the average daily wage of an Egyptian worker during the Roman period, about 3.75 kg.[59]
At least in the Archaic Period, the fragmentary nature of ancient Greece, with many competing city-states, increased the frequency of conflict but conversely limited the scale of warfare. Unable to maintain professional armies, the city-states relied on their own citizens to fight. This inevitably reduced the potential duration of campaigns, as citizens would need to return to their own professions (especially in the case of, for example, farmers). Campaigns would therefore often be restricted to summer. When battles occurred, they were usually set piece and intended to be decisive. Casualties were slight compared to later battles, rarely amounting to more than 5% of the losing side, but the slain often included the most prominent citizens and generals who led from the front.
The scale and scope of warfare in ancient Greece changed dramatically as a result of the Greco-Persian Wars. To fight the enormous armies of the Achaemenid Empire was effectively beyond the capabilities of a single city-state. The eventual triumph of the Greeks was achieved by alliances of city-states (the exact composition changing over time), allowing the pooling of resources and division of labor. Although alliances between city-states occurred before this time, nothing on this scale had been seen before. The rise of Athens and Sparta as pre-eminent powers during this conflict led directly to the Peloponnesian War, which saw further development of the nature of warfare, strategy and tactics. Fought between leagues of cities dominated by Athens and Sparta, the increased manpower and financial resources increased the scale, and allowed the diversification of warfare. Set-piece battles during the Peloponnesian war proved indecisive and instead there was increased reliance on attritionary strategies, naval battle and blockades and sieges. These changes greatly increased the number of casualties and the disruption of Greek society.
Athens owned one of the largest war fleets in ancient Greece. It had over 200 triremes, each powered by 170 oarsmen who were seated in 3 rows on each side of the ship. The city could afford such a large fleet, with its more than 34,000 oarsmen, because it owned numerous silver mines that were worked by slaves.
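The oarsmen figure follows directly from the quoted fleet size:

\[
200\ \text{triremes} \times 170\ \text{oarsmen per trireme} = 34{,}000\ \text{oarsmen}.
\]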
According to Josiah Ober, Greek city-states faced approximately a one-in-three chance of destruction during the archaic and classical period.[60]
Ancient Greek philosophy focused on the role of reason and inquiry. In many ways, it had an important influence on modern philosophy, as well as modern science. Clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and Islamic scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day.
Neither reason nor inquiry began with the Greeks. Defining the difference between the Greek quest for knowledge and the quests of the elder civilizations, such as the ancient Egyptians and Babylonians, has long been a topic of study by theorists of civilization.
Plato and Socrates are among the best-known philosophers of ancient Greece. Writings associated with them, such as Plato's Republic, have provided much of the surviving information about ancient Greek society.
The earliest Greek literature was poetry, and was composed for performance rather than private consumption.[61] The earliest Greek poet known is Homer, although he was certainly part of an existing tradition of oral poetry.[62] Homer's poetry, though it was developed around the same time that the Greeks developed writing, would have been composed orally; the first poet to certainly compose their work in writing was Archilochus, a lyric poet from the mid-seventh century BC.[63] Tragedy developed around the end of the archaic period, taking elements from across the pre-existing genres of late archaic poetry.[64] Towards the beginning of the classical period, comedy began to develop—the earliest date associated with the genre is 486 BC, when a competition for comedy became an official event at the City Dionysia in Athens, though the first preserved ancient comedy is Aristophanes' Acharnians, produced in 425 BC.[65]
Like poetry, Greek prose had its origins in the archaic period, and the earliest writers of Greek philosophy, history, and medical literature all date to the sixth century BC.[66] Prose first emerged as the writing style adopted by the presocratic philosophers Anaximander and Anaximenes—though Thales of Miletus, considered the first Greek philosopher, apparently wrote nothing.[67] Prose as a genre reached maturity in the classical era,[68] and the major Greek prose genres—philosophy, history, rhetoric, and dialogue—developed in this period.[69]
The Hellenistic period saw the literary centre of the Greek world move from Athens, where it had been in the classical period, to Alexandria. At the same time, other Hellenistic kings such as the Antigonids and the Attalids were patrons of scholarship and literature, turning Pella and Pergamon respectively into cultural centres.[70] This cultural patronage by Hellenistic kings, and especially the Museum at Alexandria, ensured that so much ancient Greek literature has survived.[71] The Library of Alexandria, part of the Museum, had the previously unenvisaged aim of collecting together copies of all known authors in Greek. Almost all of the surviving non-technical Hellenistic literature is poetry,[72] and Hellenistic poetry tended to be highly intellectual,[73] blending different genres and traditions, and avoiding linear narratives.[74] The Hellenistic period also saw a shift in the ways literature was consumed—while in the archaic and classical periods literature had typically been experienced in public performance, in the Hellenistic period it was more commonly read privately.[75] At the same time, Hellenistic poets began to write for private, rather than public, consumption.[76]
With Octavian's victory at Actium in 31 BC, Rome began to become a major centre of Greek literature, as important Greek authors such as Strabo and Dionysius of Halicarnassus came to Rome.[77] The period of greatest innovation in Greek literature under Rome was the "long second century" from approximately AD 80 to around AD 230.[78] This innovation was especially marked in prose, with the development of the novel and a revival of prominence for display oratory both dating to this period.[79]
Music was present almost universally in Greek society, from marriages and funerals to religious ceremonies, theatre, folk music and the ballad-like reciting of epic poetry. There are significant fragments of actual Greek musical notation as well as many literary references to ancient Greek music. Greek art depicts musical instruments and dance. The word music derives from the name of the Muses, the daughters of Zeus who were patron goddesses of the arts.
Ancient Greek mathematics contributed many important developments to the field of mathematics, including the basic rules of geometry, the idea of formal mathematical proof, and discoveries in number theory, mathematical analysis, and applied mathematics; Greek mathematicians also came close to establishing integral calculus. The discoveries of several Greek mathematicians, including Pythagoras, Euclid, and Archimedes, are still used in mathematical teaching today.
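As one concrete example of the Greek contribution to algorithmic reasoning, Euclid's Elements (Book VII) describes the procedure for finding the greatest common divisor of two numbers that is still taught today. A minimal sketch in modern notation (the Python rendering is an anachronistic illustration, not anything from the period):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) by (b, a mod b)
    until the remainder vanishes; the last non-zero value is the GCD."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 198))  # 18
```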
The Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis. In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system. Archimedes in his treatise The Sand Reckoner revives Aristarchus' hypothesis that "the fixed stars and the Sun remain unmoved, while the Earth revolves about the Sun on the circumference of a circle". Otherwise, only fragmentary descriptions of Aristarchus' idea survive.[80] Eratosthenes, using the angles of shadows created at widely separated regions, estimated the circumference of the Earth with great accuracy.[81] In the 2nd century BC Hipparchus of Nicea made a number of contributions, including the first measurement of precession and the compilation of the first star catalog in which he proposed the modern system of apparent magnitudes.
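A worked example of Eratosthenes' shadow-angle method, using the traditionally reported figures of roughly a 7.2° shadow angle at Alexandria when the Sun was overhead at Syene and a distance of about 5,000 stadia between the two cities (these specific numbers are illustrative and are not taken from the text above):

\[
C \approx \frac{360^{\circ}}{\Delta\theta}\, d = \frac{360^{\circ}}{7.2^{\circ}} \times 5{,}000\ \text{stadia} = 250{,}000\ \text{stadia}.
\]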
The Antikythera mechanism, a device for calculating the movements of planets, dates from about 80 BC, and was the first ancestor of the astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica.
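To illustrate the kind of astronomical ratio such a geared device had to approximate, the Metonic cycle, which the mechanism is generally believed to have modelled on one of its dials, equates 235 synodic months with 19 solar years. A quick check with modern mean values (the figures below are added assumptions, not from the text above):

```python
SYNODIC_MONTH = 29.530589  # mean length of a lunar (synodic) month, in days
TROPICAL_YEAR = 365.24219  # mean length of a solar (tropical) year, in days

months = 235 * SYNODIC_MONTH  # ~6939.69 days
years = 19 * TROPICAL_YEAR    # ~6939.60 days
print(months, years, abs(months - years))  # the two differ by only about two hours
```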
The ancient Greeks also made important discoveries in the medical field. Hippocrates was a physician of the Classical period, and is considered one of the most outstanding figures in the history of medicine. He is referred to as the "father of medicine"[82][83] in recognition of his lasting contributions to the field as the founder of the Hippocratic school of medicine. This intellectual school revolutionized medicine in ancient Greece, establishing it as a discipline distinct from other fields that it had traditionally been associated with (notably theurgy and philosophy), thus making medicine a profession.[84][85]
The art of ancient Greece has exercised an enormous influence on the culture of many countries from ancient times to the present day, particularly in the areas of sculpture and architecture. In the West, the art of the Roman Empire was largely derived from Greek models. In the East, Alexander the Great's conquests initiated several centuries of exchange between Greek, Central Asian and Indian cultures, resulting in Greco-Buddhist art, with ramifications as far as Japan. Following the Renaissance in Europe, the humanist aesthetic and the high technical standards of Greek art inspired generations of European artists. Well into the 19th century, the classical tradition derived from Greece dominated the art of the western world.
Religion was a central part of ancient Greek life.[86] Though the Greeks of different cities and tribes worshipped similar gods, religious practices were not uniform and the gods were thought of differently in different places.[87] The Greeks were polytheistic, worshipping many gods, but as early as the sixth century BC a pantheon of twelve Olympians began to develop.[87] Greek religion was influenced by the practices of the Greeks' near eastern neighbours at least as early as the archaic period, and by the Hellenistic period this influence was seen in both directions.[88]
The most important religious act in ancient Greece was animal sacrifice, most commonly of sheep and goats.[89] Sacrifice was accompanied by public prayer,[90] and prayer and hymns were themselves a major part of ancient Greek religious life.[91]
The civilization of ancient Greece has been immensely influential on language, politics, educational systems, philosophy, science, and the arts. It became the Leitkultur of the Roman Empire to the point of marginalizing native Italic traditions. As Horace put it,
Via the Roman Empire, Greek culture came to be foundational to Western culture in general.
The Byzantine Empire inherited Classical Greek culture directly, without Latin intermediation, and the preservation of classical Greek learning in medieval Byzantine tradition further exerted strong influence on the Slavs and later on the Islamic Golden Age and the Western European Renaissance. A modern revival of Classical Greek learning took place in the Neoclassicism movement in 18th- and 19th-century Europe and the Americas.
en/1156.html.txt
ADDED
@@ -0,0 +1,110 @@
A civilization (or civilisation) is any complex society characterized by urban development, social stratification, a form of government and symbolic systems of communication such as writing.[1][2][3][4][5][6][7][8]
Civilizations are intimately associated with and often further defined by other socio-politico-economic characteristics, including centralization, the domestication of both humans and other organisms, specialization of labour, culturally ingrained ideologies of progress and supremacism, monumental architecture, taxation, societal dependence upon farming and expansionism.[2][3][4][6][7][8] Historically, civilization has often been understood as a larger and "more advanced" culture, in contrast to smaller, supposedly primitive cultures.[1][3][4][9] In this broad sense, a civilization contrasts with non-centralized tribal societies, including the cultures of nomadic pastoralists, Neolithic societies or hunter-gatherers, but sometimes it also contrasts with the cultures found within civilizations themselves. Civilizations are organized in densely populated settlements divided into hierarchical social classes with a ruling elite and subordinate urban and rural populations, which engage in intensive agriculture, mining, small-scale manufacture and trade. Civilization concentrates power, extending human control over the rest of nature, including over other human beings.[10]
Civilization, as its etymology (below) suggests, is a concept originally linked to towns and cities. The earliest emergence of civilizations is generally associated with the final stages of the Neolithic Revolution, culminating in the relatively rapid process of urban revolution and state formation, a political development associated with the appearance of a governing elite.
The English word civilization comes from the 16th-century French civilisé ("civilized"), from Latin civilis ("civil"), related to civis ("citizen") and civitas ("city").[11] The fundamental treatise is Norbert Elias's The Civilizing Process (1939), which traces social mores from medieval courtly society to the Early Modern period.[12] In The Philosophy of Civilization (1923), Albert Schweitzer outlines two opinions: one purely material and the other material and ethical. He said that the world crisis was from humanity losing the ethical idea of civilization, "the sum total of all progress made by man in every sphere of action and from every point of view in so far as the progress helps towards the spiritual perfecting of individuals as the progress of all progress".[13]
Related words such as "civility" developed in the mid-16th century. The abstract noun "civilization", meaning "civilized condition", came in the 1760s, again from French. The first known use in French is in 1757, by Victor de Riqueti, marquis de Mirabeau, and the first use in English is attributed to Adam Ferguson, who in his 1767 Essay on the History of Civil Society wrote, "Not only the individual advances from infancy to manhood, but the species itself from rudeness to civilisation".[14] The word was therefore opposed to barbarism or rudeness, in the active pursuit of progress characteristic of the Age of Enlightenment.
In the late 1700s and early 1800s, during the French Revolution, "civilization" was used in the singular, never in the plural, and meant the progress of humanity as a whole. This is still the case in French.[15] The use of "civilizations" as a countable noun was in occasional use in the 19th century,[16] but has become much more common in the later 20th century, sometimes just meaning culture (itself in origin an uncountable noun, made countable in the context of ethnography).[17] Only in this generalized sense does it become possible to speak of a "medieval civilization", which in Elias's sense would have been an oxymoron.
Already in the 18th century, civilization was not always seen as an improvement. One historically important distinction between culture and civilization is from the writings of Rousseau, particularly his work about education, Emile. Here, civilization, being more rational and socially driven, is not fully in accord with human nature, and "human wholeness is achievable only through the recovery of or approximation to an original prediscursive or prerational natural unity" (see noble savage). From this, a new approach was developed, especially in Germany, first by Johann Gottfried Herder, and later by philosophers such as Kierkegaard and Nietzsche. This sees cultures as natural organisms, not defined by "conscious, rational, deliberative acts", but a kind of pre-rational "folk spirit". Civilization, in contrast, though more rational and more successful in material progress, is unnatural and leads to "vices of social life" such as guile, hypocrisy, envy and avarice.[15] In World War II, Leo Strauss, having fled Germany, argued in New York that this opinion of civilization was behind Nazism and German militarism and nihilism.[18]
Social scientists such as V. Gordon Childe have named a number of traits that distinguish a civilization from other kinds of society.[19] Civilizations have been distinguished by their means of subsistence, types of livelihood, settlement patterns, forms of government, social stratification, economic systems, literacy and other cultural traits. Andrew Nikiforuk argues that "civilizations relied on shackled human muscle. It took the energy of slaves to plant crops, clothe emperors, and build cities" and considers slavery to be a common feature of pre-modern civilizations.[20]
All civilizations have depended on agriculture for subsistence, with the possible exception of some early civilizations in Peru which may have depended upon maritime resources.[21][22] Grain farms can result in accumulated storage and a surplus of food, particularly when people use intensive agricultural techniques such as artificial fertilization, irrigation and crop rotation. It is possible but more difficult to accumulate horticultural production, and so civilizations based on horticultural gardening have been very rare.[23] Grain surpluses have been especially important because grain can be stored for a long time. A surplus of food permits some people to do things besides produce food for a living: early civilizations included soldiers, artisans, priests and priestesses, and other people with specialized careers. A surplus of food results in a division of labour and a more diverse range of human activity, a defining trait of civilizations. However, in some places hunter-gatherers have had access to food surpluses, such as among some of the indigenous peoples of the Pacific Northwest and perhaps during the Mesolithic Natufian culture. It is possible that food surpluses and relatively large scale social organization and division of labour predates plant and animal domestication.[24]
Civilizations have distinctly different settlement patterns from other societies. The word "civilization" is sometimes simply defined as "'living in cities'".[25] Non-farmers tend to gather in cities to work and to trade.
Compared with other societies, civilizations have a more complex political structure, namely the state.[26] State societies are more stratified[27] than other societies; there is a greater difference among the social classes. The ruling class, normally concentrated in the cities, has control over much of the surplus and exercises its will through the actions of a government or bureaucracy. Morton Fried, a conflict theorist, and Elman Service, an integration theorist, have classified human cultures based on political systems and social inequality. This system of classification contains four categories.[28]
Economically, civilizations display more complex patterns of ownership and exchange than less organized societies. Living in one place allows people to accumulate more personal possessions than nomadic people. Some people also acquire landed property, or private ownership of the land. Because a percentage of people in civilizations do not grow their own food, they must trade their goods and services for food in a market system, or receive food through the levy of tribute, redistributive taxation, tariffs or tithes from the food producing segment of the population. Early human cultures functioned through a gift economy supplemented by limited barter systems. By the early Iron Age, contemporary civilizations developed money as a medium of exchange for increasingly complex transactions. In a village, the potter makes a pot for the brewer and the brewer compensates the potter by giving him a certain amount of beer. In a city, the potter may need a new roof, the roofer may need new shoes, the cobbler may need new horseshoes, the blacksmith may need a new coat and the tanner may need a new pot. These people may not be personally acquainted with one another and their needs may not occur all at the same time. A monetary system is a way of organizing these obligations to ensure that they are fulfilled. From the days of the earliest monetarized civilizations, monopolistic controls of monetary systems have benefited the social and political elites.
Writing, developed first by people in Sumer, is considered a hallmark of civilization and "appears to accompany the rise of complex administrative bureaucracies or the conquest state".[31] Traders and bureaucrats relied on writing to keep accurate records. Like money, writing was necessitated by the size of the population of a city and the complexity of its commerce among people who are not all personally acquainted with each other. However, writing is not always necessary for civilization, as shown by the Inca civilization of the Andes, which did not use writing at all but instead kept records with a complex system of knotted cords, the quipus, and still functioned as a civilized society.
Aided by their division of labour and central government planning, civilizations have developed many other diverse cultural traits. These include organized religion, development in the arts, and countless new advances in science and technology.
Through history, successful civilizations have spread, taking over more and more territory, and assimilating more and more previously-uncivilized people. Nevertheless, some tribes or people remain uncivilized even to this day. These cultures are called by some "primitive", a term that is regarded by others as pejorative. "Primitive" implies in some way that a culture is "first" (Latin = primus), that it has not changed since the dawn of humanity, though this has been demonstrated not to be true. Specifically, as all of today's cultures are contemporaries, today's so-called primitive cultures are in no way antecedent to those we consider civilized. Anthropologists today use the term "non-literate" to describe these peoples.
Civilization has been spread by colonization, invasion, religious conversion, the extension of bureaucratic control and trade, and by introducing agriculture and writing to non-literate peoples. Some non-civilized people may willingly adapt to civilized behaviour. But civilization is also spread by the technical, material and social dominance that civilization engenders.
Assessments of what level of civilization a polity has reached are based on comparisons of the relative importance of agricultural as opposed to trade or manufacturing capacities, the territorial extensions of its power, the complexity of its division of labour, and the carrying capacity of its urban centres. Secondary elements include a developed transportation system, writing, standardized measurement, currency, contractual and tort-based legal systems, art, architecture, mathematics, scientific understanding, metallurgy, political structures and organized religion.
Traditionally, polities that managed to achieve notable military, ideological and economic power defined themselves as "civilized" as opposed to other societies or human groupings outside their sphere of influence – calling the latter barbarians, savages, and primitives.
"Civilization" can also refer to the culture of a complex society, not just the society itself. Every society, civilization or not, has a specific set of ideas and customs, and a certain set of manufactures and arts that make it unique. Civilizations tend to develop intricate cultures, including a state-based decision making apparatus, a literature, professional art, architecture, organized religion and complex customs of education, coercion and control associated with maintaining the elite.
The intricate culture associated with civilization has a tendency to spread to and influence other cultures, sometimes assimilating them into the civilization (a classic example being Chinese civilization and its influence on nearby civilizations such as Korea, Japan and Vietnam). Many civilizations are actually large cultural spheres containing many nations and regions. The civilization in which someone lives is that person's broadest cultural identity.
Many historians have focused on these broad cultural spheres and have treated civilizations as discrete units. The early twentieth-century philosopher Oswald Spengler[32] used the German word Kultur, "culture", for what many call a "civilization". Spengler believed a civilization's coherence is based on a single primary cultural symbol. Cultures experience cycles of birth, life, decline and death, often supplanted by a potent new culture, formed around a compelling new cultural symbol. Spengler regarded civilization as the beginning of the decline of a culture, "the most external and artificial states of which a species of developed humanity is capable".[32]
This "unified culture" concept of civilization also influenced the theories of historian Arnold J. Toynbee in the mid-twentieth century. Toynbee explored civilization processes in his multi-volume A Study of History, which traced the rise and, in most cases, the decline of 21 civilizations and five "arrested civilizations". Civilizations generally declined and fell, according to Toynbee, because of the failure of a "creative minority", through moral or religious decline, to meet some important challenge, rather than mere economic or environmental causes.
Samuel P. Huntington defines civilization as "the highest cultural grouping of people and the broadest level of cultural identity people have short of that which distinguishes humans from other species". Huntington's theories about civilizations are discussed below.[33]
Another group of theorists, making use of systems theory, looks at a civilization as a complex system, i.e., a framework by which a group of objects can be analysed that work in concert to produce some result. Civilizations can be seen as networks of cities that emerge from pre-urban cultures and are defined by the economic, political, military, diplomatic, social and cultural interactions among them. Any organization is a complex social system and a civilization is a large organization. Systems theory helps guard against superficial and misleading analogies in the study and description of civilizations.
Systems theorists look at many types of relations between cities, including economic relations, cultural exchanges and political/diplomatic/military relations. These spheres often occur on different scales. For example, trade networks were, until the nineteenth century, much larger than either cultural spheres or political spheres. Extensive trade routes, including the Silk Road through Central Asia and Indian Ocean sea routes linking the Roman Empire, Persian Empire, India and China, were well established 2000 years ago, when these civilizations scarcely shared any political, diplomatic, military, or cultural relations. The first evidence of such long-distance trade is in the ancient world. During the Uruk period, Guillermo Algaze has argued that trade relations connected Egypt, Mesopotamia, Iran and Afghanistan.[34] Resin found later in the Royal Cemetery at Ur is suggested to have been traded northwards from Mozambique.
Many theorists argue that the entire world has already become integrated into a single "world system", a process known as globalization. Different civilizations and societies all over the globe are economically, politically, and even culturally interdependent in many ways. There is debate over when this integration began, and what sort of integration – cultural, technological, economic, political, or military-diplomatic – is the key indicator in determining the extent of a civilization. David Wilkinson has proposed that economic and military-diplomatic integration of the Mesopotamian and Egyptian civilizations resulted in the creation of what he calls the "Central Civilization" around 1500 BCE.[35] Central Civilization later expanded to include the entire Middle East and Europe, and then expanded to a global scale with European colonization, integrating the Americas, Australia, China and Japan by the nineteenth century. According to Wilkinson, civilizations can be culturally heterogeneous, like the Central Civilization, or homogeneous, like the Japanese civilization. What Huntington calls the "clash of civilizations" might be characterized by Wilkinson as a clash of cultural spheres within a single global civilization. Others point to the Crusades as the first step in globalization. The more conventional viewpoint is that networks of societies have expanded and shrunk since ancient times, and that the current globalized economy and culture is a product of recent European colonialism.[citation needed]
The notion of world history as a succession of "civilizations" is an entirely modern one.
In the European Age of Discovery, emerging Modernity was put into stark contrast with the
Neolithic and Mesolithic stage of the cultures of the New World, suggesting
that the complex states had emerged at some time in prehistory.[36]
The term "civilization" as it is now most commonly understood, a complex state with centralization, social stratification and specialization of labour, corresponds to the early empires that arose in the Fertile Crescent in the Early Bronze Age, around 3000 BC.
Gordon Childe defined the emergence of civilization as the result of two successive revolutions: the Neolithic Revolution, triggering the development of settled communities, and the Urban Revolution.
At first, the Neolithic was associated with shifting subsistence cultivation, where continuous farming led to the depletion of soil fertility resulting in the requirement to cultivate fields further and further removed from the settlement, eventually compelling the settlement itself to move. In major semi-arid river valleys, annual flooding renewed soil fertility every year, with the result that population densities could rise significantly.
This encouraged a secondary products revolution in which people used domesticated animals not just for meat, but also for milk, wool, manure and pulling ploughs and carts – a development that spread through the Eurasian Oecumene.[definition needed]
The earlier neolithic technology and lifestyle was established first in Western Asia (for example at Göbekli Tepe, from about 9,130 BCE), and later in the Yellow River and Yangtze basins in China (for example the Pengtoushan culture from 7,500 BCE), and later spread.
Mesopotamia is the site of the earliest developments of the Neolithic Revolution from around 10,000 BCE, with civilizations developing from 6,500 years ago. This area has been identified as having "inspired some of the most important developments in human history including the invention of the wheel, the planting of the first cereal crops and the development of cursive script."[37]
Similar pre-civilized "neolithic revolutions" also began independently from 7,000 BCE in northwestern South America (the Norte Chico civilization)[38] and Mesoamerica.[39]
The 8.2 Kiloyear Arid Event and the 5.9 Kiloyear Interpluvial saw the drying out of semiarid regions and a major spread of deserts.[40] This climate change shifted the cost-benefit ratio of endemic violence between communities, which saw the abandonment of unwalled village communities and the appearance of walled cities, associated with the first civilizations.
This "urban revolution" marked the beginning of the accumulation of transferable surpluses, which helped economies and cities develop. It was associated with the state monopoly of violence, the appearance of a soldier class and endemic warfare, the rapid development of hierarchies, and the appearance of human sacrifice.[41]
The civilized urban revolution in turn was dependent upon the development of sedentism, the domestication of grains and animals and development of lifestyles that facilitated economies of scale and accumulation of surplus production by certain social sectors. The transition from complex cultures to civilizations, while still disputed, seems to be associated with the development of state structures, in which power was further monopolized by an elite ruling class[42] who practiced human sacrifice.[43]
Towards the end of the Neolithic period, various elitist Chalcolithic civilizations began to rise in various "cradles" from around 3300 BCE, expanding into large-scale empires in the course of the Bronze Age (Old Kingdom of Egypt, Akkadian Empire, Assyrian Empire, Old Assyrian Empire, Hittite Empire).
A parallel development took place independently in the Pre-Columbian Americas, where the Maya began to urbanize around 500 BCE, and the fully fledged Aztec and Inca empires emerged by the 15th century, shortly before European contact.
The Bronze Age collapse was followed by the Iron Age around 1200 BCE, during which a number of new civilizations emerged, culminating in a period from the 8th to the 3rd century BCE which Karl Jaspers termed the Axial Age, presented as a critical transitional phase leading to classical civilization.[44]
William Hardy McNeill proposed that this period of history was one in which cultural contact between previously separate civilizations saw the "closure of the oecumene" and led to accelerated social change from China to the Mediterranean, associated with the spread of coinage, larger empires and new religions. This view has recently been championed by Christopher Chase-Dunn and other world systems theorists.
A major technological and cultural transition to modernity began approximately 1500 CE in Western Europe, and from this beginning new approaches to science and law spread rapidly around the world, incorporating earlier cultures into the industrial and technological civilization of the present.[43][45]
Civilizations are traditionally understood as ending in one of two ways: either through incorporation into another expanding civilization (e.g. as Ancient Egypt was incorporated into Hellenistic Greek, and subsequently Roman, civilization), or by collapsing and reverting to a simpler form of living, as happens in so-called Dark Ages.[46]
There have been many explanations put forward for the collapse of civilization. Some focus on historical examples, and others on general theory.
Political scientist Samuel Huntington has argued that the defining characteristic of the 21st century will be a clash of civilizations.[55] According to Huntington, conflicts between civilizations will supplant the conflicts between nation-states and ideologies that characterized the 19th and 20th centuries. These views have been strongly challenged by others like Edward Said, Muhammed Asadi and Amartya Sen.[56] Ronald Inglehart and Pippa Norris have argued that the "true clash of civilizations" between the Muslim world and the West is caused by the Muslim rejection of the West's more liberal sexual values, rather than a difference in political ideology, although they note that this lack of tolerance is likely to lead to an eventual rejection of (true) democracy.[57] In Identity and Violence, Sen questions whether people should be divided along the lines of a supposed "civilization", defined by religion and culture only. He argues that this ignores the many other identities that make up people and leads to a focus on differences.
Cultural Historian Morris Berman suggests in Dark Ages America: the End of Empire that in the corporate consumerist United States, the very factors that once propelled it to greatness―extreme individualism, territorial and economic expansion, and the pursuit of material wealth―have pushed the United States across a critical threshold where collapse is inevitable. Politically associated with over-reach, and as a result of the environmental exhaustion and polarization of wealth between rich and poor, he concludes the current system is fast arriving at a situation where continuation of the existing system saddled with huge deficits and a hollowed-out economy is physically, socially, economically and politically impossible.[58] Although developed in much more depth, Berman's thesis is similar in some ways to that of Urban Planner, Jane Jacobs who argues that the five pillars of United States culture are in serious decay: community and family; higher education; the effective practice of science; taxation and government; and the self-regulation of the learned professions. The corrosion of these pillars, Jacobs argues, is linked to societal ills such as environmental crisis, racism and the growing gulf between rich and poor.[59]
Cultural critic and author Derrick Jensen argues that modern civilization is directed towards the domination of the environment and humanity itself in an intrinsically harmful, unsustainable, and self-destructive fashion.[60] Defending his definition both linguistically and historically, he defines civilization as "a culture... that both leads to and emerges from the growth of cities", with "cities" defined as "people living more or less permanently in one place in densities high enough to require the routine importation of food and other necessities of life".[61] This need for civilizations to import ever more resources, he argues, stems from their over-exploitation and diminution of their own local resources. Therefore, civilizations inherently adopt imperialist and expansionist policies and, to maintain these, highly militarized, hierarchically structured, and coercion-based cultures and lifestyles.
The Kardashev scale classifies civilizations based on their level of technological advancement, specifically measured by the amount of energy a civilization is able to harness. The scale is only hypothetical, but it puts energy consumption in a cosmic perspective. The Kardashev scale makes provisions for civilizations far more technologically advanced than any currently known to exist.
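A minimal sketch of how such an energy-based rating can be computed, using Carl Sagan's continuous interpolation of the Kardashev scale (the formula and the example wattage below are added assumptions, not figures from the text above):

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Sagan's interpolation: roughly Type I at 1e16 W, Type II at 1e26 W, Type III at 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Present-day humanity, at very roughly 2e13 W of primary power use,
# scores about 0.73 on this continuous version of the scale.
print(round(kardashev_rating(2e13), 2))  # 0.73
print(round(kardashev_rating(1e16), 2))  # 1.0
```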
The pyramids of Giza are among the most recognizable symbols of the civilization of ancient Egypt.
The Acropolis in Greece, directly influencing architecture and engineering in Western, Islamic and Eastern civilizations up to the present day, 2400 years after construction
The Persepolis in Iran: pictures of the Gate of All Nations, the main entrance for all representatives of other nations and states. Persepolis, dating from 515 BC, appears to have been a grand ceremonial complex that was especially used for celebrating Nowruz, the Persian New Year.
The Temples of Baalbek in Lebanon show us the religious and architectural styles of some of the world's most influential civilizations including the Phoenicians, Babylonians, Persians, Greeks, Romans, Byzantines and Arabs
The Roman Forum in Rome, Italy, the political, economic, cultural and religious center of the Ancient Rome civilization, during the Republic and later Empire, its ruins still visible today in modern-day Rome
While the Great Wall of China was built to protect Ancient Chinese states and empires against the raids and invasions of nomadic groups, over thousands of years the region of China was also home to many influential civilizations
Virupaksha Temple at Hampi in India. The region of India is home and center to major religions such as Hinduism, Buddhism, Jainism and Sikhism and has influenced other cultures and civilizations, particularly in Asia.
The current scientific consensus is that human beings are the only animal species with the cognitive ability to create civilizations. A recent thought experiment, however, has considered whether it would "be possible to detect an industrial civilization in the geological record" given the paucity of geological information about eras before the quaternary.[62]
en/1157.html.txt
ADDED
@@ -0,0 +1,184 @@
The Inca Empire (Quechua: Tawantinsuyu, lit. "The Four Regions"[4]), also known as the Incan Empire and the Inka Empire, was the largest empire in pre-Columbian America.[5] The administrative, political and military center of the empire was located in the city of Cusco. The Inca civilization arose from the Peruvian highlands sometime in the early 13th century. Its last stronghold was conquered by the Spanish in 1572.
From 1438 to 1533, the Incas incorporated a large portion of western South America, centered on the Andean Mountains, using conquest and peaceful assimilation, among other methods. At its largest, the empire joined Peru, western Ecuador, western and south central Bolivia, northwest Argentina, a large portion of what is today Chile, and the southwesternmost tip of Colombia into a state comparable to the historical empires of Eurasia. Its official language was Quechua.[6] Many local forms of worship persisted in the empire, most of them concerning local sacred Huacas, but the Inca leadership encouraged the sun worship of Inti – their sun god – and imposed its sovereignty above other cults such as that of Pachamama.[7] The Incas considered their king, the Sapa Inca, to be the "son of the sun."[8]
The Inca Empire was unusual in that it lacked many features associated with civilization in the Old World. Anthropologist Gordon McEwan wrote that:[9]
The Incas lacked the use of wheeled vehicles. They lacked animals to ride and draft animals that could pull wagons and plows... [They] lacked the knowledge of iron and steel... Above all, they lacked a system of writing... Despite these supposed handicaps, the Incas were still able to construct one of the greatest imperial states in human history.
Notable features of the Inca Empire include its monumental architecture, especially stonework, extensive road network reaching all corners of the empire, finely-woven textiles, use of knotted strings (quipu) for record keeping and communication, agricultural innovations in a difficult environment, and the organization and management fostered or imposed on its people and their labor.
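As a hedged illustration of the record-keeping device mentioned here, the usual scholarly reading of numerical quipus is decimal-positional: each cluster of knots on a pendant cord encodes one digit, with higher powers of ten closer to the main cord. The sketch below is a deliberate simplification for illustration, not a reconstruction of any particular quipu:

```python
def quipu_knot_counts(value: int) -> list[int]:
    """Split a number into decimal digits, most significant first, mimicking the
    decimal-positional interpretation of knot clusters on a quipu cord."""
    return [int(digit) for digit in str(value)]

print(quipu_knot_counts(1532))  # [1, 5, 3, 2]: 1 knot, then 5, 3 and 2 moving down the cord
```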
The Incan economy has been described in contradictory ways by scholars:[10]
... feudal, slave, socialist (here one may choose between socialist paradise or socialist tyranny)
The Inca Empire functioned largely without money and without markets. Instead, exchange of goods and services was based on reciprocity between individuals and among individuals, groups, and Inca rulers. "Taxes" consisted of a labour obligation of a person to the Empire. The Inca rulers (who theoretically owned all the means of production) reciprocated by granting access to land and goods and providing food and drink in celebratory feasts for their subjects.[11]
The Inca referred to their empire as Tawantinsuyu,[4] "the four suyu". In Quechua, tawa is four and -ntin is a suffix naming a group, so that a tawantin is a quartet, a group of four things taken together, in this case the four suyu ("regions" or "provinces") whose corners met at the capital. The four suyu were: Chinchaysuyu (north), Antisuyu (east; the Amazon jungle), Qullasuyu (south) and Kuntisuyu (west). The name Tawantinsuyu was, therefore, a descriptive term indicating a union of provinces. The Spanish transliterated the name as Tahuatinsuyo or Tahuatinsuyu.
The term Inka means "ruler" or "lord" in Quechua and was used to refer to the ruling class or the ruling family.[12] The Incas were a very small percentage of the total population of the empire, probably numbering only 15,000 to 40,000, but ruling a population of around 10 million people.[13] The Spanish adopted the term (transliterated as Inca in Spanish) as an ethnic term referring to all subjects of the empire rather than simply the ruling class. As such, the name Imperio inca ("Inca Empire") referred to the nation that they encountered and subsequently conquered.
The Inca Empire was the last chapter of thousands of years of Andean civilizations. The Andean civilization was one of five civilizations in the world deemed by scholars to be "pristine", that is indigenous and not derivative from other civilizations.[14]
The Inca Empire was preceded by two large-scale empires in the Andes: the Tiwanaku (c. 300–1100 AD), based around Lake Titicaca and the Wari or Huari (c. 600–1100 AD) centered near the city of Ayacucho. The Wari occupied the Cuzco area for about 400 years. Thus, many of the characteristics of the Inca Empire derived from earlier multi-ethnic and expansive Andean cultures.[15]
Carl Troll has argued that the development of the Inca state in the central Andes was aided by conditions that allow for the elaboration of the staple food chuño. Chuño, which can be stored for long periods, is made of potato dried at the freezing temperatures that are common at nighttime in the southern Peruvian highlands. Such a link between the Inca state and chuño may be questioned, as potatoes and other crops such as maize can also be dried with only sunlight.[16] Troll also argued that llamas, the Incas' pack animal, are found in their largest numbers in this very same region.[16] It is worth considering that the maximum extent of the Inca Empire roughly coincided with the greatest distribution of llamas and alpacas in Pre-Hispanic America.[17] The link between the Andean biomes of puna and páramo, pastoralism and the Inca state is a matter of research.[18] As a third point, Troll pointed to irrigation technology as advantageous to Inca state-building.[18] While Troll theorized environmental influences on the Inca Empire, he opposed environmental determinism, arguing that culture lay at the core of the Inca civilization.[18]
The Inca people were a pastoral tribe in the Cusco area around the 12th century. Incan oral history tells an origin story of three caves. The center cave at Tampu T'uqu (Tambo Tocco) was named Qhapaq T'uqu ("principal niche", also spelled Capac Tocco). The other caves were Maras T'uqu (Maras Tocco) and Sutiq T'uqu (Sutic Tocco).[19] Four brothers and four sisters stepped out of the middle cave. They were: Ayar Manco, Ayar Cachi, Ayar Awqa (Ayar Auca) and Ayar Uchu; and Mama Ocllo, Mama Raua, Mama Huaco and Mama Qura (Mama Cora). Out of the side caves came the people who were to be the ancestors of all the Inca clans.
Ayar Manco carried a magic staff made of the finest gold. Where this staff landed, the people would live. They traveled for a long time. On the way, Ayar Cachi boasted about his strength and power. His siblings tricked him into returning to the cave to get a sacred llama. When he went into the cave, they trapped him inside to get rid of him.
Ayar Uchu decided to stay on the top of the cave to look over the Inca people. The minute he proclaimed that, he turned to stone. They built a shrine around the stone and it became a sacred object. Ayar Auca grew tired of all this and decided to travel alone. Only Ayar Manco and his four sisters remained.
Finally, they reached Cusco. The staff sank into the ground. Before they arrived, Mama Ocllo had already borne Ayar Manco a child, Sinchi Roca. The people who were already living in Cusco fought hard to keep their land, but Mama Huaca was a good fighter. When the enemy attacked, she threw her bolas (several stones tied together that spun through the air when thrown) at a soldier (gualla) and killed him instantly. The other people became afraid and ran away.
After that, Ayar Manco became known as Manco Cápac, the founder of the Inca. It is said that he and his sisters built the first Inca homes in the valley with their own hands. When the time came, Manco Cápac turned to stone like his brothers before him. His son, Sinchi Roca, became the second emperor of the Inca.[20]
Under the leadership of Manco Cápac, the Inca formed the small city-state Kingdom of Cusco (Quechua Qusqu', Qosqo). In 1438, they began a far-reaching expansion under the command of Sapa Inca (paramount leader) Pachacuti-Cusi Yupanqui, whose name literally meant "earth-shaker". The name of Pachacuti was given to him after he conquered the Tribe of Chancas (modern Apurímac). During his reign, he and his son Tupac Yupanqui brought much of the modern-day territory of Peru under Inca control.[21]
Pachacuti reorganized the kingdom of Cusco into the Tahuantinsuyu, which consisted of a central government with the Inca at its head and four provincial governments with strong leaders: Chinchasuyu (NW), Antisuyu (NE), Kuntisuyu (SW) and Qullasuyu (SE).[22] Pachacuti is thought to have built Machu Picchu, either as a family home or summer retreat, although it may have been an agricultural station.[23]
Pachacuti sent spies to regions he wanted in his empire and they brought to him reports on political organization, military strength and wealth. He then sent messages to their leaders extolling the benefits of joining his empire, offering them presents of luxury goods such as high quality textiles and promising that they would be materially richer as his subjects.
Most accepted the rule of the Inca as a fait accompli and acquiesced peacefully. Refusal to accept Inca rule resulted in military conquest. Following conquest the local rulers were executed. The ruler's children were brought to Cusco to learn about Inca administration systems, then return to rule their native lands. This allowed the Inca to indoctrinate them into the Inca nobility and, with luck, marry their daughters into families at various corners of the empire.
Traditionally the son of the Inca ruler led the army. Pachacuti's son Túpac Inca Yupanqui began conquests to the north in 1463 and continued them as Inca ruler after Pachacuti's death in 1471. Túpac Inca's most important conquest was the Kingdom of Chimor, the Inca's only serious rival for the Peruvian coast. Túpac Inca's empire then stretched north into modern-day Ecuador and Colombia.
Túpac Inca's son Huayna Cápac added a small portion of land to the north in modern-day Ecuador. At its height, the Inca Empire included Peru, western and south central Bolivia, southwest Ecuador and a large portion of what is today Chile, north of the Maule River. Traditional historiography claims the advance south halted after the Battle of the Maule where they met determined resistance from the Mapuche.[24] This view is challenged by historian Osvaldo Silva who argues instead that it was the social and political framework of the Mapuche that posed the main difficulty in imposing imperial rule.[24] Silva does accept that the battle of the Maule was a stalemate, but argues the Incas lacked incentives for conquest they had had when fighting more complex societies such as the Chimú Empire.[24] Silva also disputes the date given by traditional historiography for the battle: the late 15th century during the reign of Topa Inca Yupanqui (1471–93).[24] Instead, he places it in 1532 during the Inca Civil War.[24] Nevertheless, Silva agrees on the claim that the bulk of the Incan conquests were made during the late 15th century.[24] At the time of the Incan Civil War an Inca army was, according to Diego de Rosales, subduing a revolt among the Diaguitas of Copiapó and Coquimbo.[24]
The empire's push into the Amazon Basin near the Chinchipe River was stopped by the Shuar in 1527.[25] The empire extended into corners of Argentina and Colombia. However, most of the southern portion of the Inca empire, the portion denominated as Qullasuyu, was located in the Altiplano.
The Inca Empire was an amalgamation of languages, cultures and peoples. The components of the empire were not all uniformly loyal, nor were the local cultures all fully integrated. The Inca empire as a whole had an economy based on exchange and taxation of luxury goods and labour. The following quote describes a method of taxation:
For as is well known to all, not a single village of the highlands or the plains failed to pay the tribute levied on it by those who were in charge of these matters. There were even provinces where, when the natives alleged that they were unable to pay their tribute, the Inca ordered that each inhabitant should be obliged to turn in every four months a large quill full of live lice, which was the Inca's way of teaching and accustoming them to pay tribute.[26]
Spanish conquistadors led by Francisco Pizarro and his brothers explored south from what is today Panama, reaching Inca territory by 1526.[27] It was clear that they had reached a wealthy land with prospects of great treasure, and after another expedition in 1529 Pizarro traveled to Spain and received royal approval to conquer the region and be its viceroy. This approval was received as detailed in the following quote: "In July 1529 the Queen of Spain signed a charter allowing Pizarro to conquer the Incas. Pizarro was named governor and captain of all conquests in Peru, or New Castile, as the Spanish now called the land."[28]
When the conquistadors returned to Peru in 1532, a war of succession between the sons of Sapa Inca Huayna Capac, Huáscar and Atahualpa, and unrest among newly conquered territories weakened the empire. Perhaps more importantly, smallpox, influenza, typhus and measles had spread from Central America.
The forces led by Pizarro consisted of 168 men, one cannon, and 27 horses. The conquistadors carried lances, arquebuses, steel armor and long swords. In contrast, the Inca used weapons made out of wood, stone, copper and bronze, and wore armor made of alpaca fiber, putting them at a significant technological disadvantage: none of their weapons could pierce the Spanish steel armor. In addition, due to the absence of horses in the Americas, the Inca did not develop tactics to fight cavalry. However, the Inca were still effective warriors, able to successfully fight the Mapuche, who would later strategically defeat the Spanish as they expanded further south.
The first engagement between the Inca and the Spanish was the Battle of Puná, near present-day Guayaquil, Ecuador, on the Pacific Coast; Pizarro then founded the city of Piura in July 1532. Hernando de Soto was sent inland to explore the interior and returned with an invitation to meet the Inca, Atahualpa, who had defeated his brother in the civil war and was resting at Cajamarca with his army of 80,000 troops, who were at the time armed only with hunting tools (knives and lassos for hunting llamas).
Pizarro and some of his men, most notably a friar named Vincente de Valverde, met with the Inca, who had brought only a small retinue. The Inca offered them ceremonial chicha in a golden cup, which the Spanish rejected. The Spanish interpreter, Friar Vincente, read the "Requerimiento" that demanded that he and his empire accept the rule of King Charles I of Spain and convert to Christianity. Atahualpa dismissed the message and asked them to leave. After this, the Spanish began their attack against the mostly unarmed Inca, captured Atahualpa as hostage, and forced the Inca to collaborate.
Atahualpa offered the Spaniards enough gold to fill the room he was imprisoned in and twice that amount of silver. The Inca fulfilled this ransom, but Pizarro deceived them, refusing to release the Inca afterwards. During Atahualpa's imprisonment Huáscar was assassinated elsewhere. The Spaniards maintained that this was at Atahualpa's orders; this was used as one of the charges against Atahualpa when the Spaniards finally executed him, in August 1533.[29]
|
68 |
+
|
69 |
+
Although "defeat" often implies an unwanted loss in battle, much of the Inca elite "actually welcomed the Spanish invaders as liberators and willingly settled down with them to share rule of Andean farmers and miners."[30]
|
70 |
+
|
71 |
+
The Spanish installed Atahualpa's brother Manco Inca Yupanqui in power; for some time Manco cooperated with the Spanish while they fought to put down resistance in the north. Meanwhile, an associate of Pizarro, Diego de Almagro, attempted to claim Cusco. Manco tried to use this intra-Spanish feud to his advantage, recapturing Cusco in 1536, but the Spanish retook the city afterwards. Manco Inca then retreated to the mountains of Vilcabamba and established the small Neo-Inca State, where he and his successors ruled for another 36 years, sometimes raiding the Spanish or inciting revolts against them. In 1572 the last Inca stronghold was conquered and the last ruler, Túpac Amaru, Manco's son, was captured and executed.[31] This ended resistance to the Spanish conquest under the political authority of the Inca state.
|
72 |
+
|
73 |
+
After the fall of the Inca Empire many aspects of Inca culture were systematically destroyed, including their sophisticated farming system, known as the vertical archipelago model of agriculture.[32] Spanish colonial officials used the Inca mita corvée labor system for colonial aims, sometimes brutally. One member of each family was forced to work in the gold and silver mines, the foremost of which was the titanic silver mine at Potosí. When a family member died, which would usually happen within a year or two, the family was required to send a replacement.[citation needed]
|
74 |
+
|
75 |
+
The effects of smallpox on the Inca empire were even more devastating. Beginning in Colombia, smallpox spread rapidly before the Spanish invaders first arrived in the empire. The spread was probably aided by the efficient Inca road system. Smallpox was only the first epidemic.[33] Other diseases, including a probable typhus outbreak in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, and measles in 1618, all ravaged the Inca people.
|
76 |
+
|
77 |
+
The number of people inhabiting Tawantinsuyu at its peak is uncertain, with estimates ranging from 4–37 million. Most population estimates are in the range of 6 to 14 million. In spite of the fact that the Inca kept excellent census records using their quipus, knowledge of how to read them was lost as almost all fell into disuse and disintegrated over time or were destroyed by the Spaniards.[34]
|
78 |
+
|
79 |
+
The empire was extremely linguistically diverse. Some of the most important languages were Quechua, Aymara, Puquina and Mochica, respectively mainly spoken in the Central Andes, the Altiplano or (Qullasuyu), the south Peruvian coast (Kuntisuyu), and the area of the north Peruvian coast (Chinchaysuyu) around Chan Chan, today Trujillo. Other languages included Quignam, Jaqaru, Leco, Uru-Chipaya languages, Kunza, Humahuaca, Cacán, Mapudungun, Culle, Chachapoya, Catacao languages, Manta, and Barbacoan languages, as well as numerous Amazonian languages on the frontier regions. The exact linguistic topography of the pre-Columbian and early colonial Andes remains incompletely understood, owing to the extinction of several languages and the loss of historical records.
|
80 |
+
|
81 |
+
In order to manage this diversity, the Inca lords promoted the usage of Quechua, especially the variety of modern-day Lima,[35] as the Qhapaq Runasimi ("great language of the people"), or the official language/lingua franca. Defined by mutual intelligibility, Quechua is actually a family of languages rather than one single language, parallel to the Romance or Slavic languages in Europe. Most communities within the empire, even those resistant to Inca rule, learned to speak a variety of Quechua (forming new regional varieties with distinct phonetics) in order to communicate with the Inca lords and mitma colonists, as well as the wider integrating society, but largely retained their native languages as well. The Incas also had their own ethnic language, referred to as Qhapaq simi ("royal language"), which is thought to have been closely related to or a dialect of Puquina, which appears to have been the official language of the former Tiwanaku Empire, from which the Incas claimed descent, making Qhapaq simi a source of prestige for them. The split between Qhapaq simi and Qhapaq Runasimi also exemplifies the larger split between hanan and hurin (high and low) society in general.
|
82 |
+
|
83 |
+
There are several common misconceptions about the history of Quechua, as it is frequently identified as the "Inca language". Quechua did not originate with the Incas, had been a lingua franca in multiple areas before the Inca expansions, was diverse before the rise of the Incas, and it was not the native or original language of the Incas. In addition, the main official language of the Inca Empire was the coastal Quechua variety, native to modern Lima, not the Cusco dialect. The pre-Inca Chincha Kingdom, with whom the Incas struck an alliance, had made this variety into a local prestige language by their extensive trading activities. The Peruvian coast was also the most populous and economically active region of the Inca Empire, and employing coastal Quechua offered an alternative to neighboring Mochica, the language of the rival state of Chimu. Trade had also been spreading Quechua northwards before the Inca expansions, towards Cajamarca and Ecuador, and was likely the official language of the older Wari Empire. However, the Incas have left an impressive linguistic legacy, in that they introduced Quechua to many areas where it is still widely spoken today, including Ecuador, southern Bolivia, southern Colombia, and parts of the Amazon basin. The Spanish conquerors continued the official usage of Quechua during the early colonial period, and transformed it into a literary language.[36]
|
84 |
+
|
85 |
+
The Incas were not known to develop a written form of language; however, they visually recorded narratives through paintings on vases and cups (qirus).[37] These paintings are usually accompanied by geometric patterns known as toqapu, which are also found in textiles. Researchers have speculated that toqapu patterns could have served as a form of written communication (e.g.: heraldry, or glyphs), however this remains unclear.[38] The Incas also kept records by using quipus.
|
86 |
+
|
87 |
+
Because of the high infant mortality rates that plagued the Inca Empire, all newborn infants were referred to simply as 'wawa'. Most families did not invest much in their child until he or she reached the age of two or three. Once the child reached the age of three, a "coming of age" ceremony, called the rutuchikuy, took place. For the Incas, this ceremony indicated that the child had entered the stage of "ignorance". During the ceremony, the family would invite all relatives to their house for food and dance, and each member of the family would receive a lock of hair from the child. After each family member had received a lock, the father would shave the child's head. This stage of life was characterized by "ignorance, inexperience, and lack of reason, a condition that the child would overcome with time."[39] In Incan society, in order to advance from the stage of ignorance to development, the child had to learn the roles associated with their gender.
|
88 |
+
|
89 |
+
The next important ritual was to celebrate the maturity of a child. Unlike the coming of age ceremony, the celebration of maturity signified the child's sexual potency. This celebration of puberty was called warachikuy for boys and qikuchikuy for girls. The warachikuy ceremony included dancing, fasting, tasks to display strength, and family ceremonies. The boy would also be given new clothes and taught how to act as an unmarried man. The qikuchikuy signified the onset of menstruation, upon which the girl would go into the forest alone and return only once the bleeding had ended. In the forest she would fast, and, once returned, the girl would be given a new name, adult clothing, and advice. This "folly" stage of life was the time young adults were allowed to have sex without being a parent.[39]
|
90 |
+
|
91 |
+
Between the ages of 20 and 30, people were considered young adults, "ripe for serious thought and labor."[39] Young adults were able to retain their youthful status by living at home and assisting in their home community. Young adults only reached full maturity and independence once they had married.
|
92 |
+
|
93 |
+
At the end of life, the terms for men and women denote loss of sexual vitality and humanity. Specifically, the "decrepitude" stage signifies the loss of mental well-being and further physical decline.
|
94 |
+
|
95 |
+
In the Incan Empire, the age of marriage differed for men and women: men typically married at the age of 20, while women usually got married about four years earlier at the age of 16.[40] Men who were highly ranked in society could have multiple wives, but those lower in the ranks could only take a single wife.[41] Marriages were typically within classes and resembled a more business-like agreement. Once married, the women were expected to cook, collect food and watch over the children and livestock.[40] Girls and mothers would also work around the house to keep it orderly to please the public inspectors.[42] These duties remained the same even after wives became pregnant and with the added responsibility of praying and making offerings to Kanopa, who was the god of pregnancy.[40] It was typical for marriages to begin on a trial basis with both men and women having a say in the longevity of the marriage. If the man felt that it wouldn't work out or if the woman wanted to return to her parents’ home the marriage would end. Once the marriage was final, the only way the two could be divorced was if they did not have a child together.[40] Marriage within the Empire was crucial for survival. A family was considered disadvantaged if there was not a married couple at the center because everyday life centered around the balance of male and female tasks.[43]
|
96 |
+
|
97 |
+
According to some historians, such as Terence N. D'Altroy, male and female roles were considered equal in Inca society. The "indigenous cultures saw the two genders as complementary parts of a whole."[43] In other words, there was not a hierarchical structure in the domestic sphere for the Incas. Within the domestic sphere, women were known as the weavers. Women's everyday tasks included: spinning, watching the children, weaving cloth, cooking, brewing chicha, preparing fields for cultivation, planting seeds, bearing children, harvesting, weeding, hoeing, herding, and carrying water.[44] Men, on the other hand, "weeded, plowed, participated in combat, helped in the harvest, carried firewood, built houses, herded llama and alpaca, and spun and wove when necessary".[44] This relationship between the genders may have been complementary. Unsurprisingly, onlooking Spaniards believed women were treated like slaves, because women did not work in Spanish society to the same extent, and certainly did not work in fields.[45] Women were sometimes allowed to own land and herds because inheritance was passed down from both the mother's and father's side of the family.[46] Kinship within Inca society followed a parallel line of descent. In other words, women descended from women and men descended from men. Due to the parallel descent, a woman had access to land and other necessities through her mother.[44]
|
98 |
+
|
99 |
+
Inca myths were transmitted orally until early Spanish colonists recorded them; however, some scholars claim that they were recorded on quipus, Andean knotted string records.[47]
|
100 |
+
|
101 |
+
The Inca believed in reincarnation.[48] After death, the passage to the next world was fraught with difficulties. The spirit of the dead, camaquen, would need to follow a long road and during the trip the assistance of a black dog that could see in the dark was required. Most Incas imagined the after world to be like an earthly paradise with flower-covered fields and snow-capped mountains.
|
102 |
+
|
103 |
+
It was important to the Inca that they not die as a result of burning or that the body of the deceased not be incinerated. Burning would cause their vital force to disappear and threaten their passage to the after world. Those who obeyed the Inca moral code – ama suwa, ama llulla, ama quella (do not steal, do not lie, do not be lazy) – "went to live in the Sun's warmth while others spent their eternal days in the cold earth".[49] The Inca nobility practiced cranial deformation.[50] They wrapped tight cloth straps around the heads of newborns to shape their soft skulls into a more conical form, thus distinguishing the nobility from other social classes.
|
104 |
+
|
105 |
+
The Incas made human sacrifices. As many as 4,000 servants, court officials, favorites and concubines were killed upon the death of the Inca Huayna Capac in 1527.[51] The Incas performed child sacrifices around important events, such as the death of the Sapa Inca or during a famine. These sacrifices were known as qhapaq hucha.[52]
|
106 |
+
|
107 |
+
The Incas were polytheists who worshipped many gods. These included:
|
108 |
+
|
109 |
+
The Inca Empire employed central planning. The Inca Empire traded with outside regions, although they did not operate a substantial internal market economy. While axe-monies were used along the northern coast, presumably by the provincial mindaláe trading class,[53] most households in the empire lived in a traditional economy in which households were required to pay taxes, usually in the form of the mit'a corvée labor, and military obligations,[54] though barter (or trueque) was present in some areas.[55] In return, the state provided security, food in times of hardship through the supply of emergency resources, agricultural projects (e.g. aqueducts and terraces) to increase productivity, and occasional feasts. While mit'a was used by the state to obtain labor, individual villages had a pre-Inca system of communal work, known as mink'a. This system survives to the modern day, known as mink'a or faena. The economy rested on the material foundations of the vertical archipelago, a system of ecological complementarity in accessing resources,[56] and the cultural foundation of ayni, or reciprocal exchange.[57][58]
|
110 |
+
|
111 |
+
The Sapa Inca was conceptualized as divine and was effectively head of the state religion. The Willaq Umu (or Chief Priest) was second to the emperor. Local religious traditions continued and in some cases such as the Oracle at Pachacamac on the Peruvian coast, were officially venerated. Following Pachacuti, the Sapa Inca claimed descent from Inti, who placed a high value on imperial blood; by the end of the empire, it was common to incestuously wed brother and sister. He was "son of the sun," and his people the intip churin, or "children of the sun," and both his right to rule and mission to conquer derived from his holy ancestor. The Sapa Inca also presided over ideologically important festivals, notably during the Inti Raymi, or "Sunfest" attended by soldiers, mummified rulers, nobles, clerics and the general population of Cusco beginning on the June solstice and culminating nine days later with the ritual breaking of the earth using a foot plow by the Inca. Moreover, Cusco was considered cosmologically central, loaded as it was with huacas and radiating ceque lines and geographic center of the Four-Quarters; Inca Garcilaso de la Vega called it "the navel of the universe".[59][60][61][62]
|
112 |
+
|
113 |
+
The Inca Empire was a federalist system consisting of a central government with the Inca at its head and four-quarters, or suyu: Chinchay Suyu (NW), Anti Suyu (NE), Kunti Suyu (SW) and Qulla Suyu (SE). The four corners of these quarters met at the center, Cusco. These suyu were likely created around 1460 during the reign of Pachacuti before the empire reached its largest territorial extent. At the time the suyu were established they were roughly of equal size and only later changed their proportions as the empire expanded north and south along the Andes.[63]
|
114 |
+
|
115 |
+
Cusco was likely not organized as a wamani, or province. Rather, it was probably somewhat akin to a modern federal district, like Washington, DC or Mexico City. The city sat at the center of the four suyu and served as the preeminent center of politics and religion. While Cusco was essentially governed by the Sapa Inca, his relatives and the royal panaqa lineages, each suyu was governed by an Apu, a term of esteem used for men of high status and for venerated mountains. Both Cusco as a district and the four suyu as administrative regions were grouped into upper hanan and lower hurin divisions. As the Inca did not have written records, it is impossible to exhaustively list the constituent wamani. However, colonial records allow us to reconstruct a partial list. There were likely more than 86 wamani, with more than 48 in the highlands and more than 38 on the coast.[64][65][66]
|
116 |
+
|
117 |
+
The most populous suyu was Chinchaysuyu, which encompassed the former Chimu empire and much of the northern Andes. At its largest extent, it extended through much of modern Ecuador and into modern Colombia.
|
118 |
+
|
119 |
+
The largest suyu by area was Qullasuyu, named after the Aymara-speaking Qulla people. It encompassed the Bolivian Altiplano and much of the southern Andes, reaching Argentina and as far south as the Maipo or Maule river in Central Chile.[67] Historian José Bengoa singled out Quillota as likely being the foremost Inca settlement in Chile.[68]
|
120 |
+
|
121 |
+
The second smallest suyu, Antisuyu, was northeast of Cusco in the high Andes. Its name is the root of the word "Andes."[69]
|
122 |
+
|
123 |
+
Kuntisuyu was the smallest suyu, located along the southern coast of modern Peru, extending into the highlands towards Cusco.[70]
|
124 |
+
|
125 |
+
The Inca state had no separate judiciary or codified laws. Customs, expectations and traditional local power holders governed behavior. The state had legal force, such as through tokoyrikoq (lit. "he who sees all"), or inspectors. The highest such inspector, typically a blood relative to the Sapa Inca, acted independently of the conventional hierarchy, providing a point of view for the Sapa Inca free of bureaucratic influence.[71]
|
126 |
+
|
127 |
+
The Inca had three moral precepts that governed their behavior: ama suwa, ama llulla, ama quella (do not steal, do not lie, do not be lazy).
|
128 |
+
|
129 |
+
Colonial sources are not entirely clear or in agreement about Inca government structure, such as exact duties and functions of government positions. But the basic structure can be broadly described. The top was the Sapa Inca. Below that may have been the Willaq Umu, literally the "priest who recounts", the High Priest of the Sun.[72] However, beneath the Sapa Inca also sat the Inkap rantin, who was a confidant and assistant to the Sapa Inca, perhaps similar to a Prime Minister.[73] Starting with Topa Inca Yupanqui, a "Council of the Realm" was composed of 16 nobles: 2 from hanan Cusco; 2 from hurin Cusco; 4 from Chinchaysuyu; 2 from Cuntisuyu; 4 from Collasuyu; and 2 from Antisuyu. This weighting of representation balanced the hanan and hurin divisions of the empire, both within Cusco and within the Quarters (hanan suyukuna and hurin suyukuna).[74]
|
130 |
+
|
131 |
+
While provincial bureaucracy and government varied greatly, the basic organization was decimal. Taxpayers – male heads of household of a certain age range – were organized into corvée labor units (often doubling as military units) that formed the state's muscle as part of mit'a service. Each unit of more than 100 taxpayers was headed by a kuraka, while smaller units were headed by a kamayuq, a lower, non-hereditary status. However, while kuraka status was hereditary and typically held for life, a kuraka's position in the hierarchy was subject to change based on the privileges of superiors in the hierarchy; a pachaka kuraka could be appointed to the position by a waranqa kuraka. Furthermore, one kuraka in each decimal level could serve as the head of one of the nine groups at a lower level, so that a pachaka kuraka might also be a waranqa kuraka, in effect directly responsible for one unit of 100 taxpayers and less directly responsible for nine other such units.[75][76][77]
|
132 |
+
|
133 |
+
Francisco Pizarro
|
134 |
+
|
135 |
+
Architecture was the most important of the Incan arts, with textiles reflecting architectural motifs. The most notable example is Machu Picchu, which was constructed by Inca engineers. The prime Inca structures were made of stone blocks that fit together so well that a knife could not be fitted through the stonework. These constructs have survived for centuries, with no use of mortar to sustain them.
|
136 |
+
|
137 |
+
This process was first used on a large scale by the Pucara (c. 300 BC–AD 300) peoples to the south in Lake Titicaca and later in the city of Tiwanaku (c. AD 400–1100) in present-day Bolivia. The rocks were sculpted to fit together exactly by repeatedly lowering a rock onto another and carving away any sections on the lower rock where the dust was compressed. The tight fit and the concavity on the lower rocks made them extraordinarily stable, despite the ongoing challenge of earthquakes and volcanic activity.
|
138 |
+
|
139 |
+
Physical measures used by the Inca were based on human body parts. Units included fingers, the distance from thumb to forefinger, palms, cubits and wingspans. The most basic distance unit was thatkiy or thatki, or one pace. The next largest unit was reported by Cobo to be the topo or tupu, measuring 6,000 thatkiys, or about 7.7 km (4.8 mi); careful study has shown that a range of 4.0 to 6.3 km (2.5 to 3.9 mi) is likely. Next was the wamani, composed of 30 topos (roughly 232 km or 144 mi). To measure area, 25 by 50 wingspans were used, reckoned in topos (roughly 3,280 km2 or 1,270 sq mi). It seems likely that distance was often interpreted as one day's walk; the distance between tambo way-stations varies widely, but far less in terms of the time needed to walk it.[80][81]
|
140 |
+
|
141 |
+
Inca calendars were strongly tied to astronomy. Inca astronomers understood equinoxes, solstices and zenith passages, along with the Venus cycle. They could not, however, predict eclipses. The Inca calendar was essentially lunisolar, as two calendars were maintained in parallel, one solar and one lunar. As 12 lunar months fall 11 days short of a full 365-day solar year, those in charge of the calendar had to adjust every winter solstice. Each lunar month was marked with festivals and rituals.[82] Apparently, the days of the week were not named and days were not grouped into weeks. Similarly, months were not grouped into seasons. Time during a day was not measured in hours or minutes, but in terms of how far the sun had travelled or in how long it had taken to perform a task.[83]
|
142 |
+
|
143 |
+
The sophistication of Inca administration, calendrics and engineering required facility with numbers. Numerical information was stored in the knots of quipu strings, allowing for compact storage of large numbers.[84][85] These numbers were stored in base-10 digits, the same base used by the Quechua language[86] and in administrative and military units.[76] These numbers, stored in quipu, could be calculated on yupanas, grids with squares of positionally varying mathematical values, perhaps functioning as an abacus.[87] Calculation was facilitated by moving piles of tokens, seeds or pebbles between compartments of the yupana. It is likely that Inca mathematics at least allowed division of integers into integers or fractions and multiplication of integers and fractions.[88]
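As a purely illustrative sketch of this decimal place-value idea (modern C with invented values; not a reconstruction of an actual quipu or yupana), a number can be split into base-10 digits, one digit per knot cluster from the highest place value down to the units:

    #include <stdio.h>

    int main(void)
    {
        int value = 4507;           /* arbitrary example value               */
        int digits[10];             /* enough for any int written in base 10 */
        int count = 0;

        while (value > 0) {         /* peel off decimal digits, units first  */
            digits[count++] = value % 10;
            value /= 10;
        }
        for (int i = count - 1; i >= 0; i--)
            printf("place 10^%d: %d knots\n", i, digits[i]);
        return 0;
    }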
|
144 |
+
|
145 |
+
According to mid-17th-century Jesuit chronicler Bernabé Cobo,[89] the Inca designated officials to perform accounting-related tasks. These officials were called quipo camayos. Study of khipu sample VA 42527 (Museum für Völkerkunde, Berlin)[90] revealed that the numbers arranged in calendrically significant patterns were used for agricultural purposes in the "farm account books" kept by the khipukamayuq (accountant or warehouse keeper) to facilitate the closing of accounting books.[91]
|
146 |
+
|
147 |
+
Ceramics were painted using the polychrome technique portraying numerous motifs including animals, birds, waves, felines (popular in the Chavin culture) and geometric patterns found in the Nazca style of ceramics. In a culture without a written language, ceramics portrayed the basic scenes of everyday life, including the smelting of metals, relationships and scenes of tribal warfare. The most distinctive Inca ceramic objects are the Cusco bottles or "aryballos".[92] Many of these pieces are on display in Lima in the Larco Archaeological Museum and the National Museum of Archaeology, Anthropology and History.
|
148 |
+
|
149 |
+
Almost all of the gold and silver work of the Incan empire was melted down by the conquistadors, and shipped back to Spain.[93]
|
150 |
+
|
151 |
+
The Inca recorded information on assemblages of knotted strings, known as Quipu, although they can no longer be decoded. Originally it was thought that Quipu were used only as mnemonic devices or to record numerical data. Quipus are also believed to record history and literature.[94]
|
152 |
+
|
153 |
+
The Inca made many discoveries in medicine.[95] They performed successful skull surgery by cutting holes in the skull to alleviate fluid buildup and inflammation caused by head wounds. Survival rates for these surgeries were 80–90%, compared to about 30% before Inca times.[96]
|
154 |
+
|
155 |
+
The Incas revered the coca plant as sacred/magical. Its leaves were used in moderate amounts to lessen hunger and pain during work, but were mostly used for religious and health purposes.[97] The Spaniards took advantage of the effects of chewing coca leaves.[97] The Chasqui, messengers who ran throughout the empire to deliver messages, chewed coca leaves for extra energy. Coca leaves were also used as an anaesthetic during surgeries.
|
156 |
+
|
157 |
+
The Inca army was the most powerful in the region at that time, because any ordinary villager or farmer could be recruited as a soldier as part of the mit'a system of mandatory public service. Every able-bodied male Inca of fighting age had to take part in war in some capacity at least once and to prepare for warfare again when needed. By the time the empire reached its largest size, every section of the empire contributed to setting up an army for war.
|
158 |
+
|
159 |
+
The Incas had no iron or steel and their weapons were not much more effective than those of their opponents so they often defeated opponents by sheer force of numbers, or else by persuading them to surrender beforehand by offering generous terms.[98] Inca weaponry included "hardwood spears launched using throwers, arrows, javelins, slings, the bolas, clubs, and maces with star-shaped heads made of copper or bronze."[98][99] Rolling rocks downhill onto the enemy was a common strategy, taking advantage of the hilly terrain.[100] Fighting was sometimes accompanied by drums and trumpets made of wood, shell or bone.[101][102] Armor included:[98][103]
|
160 |
+
|
161 |
+
Roads allowed quick movement (on foot) for the Inca army and shelters called tambo and storage silos called qullqas were built one day's travelling distance from each other, so that an army on campaign could always be fed and rested. This can be seen in names of ruins such as Ollantay Tambo, or My Lord's Storehouse. These were set up so the Inca and his entourage would always have supplies (and possibly shelter) ready as they traveled.
|
162 |
+
|
163 |
+
Chronicles and references from the 16th and 17th centuries support the idea of a banner. However, it represented the Inca (emperor), not the empire.
|
164 |
+
|
165 |
+
Francisco López de Jerez[106] wrote in 1534:
|
166 |
+
|
167 |
+
... todos venían repartidos en sus escuadras con sus banderas y capitanes que los mandan, con tanto concierto como turcos.(... all of them came distributed into squads, with their flags and captains commanding them, as well-ordered as Turks.)
|
168 |
+
|
169 |
+
Chronicler Bernabé Cobo wrote:
|
170 |
+
|
171 |
+
The royal standard or banner was a small square flag, ten or twelve spans around, made of cotton or wool cloth, placed on the end of a long staff, stretched and stiff such that it did not wave in the air and on it each king painted his arms and emblems, for each one chose different ones, though the sign of the Incas was the rainbow and two parallel snakes along the width with the tassel as a crown, which each king used to add for a badge or blazon those preferred, like a lion, an eagle and other figures.
|
172 |
+
(... el guión o estandarte real era una banderilla cuadrada y pequeña, de diez o doce palmos de ruedo, hecha de lienzo de algodón o de lana, iba puesta en el remate de una asta larga, tendida y tiesa, sin que ondease al aire, y en ella pintaba cada rey sus armas y divisas, porque cada uno las escogía diferentes, aunque las generales de los Incas eran el arco celeste y dos culebras tendidas a lo largo paralelas con la borda que le servía de corona, a las cuales solía añadir por divisa y blasón cada rey las que le parecía, como un león, un águila y otras figuras.)-Bernabé Cobo, Historia del Nuevo Mundo (1653)
|
173 |
+
|
174 |
+
Guaman Poma's 1615 book, El primer nueva corónica y buen gobierno, shows numerous line drawings of Inca flags.[107] In his 1847 book A History of the Conquest of Peru, "William H. Prescott ... says that in the Inca army each company had its particular banner and that the imperial standard, high above all, displayed the glittering device of the rainbow, the armorial ensign of the Incas."[108] A 1917 world flags book says the Inca "heir-apparent ... was entitled to display the royal standard of the rainbow in his military campaigns."[109]
|
175 |
+
|
176 |
+
In modern times the rainbow flag has been wrongly associated with the Tawantinsuyu and displayed as a symbol of Inca heritage by some groups in Peru and Bolivia. The city of Cusco also flies the Rainbow Flag, but as an official flag of the city. The Peruvian president Alejandro Toledo (2001–2006) flew the Rainbow Flag in Lima's presidential palace. However, according to Peruvian historiography, the Inca Empire never had a flag. Peruvian historian María Rostworowski said, "I bet my life, the Inca never had that flag, it never existed, no chronicler mentioned it".[110] Also, according to the Peruvian newspaper El Comercio, the flag dates to the first decades of the 20th century,[111] and even the Congress of the Republic of Peru has determined that the flag is a fake, citing the conclusion of the National Academy of Peruvian History:
|
177 |
+
|
178 |
+
"The official use of the wrongly called 'Tawantinsuyu flag' is a mistake. In the Pre-Hispanic Andean World there did not exist the concept of a flag, it did not belong to their historic context".[111]
|
179 |
+
National Academy of Peruvian History
|
180 |
+
|
181 |
+
Incas were able to adapt to their high-altitude living through successful acclimatization, which is characterized by increasing oxygen supply to the blood tissues. For the native Inca living in the Andean highlands, this was achieved through the development of a larger lung capacity, and an increase in red blood cell counts, hemoglobin concentration, and capillary beds.[112]
|
182 |
+
|
183 |
+
Compared to other humans, the Incas had slower heart rates, almost one-third larger lung capacity, about 2 L (4 pints) more blood volume and double the amount of hemoglobin, which transfers oxygen from the lungs to the rest of the body. While the Conquistadors may have been slightly taller, the Inca had the advantage of coping with the extraordinary altitude.
|
184 |
+
|
en/1158.html.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
en/1159.html.txt
ADDED
@@ -0,0 +1,231 @@
1 |
+
|
2 |
+
|
3 |
+
C (/siː/, as in the letter c) is a general-purpose, procedural computer programming language supporting structured programming, lexical variable scope, and recursion, with a static type system. By design, C provides constructs that map efficiently to typical machine instructions. It has found lasting use in applications previously coded in assembly language. Such applications include operating systems and various application software for computer architectures that range from supercomputers to PLCs and embedded systems.
|
4 |
+
|
5 |
+
A successor to the programming language B, C was originally developed at Bell Labs by Dennis Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system.[5] During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages,[6][7] with C compilers from various vendors available for the majority of existing computer architectures and operating systems. C has been standardized by the American National Standards Institute (ANSI) since 1989 (ANSI C) and by the International Organization for Standardization (ISO).
|
6 |
+
|
7 |
+
C is an imperative procedural language. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code.
|
8 |
+
|
9 |
+
Like most procedural languages in the ALGOL tradition, C has facilities for structured programming and allows lexical variable scope and recursion. Its static type system prevents unintended operations. In C, all executable code is contained within subroutines (also called "functions", though not strictly in the sense of functional programming). Function parameters are always passed by value. Pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements.
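A minimal sketch of that distinction (the function and variable names are illustrative):

    #include <stdio.h>

    void increment_copy(int n)      { n = n + 1; }   /* modifies only a local copy        */
    void increment_via_ptr(int *n)  { *n = *n + 1; } /* modifies the caller's variable    */

    int main(void)
    {
        int x = 1;
        increment_copy(x);       /* x is still 1: the argument was passed by value        */
        increment_via_ptr(&x);   /* x becomes 2: the pointer simulates pass-by-reference  */
        printf("%d\n", x);
        return 0;
    }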
|
10 |
+
|
11 |
+
The C language also exhibits the following characteristics:
|
12 |
+
|
13 |
+
While C does not include certain features found in other languages (such as object orientation and garbage collection), these can be implemented or emulated, often through the use of external libraries (e.g., the GLib Object System or the Boehm garbage collector).
|
14 |
+
|
15 |
+
Many later languages have borrowed directly or indirectly from C, including C++, C#, Unix's C shell, D, Go, Java, JavaScript (including transpilers), Limbo, LPC, Objective-C, Perl, PHP, Python, Rust, Swift, Verilog and SystemVerilog (hardware description languages).[4] These languages have drawn many of their control structures and other basic features from C. Most of them (Python being a dramatic exception) also express highly similar syntax to C, and they tend to combine the recognizable expression and statement syntax of C with underlying type systems, data models, and semantics that can be radically different.
|
16 |
+
|
17 |
+
The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Dennis Ritchie and Ken Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to a PDP-11. The original PDP-11 version of Unix was also developed in assembly language.[9]
|
18 |
+
|
19 |
+
Thompson desired a programming language to make utilities for the new platform. At first, he tried to make a Fortran compiler, but soon gave up the idea. Instead, he created a cut-down version of the recently developed BCPL systems programming language. The official description of BCPL was not available at the time,[10] and Thompson modified the syntax to be less wordy, producing the similar but somewhat simpler B.[9] However, few utilities were ultimately written in B because it was too slow, and B could not take advantage of PDP-11 features such as byte addressability.
|
20 |
+
|
21 |
+
In 1972, Ritchie started to improve B, which resulted in the creation of a new language, C.[11] The C compiler and some utilities made with it were included in Version 2 Unix.[12]
|
22 |
+
|
23 |
+
At Version 4 Unix, released in November 1973, the Unix kernel was extensively re-implemented in C.[9] By this time, the C language had acquired some powerful features such as struct types.
|
24 |
+
|
25 |
+
Unix was one of the first operating system kernels implemented in a language other than assembly. Earlier instances include the Multics system (which was written in PL/I) and Master Control Program (MCP) for the Burroughs B5000 (which was written in ALGOL) in 1961. In around 1977, Ritchie and Stephen C. Johnson made further changes to the language to facilitate portability of the Unix operating system. Johnson's Portable C Compiler served as the basis for several implementations of C on new platforms.[11]
|
26 |
+
|
27 |
+
In 1978, Brian Kernighan and Dennis Ritchie published the first edition of The C Programming Language.[1] This book, known to C programmers as K&R, served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as "K&R C". The second edition of the book[13] covers the later ANSI C standard, described below.
|
28 |
+
|
29 |
+
K&R introduced several language features:
|
30 |
+
|
31 |
+
Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.
|
32 |
+
|
33 |
+
In early versions of C, only functions that returned a type other than int had to be declared if used before the function definition; functions used without prior declaration were presumed to return type int.
|
34 |
+
|
35 |
+
For example:
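A sketch along those lines, using illustrative function names:

    long some_function();
    /* int */ other_function();

    /* int */ calling_function()
    {
        long test1;
        register /* int */ test2;

        test1 = some_function();
        if (test1 > 0)
            test2 = 0;
        else
            test2 = other_function();
        return test2;
    }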
|
36 |
+
|
37 |
+
The int type specifiers which are commented out could be omitted in K&R C, but are required in later standards.
|
38 |
+
|
39 |
+
Since K&R function declarations did not include any information about function arguments, function parameter type checks were not performed, although some compilers would issue a warning message if a local function was called with the wrong number of arguments, or if multiple calls to an external function used different numbers or types of arguments. Separate tools such as Unix's lint utility were developed that (among other things) could check for consistency of function use across multiple source files.
|
40 |
+
|
41 |
+
In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particular PCC[14]) and some other vendors. These included:
|
42 |
+
|
43 |
+
The large number of extensions and the lack of agreement on a standard library, together with the language's popularity and the fact that not even the Unix compilers precisely implemented the K&R specification, led to the necessity of standardization.
|
44 |
+
|
45 |
+
During the late 1970s and 1980s, versions of C were implemented for a wide variety of mainframe computers, minicomputers, and microcomputers, including the IBM PC, as its popularity began to increase significantly.
|
46 |
+
|
47 |
+
In 1983, the American National Standards Institute (ANSI) formed a committee, X3J11, to establish a standard specification of C. X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to the IEEE working group 1003 to become the basis for the 1988 POSIX standard. In 1989, the C standard was ratified as ANSI X3.159-1989 "Programming Language C". This version of the language is often referred to as ANSI C, Standard C, or sometimes C89.
|
48 |
+
|
49 |
+
In 1990, the ANSI C standard (with formatting changes) was adopted by the International Organization for Standardization (ISO) as ISO/IEC 9899:1990, which is sometimes called C90. Therefore, the terms "C89" and "C90" refer to the same programming language.
|
50 |
+
|
51 |
+
ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working group ISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication.
|
52 |
+
|
53 |
+
One of the aims of the C standardization process was to produce a superset of K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such as function prototypes (borrowed from C++), void pointers, support for international character sets and locales, and preprocessor enhancements. Although the syntax for parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code.
|
54 |
+
|
55 |
+
C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on any platform with a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byte endianness.
|
56 |
+
|
57 |
+
In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the __STDC__ macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C.
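A minimal sketch of that technique (the function name is illustrative):

    #ifdef __STDC__
    extern int copy_bytes(char *dst, const char *src, int n);  /* Standard C prototype  */
    #else
    extern int copy_bytes();                                    /* K&R-style declaration */
    #endif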
|
58 |
+
|
59 |
+
After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets.[15]
|
60 |
+
|
61 |
+
The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as "C99". It has since been amended three times by Technical Corrigenda.[16]
|
62 |
+
|
63 |
+
C99 introduced several new features, including inline functions, several new data types (including long long int and a complex type to represent complex numbers), variable-length arrays and flexible array members, improved support for IEEE 754 floating point, support for variadic macros (macros of variable arity), and support for one-line comments beginning with //, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers.
|
64 |
+
|
65 |
+
C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has int implicitly assumed. A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. GCC, Solaris Studio, and other C compilers now support many or all of the new features of C99. The C compiler in Microsoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility with C++11.[17]
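A sketch of checking for C99 support at compile time:

    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
        /* C99 features such as // comments and long long int are available here */
    #endif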
|
66 |
+
|
67 |
+
In 2007, work began on another revision of the C standard, informally called "C1X" until its official publication on 2011-12-08. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations.
|
68 |
+
|
69 |
+
The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro __STDC_VERSION__ is defined as 201112L to indicate that C11 support is available.
|
70 |
+
|
71 |
+
Published in June 2018, C18 is the current standard for the C programming language. It introduces no new language features, only technical corrections, and clarifications to defects in C11. The standard macro __STDC_VERSION__ is defined as 201710L.
|
72 |
+
|
73 |
+
Historically, embedded C programming has required nonstandard extensions to the C language in order to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations.
|
74 |
+
|
75 |
+
In 2008, the C Standards Committee published a technical report extending the C language[18] to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.
|
76 |
+
|
77 |
+
C has a formal grammar specified by the C standard.[19] Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters /* and */, or (since C99) following // until the end of the line. Comments delimited by /* and */ do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear inside string or character literals.[20]
|
78 |
+
|
79 |
+
C source files contain declarations and function definitions. Function definitions, in turn, contain declarations and statements. Declarations either define new types using keywords such as struct, union, and enum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such as char and int specify built-in types. Sections of code are enclosed in braces ({ and }, sometimes called "curly brackets") to limit the scope of declarations and to act as a single statement for control structures.
|
80 |
+
|
81 |
+
As an imperative language, C uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables may be assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by if(-else) conditional execution and by do-while, while, and for iterative execution (looping). The for statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. break and continue can be used to leave the innermost enclosing loop statement or skip to its reinitialization. There is also a non-structured goto statement which branches directly to the designated label within the function. switch selects a case to be executed based on the value of an integer expression.
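A short sketch exercising several of these statements (the values printed are arbitrary):

    #include <stdio.h>

    int main(void)
    {
        for (int i = 0; i < 5; i++) {   /* initialization, test, reinitialization */
            if (i == 3)
                continue;               /* skip to the next iteration             */
            switch (i) {
            case 0:
                printf("zero\n");
                break;                  /* leaves the switch, not the loop        */
            default:
                printf("%d\n", i);
                break;
            }
        }
        return 0;
    }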
|
82 |
+
|
83 |
+
Expressions can use a variety of built-in operators and may contain function calls. The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (&&, ||, ?: and the comma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages.
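A sketch of the kind of care this requires; the order in which the two arguments below are evaluated is unspecified, so either ordering of the output is permitted:

    #include <stdio.h>

    static int next(void)
    {
        static int n = 0;
        return ++n;            /* each call contains its own sequence points */
    }

    int main(void)
    {
        /* May print "1 2" or "2 1": argument evaluation order is unspecified. */
        printf("%d %d\n", next(), next());
        return 0;
    }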
|
84 |
+
|
85 |
+
Kernighan and Ritchie say in the Introduction of The C Programming Language: "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better."[21] The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software.
|
86 |
+
|
87 |
+
The basic C source character set includes the following characters:
|
88 |
+
|
89 |
+
Newline indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as one.
|
90 |
+
|
91 |
+
Additional multi-byte encoded characters may be used in string literals, but they are not entirely portable. The latest C standard (C11) allows multi-national Unicode characters to be embedded portably within C source text by using \uXXXX or \UXXXXXXXX encoding (where the X denotes a hexadecimal character), although this feature is not yet widely implemented.
|
92 |
+
|
93 |
+
The basic C execution character set contains the same characters, along with representations for alert, backspace, and carriage return. Run-time support for extended character sets has increased with each revision of the C standard.
|
94 |
+
|
95 |
+
C89 has 32 reserved words, also known as keywords, which are the words that cannot be used for any purposes other than those for which they are predefined:
|
96 |
+
|
97 |
+
|
98 |
+
|
99 |
+
|
100 |
+
|
101 |
+
|
102 |
+
|
103 |
+
|
104 |
+
|
105 |
+
C99 reserved five more words:
|
106 |
+
|
107 |
+
|
108 |
+
|
109 |
+
|
110 |
+
|
111 |
+
|
112 |
+
|
113 |
+
C11 reserved seven more words:[22]
|
114 |
+
|
115 |
+
|
116 |
+
|
117 |
+
|
118 |
+
|
119 |
+
|
120 |
+
|
121 |
+
|
122 |
+
|
123 |
+
Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. The language previously included a reserved word called entry, but this was seldom implemented, and has now been removed as a reserved word.[23]
|
124 |
+
|
125 |
+
C supports a rich set of operators, which are symbols used within an expression to specify the manipulations to be performed while evaluating that expression. C has operators for:
|
126 |
+
|
127 |
+
C uses the operator = (used in mathematics to express equality) to indicate assignment, following the precedent of Fortran and PL/I, but unlike ALGOL and its derivatives. C uses the operator == to test for equality. The similarity between these two operators (assignment and equality) may result in the accidental use of one in place of the other, and in many cases, the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expression if (a == b + 1) might mistakenly be written as if (a = b + 1), which will be evaluated as true if a is not zero after the assignment.[24]
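A sketch of the pitfall:

    #include <stdio.h>

    int main(void)
    {
        int a = 0, b = 1;

        if (a == b + 1)              /* comparison: false here, a is not 2         */
            printf("equal\n");
        if (a = b + 1)               /* assignment: a becomes 2, which is nonzero, */
            printf("oops\n");        /* so this branch is taken                    */
        return 0;
    }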
|
128 |
+
|
129 |
+
The C operator precedence is not always intuitive. For example, the operator == binds more tightly than (is executed prior to) the operators & (bitwise AND) and | (bitwise OR) in expressions such as x & 1 == 0, which must be written as (x & 1) == 0 if that is the coder's intent.[25]
|
130 |
+
|
131 |
+
The "hello, world" example, which appeared in the first edition of K&R, has become the model for an introductory program in most programming textbooks. The program prints "hello, world" to the standard output, which is usually a terminal or screen display.
|
132 |
+
|
133 |
+
The original version was:[26]
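A sketch of that original form, which relied on pre-standard defaults (no #include, an implicit int return type, and printf used without a declaration):

    main()
    {
        printf("hello, world\n");
    }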
|
134 |
+
|
135 |
+
A standard-conforming "hello, world" program is:[a]
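A sketch of such a program, matching the line-by-line description that follows:

    #include <stdio.h>

    int main(void)
    {
        printf("hello, world\n");
    }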
|
136 |
+
|
137 |
+
The first line of the program contains a preprocessing directive, indicated by #include. This causes the compiler to replace that line with the entire text of the stdio.h standard header, which contains declarations for standard input and output functions such as printf and scanf. The angle brackets surrounding stdio.h indicate that stdio.h is located using a search strategy that prefers headers provided with the compiler to other headers having the same name, as opposed to double quotes which typically include local or project-specific header files.
|
138 |
+
|
139 |
+
The next line indicates that a function named main is being defined. The main function serves a special purpose in C programs; the run-time environment calls the main function to begin program execution. The type specifier int indicates that the value that is returned to the invoker (in this case the run-time environment) as a result of evaluating the main function, is an integer. The keyword void as a parameter list indicates that this function takes no arguments.[b]
|
140 |
+
|
141 |
+
The opening curly brace indicates the beginning of the definition of the main function.
|
142 |
+
|
143 |
+
The next line calls (diverts execution to) a function named printf, which in this case is supplied from a system library. In this call, the printf function is passed (provided with) a single argument, the address of the first character in the string literal "hello, world\n". The string literal is an unnamed array with elements of type char, set up automatically by the compiler with a final 0-valued character to mark the end of the array (printf needs to know this). The \n is an escape sequence that C translates to a newline character, which on output signifies the end of the current line. The return value of the printf function is of type int, but it is silently discarded since it is not used. (A more careful program might test the return value to determine whether or not the printf function succeeded.) The semicolon ; terminates the statement.
|
144 |
+
|
145 |
+
The closing curly brace indicates the end of the code for the main function. According to the C99 specification and newer, the main function, unlike any other function, will implicitly return a value of 0 upon reaching the } that terminates the function. (Formerly an explicit return 0; statement was required.) This is interpreted by the run-time system as an exit code indicating successful execution.[27]
|
146 |
+
|
147 |
+
The type system in C is static and weakly typed, which makes it similar to the type system of ALGOL descendants such as Pascal.[28] There are built-in types for integers of various sizes, both signed and unsigned, floating-point numbers, and enumerated types (enum). Integer type char is often used for single-byte characters. C99 added a boolean datatype. There are also derived types including arrays, pointers, records (struct), and unions (union).
|
148 |
+
|
149 |
+
C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using a type cast to explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way.
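A sketch of both escape hatches (an explicit cast and a union; the union trick assumes float and unsigned int have the same size on the target platform):

    #include <stdio.h>

    int main(void)
    {
        double d = 3.9;
        int truncated = (int)d;           /* explicit cast: value conversion, yields 3 */

        union { float f; unsigned int u; } bits;
        bits.f = 1.0f;                    /* reinterpret the same bytes as an integer  */
        printf("%d %x\n", truncated, bits.u);
        return 0;
    }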
|
150 |
+
|
151 |
+
Some find C's declaration syntax unintuitive, particularly for function pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".)[29]
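A brief illustration of "declaration reflects use" with a function pointer (the identifiers are arbitrary): the declarator (*op)(int, int) mirrors the way the pointer is later applied.

    #include <stdio.h>

    int add(int a, int b) { return a + b; }

    int main(void)
    {
        int (*op)(int, int) = add;     /* op points to a function taking two ints and returning int */
        printf("%d\n", (*op)(2, 3));   /* calling through the pointer; op(2, 3) is equivalent */
        return 0;
    }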
|
152 |
+
|
153 |
+
C's usual arithmetic conversions allow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative.
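A small sketch of the surprise described above; most compilers will warn about the mixed comparison.

    #include <stdio.h>

    int main(void)
    {
        int s = -1;
        unsigned int u = 1;
        /* s is converted to unsigned before the comparison, becoming a very large value */
        if (s > u)
            printf("-1 > 1u is true under the usual arithmetic conversions\n");
        return 0;
    }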
|
154 |
+
|
155 |
+
C supports the use of pointers, a type of reference that records the address or location of an object or function in memory. Pointers can be dereferenced to access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment or pointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type. Pointers are used for many purposes in C. Text strings are commonly manipulated using pointers into arrays of characters. Dynamic memory allocation is performed using pointers. Many data types, such as trees, are commonly implemented as dynamically allocated struct objects linked together using pointers. Pointers to functions are useful for passing functions as arguments to higher-order functions (such as qsort or bsearch) or as callbacks to be invoked by event handlers.[27]
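A minimal sketch of taking an address, dereferencing, and scaled pointer arithmetic:

    #include <stdio.h>

    int main(void)
    {
        int values[3] = {10, 20, 30};
        int *p = &values[0];      /* p holds the address of the first element */

        printf("%d\n", *p);       /* dereference: prints 10 */
        p = p + 2;                /* arithmetic is scaled by the size of int */
        printf("%d\n", *p);       /* prints 30 */
        return 0;
    }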
|
156 |
+
|
157 |
+
A null pointer value explicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in a segmentation fault. Null pointer values are useful for indicating special cases such as no "next" pointer in the final node of a linked list, or as an error indication from functions returning pointers. In appropriate contexts in source code, such as for assigning to a pointer variable, a null pointer constant can be written as 0, with or without explicit casting to a pointer type, or as the NULL macro defined by several standard headers. In conditional contexts, null pointer values evaluate to false, while all other pointer values evaluate to true.
|
158 |
+
|
159 |
+
Void pointers (void *) point to objects of unspecified type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.[27]
|
160 |
+
|
161 |
+
Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects. Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalid pointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictive reference types.
|
162 |
+
|
163 |
+
Array types in C are traditionally of a fixed, static size specified at compile time. (The more recent C99 standard also allows a form of variable-length arrays.) However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library's malloc function, and treat it as an array. C's unification of arrays and pointers means that declared arrays and these dynamically allocated simulated arrays are virtually interchangeable.
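A short sketch of a dynamically allocated block used exactly like a declared array (error handling kept minimal):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 5;
        int *a = malloc(n * sizeof *a);   /* block of n ints obtained at run time */
        if (a == NULL)
            return 1;
        for (size_t i = 0; i < n; i++)
            a[i] = (int)(i * i);          /* same subscript syntax as a declared array */
        printf("a[4] = %d\n", a[4]);
        free(a);
        return 0;
    }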
|
164 |
+
|
165 |
+
Since arrays are always accessed (in effect) via pointers, array accesses are typically not checked against the underlying array size, although some compilers may provide bounds checking as an option.[30][31] Array bounds violations are therefore possible and rather common in carelessly written code, and can lead to various repercussions, including illegal memory accesses, corruption of data, buffer overruns, and run-time exceptions. If bounds checking is desired, it must be done manually.
|
166 |
+
|
167 |
+
C does not have a special provision for declaring multi-dimensional arrays, but rather relies on recursion within the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multi-dimensional array" can be thought of as increasing in row-major order.
|
168 |
+
|
169 |
+
Multi-dimensional arrays are commonly used in numerical algorithms (mainly from applied linear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, since arrays are passed merely as pointers, the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this is to allocate the array with an additional "row vector" of pointers to the columns.)
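A sketch of the row-pointer workaround mentioned above, building a rows-by-cols matrix that can then be indexed as m[i][j] even though its dimensions are known only at run time (function name is illustrative):

    #include <stdlib.h>

    /* Allocate a rows-by-cols matrix of doubles addressable as m[i][j]. */
    double **make_matrix(size_t rows, size_t cols)
    {
        double **m = malloc(rows * sizeof *m);      /* the additional "row vector" of pointers */
        if (m == NULL)
            return NULL;
        for (size_t i = 0; i < rows; i++) {
            m[i] = calloc(cols, sizeof *m[i]);      /* one row of cols doubles, zero-initialized */
            if (m[i] == NULL) {
                while (i > 0)                       /* undo partial allocation on failure */
                    free(m[--i]);
                free(m);
                return NULL;
            }
        }
        return m;
    }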
|
170 |
+
|
171 |
+
C99 introduced "variable-length arrays" which address some, but not all, of the issues with ordinary C arrays.
|
172 |
+
|
173 |
+
The subscript notation x[i] (where x designates a pointer) is syntactic sugar for *(x+i).[32] Taking advantage of the compiler's knowledge of the pointer type, the address that x + i points to is not the base address (pointed to by x) incremented by i bytes, but rather is defined to be the base address incremented by i multiplied by the size of an element that x points to. Thus, x[i] designates the i+1th element of the array.
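For instance, in the small program below the three expressions designate the same element:

    #include <stdio.h>

    int main(void)
    {
        int a[4] = {1, 2, 3, 4};
        int *x = a;
        /* x[2] is defined as *(x + 2): the base address advanced by 2 elements */
        printf("%d %d %d\n", a[2], x[2], *(x + 2));   /* prints 3 3 3 */
        return 0;
    }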
|
174 |
+
|
175 |
+
Furthermore, in most expression contexts (a notable exception is as operand of sizeof), the name of an array is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C use pass-by-value semantics, arrays are in effect passed by reference.
|
176 |
+
|
177 |
+
The size of an element can be determined by applying the operator sizeof to any dereferenced element of x, as in n = sizeof *x or n = sizeof x[0], and the number of elements in a declared array A can be determined as sizeof A / sizeof A[0]. The latter only applies to array names: variables declared with subscripts (int A[20]). Due to the semantics of C, it is not possible to determine the entire size of arrays through pointers to arrays or those created by dynamic allocation (malloc); code such as sizeof arr / sizeof arr[0] (where arr designates a pointer) will not work since the compiler assumes the size of the pointer itself is being requested.[33][34] Since array name arguments to sizeof are not converted to pointers, they do not exhibit such ambiguity. However, arrays created by dynamic allocation are accessed by pointers rather than true array variables, so they suffer from the same sizeof issues as array pointers.
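A compact illustration of the distinction (the pointer result depends on the implementation's pointer and int sizes, but it will not be the element count):

    #include <stdio.h>

    int main(void)
    {
        int A[20];
        int *p = A;

        printf("%zu\n", sizeof A / sizeof A[0]);   /* 20: A is a true array name */
        printf("%zu\n", sizeof p / sizeof p[0]);   /* not 20: size of a pointer divided by size of int */
        return 0;
    }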
|
178 |
+
|
179 |
+
Thus, despite this apparent equivalence between array and pointer variables, there is still a distinction to be made between them. Even though the name of an array is, in most expression contexts, converted into a pointer (to its first element), this pointer does not itself occupy any storage; the array name is not an l-value, and its address is a constant, unlike a pointer variable. Consequently, what an array "points to" cannot be changed, and it is impossible to assign a new address to an array name. Array contents may be copied, however, by using the memcpy function, or by accessing the individual elements.
|
180 |
+
|
181 |
+
One of the most important functions of a programming language is to provide facilities for managing memory and the objects that are stored in memory. C provides three distinct ways to allocate memory for objects:[27]
Static memory allocation: space for the object is provided in the binary at compile time; these objects have an extent (lifetime) as long as the binary which contains them is loaded into memory.
Automatic memory allocation: temporary objects can be stored on the stack, and this space is automatically freed and reusable after the block in which they are declared is exited.
Dynamic memory allocation: blocks of memory of arbitrary size can be requested at run time using library functions such as malloc from a region of memory called the heap; these blocks persist until subsequently freed for reuse with free.
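A minimal sketch showing all three storage strategies in one program (identifiers are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    static int counter;                              /* static allocation: exists for the whole program run */

    int demo(void)
    {
        int local = 7;                               /* automatic allocation: released when the function returns */
        int *heap = malloc(100 * sizeof *heap);      /* dynamic allocation: persists until free() */
        if (heap == NULL)
            return -1;
        heap[0] = local + ++counter;
        local = heap[0];
        free(heap);
        return local;
    }

    int main(void)
    {
        printf("%d\n", demo());
        printf("%d\n", demo());                      /* counter keeps its value between calls */
        return 0;
    }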
|
182 |
+
|
183 |
+
These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.
|
184 |
+
|
185 |
+
Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile-time, there are many situations in which dynamic allocation is necessary.[27] Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article on malloc for an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. (Static allocation that is too large is usually detected by the linker or loader, before the program can even begin execution.)
|
186 |
+
|
187 |
+
Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whatever bit pattern happens to be present in the storage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but both false positives and false negatives can occur.
|
188 |
+
|
189 |
+
Another issue is that heap memory allocation has to be synchronized with its actual usage in any program in order for it to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before free() is called, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as a memory leak. Conversely, it is possible for memory to be freed but continue to be referenced, leading to unpredictable results. Typically, the symptoms will appear in a portion of the program far removed from the actual error, making it difficult to track down the problem. (Such issues are ameliorated in languages with automatic garbage collection.)
|
190 |
+
|
191 |
+
The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has a header file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. In order for a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requires compiler flags (e.g., -lm, shorthand for "link the math library").[27]
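For example, a program that uses the math library includes its header and is linked against the library; the file name below is illustrative, and on a typical Unix-like system it might be built with a command such as cc demo.c -lm.

    /* demo.c */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        printf("%f\n", sqrt(2.0));   /* sqrt is declared in math.h and defined in the math library */
        return 0;
    }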
|
192 |
+
|
193 |
+
The most common C library is the C standard library, which is specified by the ISO and ANSI C standards and comes with every C implementation (implementations which target limited environments such as embedded systems may provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example, stdio.h) specify the interfaces for these and other standard library facilities.
|
194 |
+
|
195 |
+
Another common set of C library functions are those used by applications specifically targeted for Unix and Unix-like systems, especially functions which provide an interface to the kernel. These functions are detailed in various standards such as POSIX and the Single UNIX Specification.
|
196 |
+
|
197 |
+
Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficient object code; programmers then create interfaces to the library so that the routines can be used from higher-level languages like Java, Perl, and Python.[27]
|
198 |
+
|
199 |
+
File input and output (I/O) is not part of the C language itself but instead is handled by libraries (such as the C standard library) and their associated header files (e.g. stdio.h). File handling is generally implemented through high-level I/O which works through streams. A stream is, from this perspective, a data flow that is independent of devices, while a file is a concrete device. The high-level I/O is done through the association of a stream with a file. In the C standard library, a buffer (a memory area or queue) is temporarily used to store data before it is sent to the final destination. This reduces the time spent waiting for slower devices, for example a hard drive or solid-state drive. Low-level I/O functions are not part of the standard C library but are generally part of "bare metal" programming (programming that is independent of any operating system, as in most, though not all, embedded programming). With few exceptions, implementations include low-level I/O.
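A small sketch of stream-based file output using the standard library (the file name is arbitrary):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("example.txt", "w");   /* associate a stream with a file */
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "one line of text\n");      /* output passes through the stream's buffer */
        fclose(f);                             /* flushes the buffer and releases the stream */
        return 0;
    }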
|
200 |
+
|
201 |
+
A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler. The tool lint was the first such, leading to many others.
|
202 |
+
|
203 |
+
Automated source code checking and auditing are beneficial in any language, and for C many such tools exist, such as Lint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler. Also, many compilers can optionally warn about syntactically valid constructs that are likely to actually be errors. MISRA C is a proprietary set of guidelines to avoid such questionable code, developed for embedded systems.[35]
|
204 |
+
|
205 |
+
There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such as bounds checking for arrays, detection of buffer overflow, serialization, dynamic memory tracking, and automatic garbage collection.
|
206 |
+
|
207 |
+
Tools such as Purify or Valgrind and linking with libraries containing special versions of the memory allocation functions can help uncover runtime errors in memory usage.
|
208 |
+
|
209 |
+
C is widely used for systems programming in implementing operating systems and embedded system applications,[37] because C code, when written for portability, can be used for most purposes, yet when needed, system-specific code can be used to access specific hardware addresses and to perform type punning to match externally imposed interface requirements, with a low run-time demand on system resources.
|
210 |
+
|
211 |
+
C can also be used for website programming using CGI as a "gateway" for information between the Web application, the server, and the browser.[38] C is often chosen over interpreted languages because of its speed, stability, and near-universal availability.[39]
|
212 |
+
|
213 |
+
One consequence of C's wide availability and efficiency is that compilers, libraries and interpreters of other programming languages are often implemented in C. The reference implementations of Python, Perl and PHP, for example, are all written in C.
|
214 |
+
|
215 |
+
Because the layer of abstraction is thin and the overhead is low, C enables programmers to create efficient implementations of algorithms and data structures, useful for computationally intense programs. For example, the GNU Multiple Precision Arithmetic Library, the GNU Scientific Library, Mathematica, and MATLAB are completely or partially written in C.
|
216 |
+
|
217 |
+
C is sometimes used as an intermediate language by implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of other C-based languages specifically designed for use as intermediate languages, such as C--.
|
218 |
+
|
219 |
+
C has also been widely used to implement end-user applications. However, such applications can also be written in newer, higher-level languages.
|
220 |
+
|
221 |
+
C has both directly and indirectly influenced many later languages such as C#, D, Go, Java, JavaScript, Limbo, LPC, Perl, PHP, Python, and Unix's C shell.[40] The most pervasive influence has been syntactical, all of the languages mentioned combine the statement and (more or less recognizably) expression syntax of C with type systems, data models and/or large-scale program structures that differ from those of C, sometimes radically.
|
222 |
+
|
223 |
+
Several C or near-C interpreters exist, including Ch and CINT, which can also be used for scripting.
|
224 |
+
|
225 |
+
When object-oriented languages became popular, C++ and Objective-C were two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented as source-to-source compilers; source code was translated into C, and then compiled with a C compiler.[41]
|
226 |
+
|
227 |
+
The C++ programming language was devised by Bjarne Stroustrup as an approach to providing object-oriented functionality with a C-like syntax.[42] C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permits generic programming via templates. Nearly a superset of C, C++ now supports most of C, with a few exceptions.
|
228 |
+
|
229 |
+
Objective-C was originally a very "thin" layer on top of C, and remains a strict superset of C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk.
|
230 |
+
|
231 |
+
In addition to C++ and Objective-C, Ch, Cilk and Unified Parallel C are nearly supersets of C.
|
en/116.html.txt
ADDED
@@ -0,0 +1,105 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
In chemistry, alcohol is an organic compound that carries at least one hydroxyl functional group (−OH) bound to a saturated carbon atom.[2] The term alcohol originally referred to the primary alcohol ethanol (ethyl alcohol), which is used as a drug and is the main alcohol present in alcoholic beverages. An important class of alcohols, of which methanol and ethanol are the simplest members, includes all compounds for which the general formula is CnH2n+1OH. Simple monoalcohols that are the subject of this article include primary (RCH2OH), secondary (R2CHOH) and tertiary (R3COH) alcohols.
|
6 |
+
|
7 |
+
The suffix -ol appears in the IUPAC chemical name of all substances where the hydroxyl group is the functional group with the highest priority. When a higher priority group is present in the compound, the prefix hydroxy- is used in its IUPAC name. The suffix -ol in non-IUPAC names (such as paracetamol or cholesterol) also typically indicates that the substance is an alcohol. However, many substances that contain hydroxyl functional groups (particularly sugars, such as glucose and sucrose) have names which include neither the suffix -ol, nor the prefix hydroxy-.
|
8 |
+
|
9 |
+
Alcohol distillation possibly originated in the Indus valley civilization as early as 2000 BCE. The people of India used an alcoholic drink called Sura made from fermented rice, barley, jaggery, and flowers of the madhyaka tree.[3] Alcohol distillation was known to Islamic chemists as early as the eighth century.[4][5]
|
10 |
+
|
11 |
+
The Arab chemist al-Kindi unambiguously described the distillation of wine in a treatise titled "The Book of the Chemistry of Perfume and Distillations".[6][7][8]
|
12 |
+
|
13 |
+
The word "alcohol" is from the Arabic kohl (Arabic: الكحل, romanized: al-kuḥl), a powder used as an eyeliner.[9] Al- is the Arabic definite article, equivalent to the in English. Alcohol was originally used for the very fine powder produced by the sublimation of the natural mineral stibnite to form antimony trisulfide Sb2S3. It was considered to be the essence or "spirit" of this mineral. It was used as an antiseptic, eyeliner, and cosmetic. The meaning of alcohol was extended to distilled substances in general, and then narrowed to ethanol, when "spirits" was a synonym for hard liquor.[10]
|
14 |
+
|
15 |
+
Bartholomew Traheron, in his 1543 translation of John of Vigo, introduces the word as a term used by "barbarous" authors for "fine powder." Vigo wrote: "the barbarous auctours use alcohol, or (as I fynde it sometymes wryten) alcofoll, for moost fine poudre."[11]
|
16 |
+
|
17 |
+
The 1657 Lexicon Chymicum, by William Johnson glosses the word as "antimonium sive stibium."[12] By extension, the word came to refer to any fluid obtained by distillation, including "alcohol of wine," the distilled essence of wine. Libavius in Alchymia (1594) refers to "vini alcohol vel vinum alcalisatum". Johnson (1657) glosses alcohol vini as "quando omnis superfluitas vini a vino separatur, ita ut accensum ardeat donec totum consumatur, nihilque fæcum aut phlegmatis in fundo remaneat." The word's meaning became restricted to "spirit of wine" (the chemical known today as ethanol) in the 18th century and was extended to the class of substances so-called as "alcohols" in modern chemistry after 1850.[11]
|
18 |
+
|
19 |
+
The term ethanol was invented in 1892, combining the word ethane with the "-ol" ending of "alcohol".[13]
|
20 |
+
|
21 |
+
IUPAC nomenclature is used in scientific publications and where precise identification of the substance is important, especially in cases where the relative complexity of the molecule does not make such a systematic name unwieldy. In naming simple alcohols, the name of the alkane chain loses the terminal e and adds the suffix -ol, e.g., as in "ethanol" from the alkane chain name "ethane".[14] When necessary, the position of the hydroxyl group is indicated by a number between the alkane name and the -ol: propan-1-ol for CH3CH2CH2OH, propan-2-ol for CH3CH(OH)CH3. If a higher priority group is present (such as an aldehyde, ketone, or carboxylic acid), then the prefix hydroxy- is used,[14] e.g., as in 1-hydroxy-2-propanone (CH3C(O)CH2OH).[15]
|
22 |
+
|
23 |
+
In cases where the OH functional group is bonded to an sp2 carbon on an aromatic ring the molecule is known as a phenol, and is named using the IUPAC rules for naming phenols.[16]
|
24 |
+
|
25 |
+
In other less formal contexts, an alcohol is often referred to by the name of the corresponding alkyl group followed by the word "alcohol", e.g., methyl alcohol, ethyl alcohol. Propyl alcohol may be n-propyl alcohol or isopropyl alcohol, depending on whether the hydroxyl group is bonded to the end or middle carbon on the straight propane chain. As described under systematic naming, if another group on the molecule takes priority, the alcohol moiety is often indicated using the "hydroxy-" prefix.[17]
|
26 |
+
|
27 |
+
Alcohols are then classified into primary, secondary (sec-, s-), and tertiary (tert-, t-), based upon the number of carbon atoms connected to the carbon atom that bears the hydroxyl functional group. (The respective numeric shorthands 1°, 2°, and 3° are also sometimes used in informal settings.[18]) The primary alcohols have general formulas RCH2OH. The simplest primary alcohol is methanol (CH3OH), for which R=H, and the next is ethanol, for which R=CH3, the methyl group. Secondary alcohols are those of the form RR'CHOH, the simplest of which is 2-propanol (R=R'=CH3). For the tertiary alcohols the general form is RR'R"COH. The simplest example is tert-butanol (2-methylpropan-2-ol), for which each of R, R', and R" is CH3. In these shorthands, R, R', and R" represent substituents, alkyl or other attached, generally organic groups.
|
28 |
+
|
29 |
+
In archaic nomenclature, alcohols can be named as derivatives of methanol using "-carbinol" as the ending. For instance, (CH3)3COH can be named trimethylcarbinol.
|
30 |
+
|
31 |
+
Alcohols have a long history of myriad uses. For the simple mono-alcohols that are the focus of this article, the following are the most important industrial alcohols:[20]
|
32 |
+
|
33 |
+
Methanol is the most common industrial alcohol, with about 12 million tons/y produced in 1980. The combined capacity of the other alcohols is about the same, distributed roughly equally.[20]
|
34 |
+
|
35 |
+
Simple alcohols have low acute toxicity, and doses of several milliliters are tolerated. For pentanols, hexanols, octanols and longer alcohols, LD50 values range from 2 to 5 g/kg (rats, oral). Methanol and ethanol are less acutely toxic. All alcohols are mild skin irritants.[20]
|
36 |
+
|
37 |
+
The metabolism of methanol (and ethylene glycol) is affected by the presence of ethanol, which has a higher affinity for liver alcohol dehydrogenase. In this way methanol will be excreted intact in urine.[21][22][23]
|
38 |
+
|
39 |
+
In general, the hydroxyl group makes alcohols polar. Those groups can form hydrogen bonds to one another and to most other compounds. Owing to the presence of the polar OH group, alcohols are more water-soluble than simple hydrocarbons. Methanol, ethanol, and propanol are miscible in water. Butanol, with a four-carbon chain, is moderately soluble.
|
40 |
+
|
41 |
+
Because of hydrogen bonding, alcohols tend to have higher boiling points than comparable hydrocarbons and ethers. The boiling point of the alcohol ethanol is 78.29 °C, compared to 69 °C for the hydrocarbon hexane, and 34.6 °C for diethyl ether.
|
42 |
+
|
43 |
+
Simple alcohols are found widely in nature. Ethanol is most prominent because it is the product of fermentation, a major energy-producing pathway. The other simple alcohols are formed in only trace amounts. More complex alcohols are pervasive, as manifested in sugars, some amino acids, and fatty acids.
|
44 |
+
|
45 |
+
In the Ziegler process, linear alcohols are produced from ethylene and triethylaluminium followed by oxidation and hydrolysis.[20] An idealized synthesis of 1-octanol is shown:
|
46 |
+
|
47 |
+
The process generates a range of alcohols that are separated by distillation.
|
48 |
+
|
49 |
+
Many higher alcohols are produced by hydroformylation of alkenes followed by hydrogenation. When applied to a terminal alkene, as is common, one typically obtains a linear alcohol:[20]
|
50 |
+
|
51 |
+
Such processes give fatty alcohols, which are useful for detergents.
|
52 |
+
|
53 |
+
Some low molecular weight alcohols of industrial importance are produced by the addition of water to alkenes. Ethanol, isopropanol, 2-butanol, and tert-butanol are produced by this general method. Two implementations are employed, the direct and indirect methods. The direct method avoids the formation of stable intermediates, typically using acid catalysts. In the indirect method, the alkene is converted to the sulfate ester, which is subsequently hydrolyzed. Direct hydration uses ethylene (ethylene hydration)[24] or other alkenes obtained from the cracking of fractions of distilled crude oil.
|
54 |
+
|
55 |
+
Hydration is also used industrially to produce the diol ethylene glycol from ethylene oxide.
|
56 |
+
|
57 |
+
Ethanol is obtained by fermentation, using glucose produced from sugar via the hydrolysis of starch, in the presence of yeast at a temperature of less than 37 °C. For instance, such a process might proceed by the conversion of sucrose by the enzyme invertase into glucose and fructose, then the conversion of glucose by the enzyme complex zymase into ethanol (and carbon dioxide).
|
58 |
+
|
59 |
+
Several species of the benign bacteria in the intestine use fermentation as a form of anaerobic metabolism. This metabolic reaction produces ethanol as a waste product. Thus, human bodies contain some quantity of alcohol endogenously produced by these bacteria. In rare cases, this can be sufficient to cause "auto-brewery syndrome" in which intoxicating quantities of alcohol are produced.[25][26][27]
|
60 |
+
|
61 |
+
Like ethanol, butanol can be produced by fermentation processes. Saccharomyces yeast are known to produce these higher alcohols at temperatures above 75 °F (24 °C). The bacterium Clostridium acetobutylicum can feed on cellulose to produce butanol on an industrial scale.[28]
|
62 |
+
|
63 |
+
Primary alkyl halides react with aqueous NaOH or KOH to give mainly primary alcohols in nucleophilic aliphatic substitution. (Secondary and especially tertiary alkyl halides will give the elimination (alkene) product instead.) Grignard reagents react with carbonyl groups to give secondary and tertiary alcohols. Related reactions are the Barbier reaction and the Nozaki-Hiyama reaction.
|
64 |
+
|
65 |
+
Aldehydes or ketones are reduced with sodium borohydride or lithium aluminium hydride (after an acidic workup). Another reduction, using aluminium isopropylates, is the Meerwein-Ponndorf-Verley reduction. Noyori asymmetric hydrogenation is the asymmetric reduction of β-keto-esters.
|
66 |
+
|
67 |
+
Alkenes engage in an acid-catalysed hydration reaction, using concentrated sulfuric acid as a catalyst, that usually gives secondary or tertiary alcohols. The hydroboration-oxidation and oxymercuration-reduction of alkenes are more reliable in organic synthesis. Alkenes react with NBS and water in the halohydrin formation reaction. Amines can be converted to diazonium salts, which are then hydrolyzed.
|
68 |
+
|
69 |
+
The formation of a secondary alcohol via reduction and hydration is shown:
|
70 |
+
|
71 |
+
With a pKa of around 16–19, alcohols are, in general, slightly weaker acids than water. With strong bases such as sodium hydride or sodium, they form salts called alkoxides, with the general formula RO− M+.
|
72 |
+
|
73 |
+
The acidity of alcohols is strongly affected by solvation. In the gas phase, alcohols are more acidic than in water.[29]
|
74 |
+
|
75 |
+
The OH group is not a good leaving group in nucleophilic substitution reactions, so neutral alcohols do not react in such reactions. However, if the oxygen is first protonated to give R−OH2+, the leaving group (water) is much more stable, and the nucleophilic substitution can take place. For instance, tertiary alcohols react with hydrochloric acid to produce tertiary alkyl halides, where the hydroxyl group is replaced by a chlorine atom by unimolecular nucleophilic substitution. If primary or secondary alcohols are to be reacted with hydrochloric acid, an activator such as zinc chloride is needed. In alternative fashion, the conversion may be performed directly using thionyl chloride.[1]
|
76 |
+
|
77 |
+
|
78 |
+
|
79 |
+
Alcohols may, likewise, be converted to alkyl bromides using hydrobromic acid or phosphorus tribromide, for example:
|
80 |
+
|
81 |
+
In the Barton-McCombie deoxygenation an alcohol is deoxygenated to an alkane with tributyltin hydride or a trimethylborane-water complex in a radical substitution reaction.
|
82 |
+
|
83 |
+
Meanwhile, the oxygen atom has lone pairs of nonbonded electrons that render it weakly basic in the presence of strong acids such as sulfuric acid. For example, with methanol:
|
84 |
+
|
85 |
+
|
86 |
+
|
87 |
+
Upon treatment with strong acids, alcohols undergo the E1 elimination reaction to produce alkenes. The reaction, in general, obeys Zaitsev's Rule, which states that the most stable (usually the most substituted) alkene is formed. Tertiary alcohols eliminate easily at just above room temperature, but primary alcohols require a higher temperature.
|
88 |
+
|
89 |
+
This is a diagram of acid catalysed dehydration of ethanol to produce ethylene:
|
90 |
+
|
91 |
+
|
92 |
+
|
93 |
+
A more controlled elimination reaction requires the formation of the xanthate ester.
|
94 |
+
|
95 |
+
Tertiary alcohols react with strong acids to generate carbocations. The reaction is related to their dehydration, e.g. isobutylene from tert-butyl alcohol. A special kind of dehydration reaction involves triphenylmethanol and especially its amine-substituted derivatives. When treated with acid, these alcohols lose water to give stable carbocations, which are commercial dyes.[30]
|
96 |
+
|
97 |
+
Alcohol and carboxylic acids react in the so-called Fischer esterification. The reaction usually requires a catalyst, such as concentrated sulfuric acid:
|
98 |
+
|
99 |
+
Other types of ester are prepared in a similar manner – for example, tosyl (tosylate) esters are made by reaction of the alcohol with p-toluenesulfonyl chloride in pyridine.
|
100 |
+
|
101 |
+
Primary alcohols (R-CH2OH) can be oxidized either to aldehydes (R-CHO) or to carboxylic acids (R-CO2H). The oxidation of secondary alcohols (R1R2CH-OH) normally terminates at the ketone (R1R2C=O) stage. Tertiary alcohols (R1R2R3C-OH) are resistant to oxidation.
|
102 |
+
|
103 |
+
The direct oxidation of primary alcohols to carboxylic acids normally proceeds via the corresponding aldehyde, which is transformed via an aldehyde hydrate (R-CH(OH)2) by reaction with water before it can be further oxidized to the carboxylic acid.
|
104 |
+
|
105 |
+
Reagents useful for the transformation of primary alcohols to aldehydes are normally also suitable for the oxidation of secondary alcohols to ketones. These include Collins reagent and Dess-Martin periodinane. The direct oxidation of primary alcohols to carboxylic acids can be carried out using potassium permanganate or the Jones reagent.
|
en/1160.html.txt
ADDED
@@ -0,0 +1,231 @@
1 |
+
|
2 |
+
|
3 |
+
C (/siː/, as in the letter c) is a general-purpose, procedural computer programming language supporting structured programming, lexical variable scope, and recursion, with a static type system. By design, C provides constructs that map efficiently to typical machine instructions. It has found lasting use in applications previously coded in assembly language. Such applications include operating systems and various application software for computer architectures that range from supercomputers to PLCs and embedded systems.
|
4 |
+
|
5 |
+
A successor to the programming language B, C was originally developed at Bell Labs by Dennis Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system.[5] During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages,[6][7] with C compilers from various vendors available for the majority of existing computer architectures and operating systems. C has been standardized by the ANSI since 1989 (ANSI C) and by the International Organization for Standardization (ISO).
|
6 |
+
|
7 |
+
C is an imperative procedural language. It was designed to be compiled to provide low-level access to memory and language constructs that map efficiently to machine instructions, all with minimal runtime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant C program written with portability in mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code.
|
8 |
+
|
9 |
+
Like most procedural languages in the ALGOL tradition, C has facilities for structured programming and allows lexical variable scope and recursion. Its static type system prevents unintended operations. In C, all executable code is contained within subroutines (also called "functions", though not strictly in the sense of functional programming). Function parameters are always passed by value. Pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements.
|
10 |
+
|
11 |
+
The C language also exhibits the following characteristics:
|
12 |
+
|
13 |
+
While C does not include certain features found in other languages (such as object orientation and garbage collection), these can be implemented or emulated, often through the use of external libraries (e.g., the GLib Object System or the Boehm garbage collector).
|
14 |
+
|
15 |
+
Many later languages have borrowed directly or indirectly from C, including C++, C#, Unix's C shell, D, Go, Java, JavaScript (including transpilers), Limbo, LPC, Objective-C, Perl, PHP, Python, Rust, Swift, Verilog and SystemVerilog (hardware description languages).[4] These languages have drawn many of their control structures and other basic features from C. Most of them (Python being a dramatic exception) also express highly similar syntax to C, and they tend to combine the recognizable expression and statement syntax of C with underlying type systems, data models, and semantics that can be radically different.
|
16 |
+
|
17 |
+
The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Dennis Ritchie and Ken Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to a PDP-11. The original PDP-11 version of Unix was also developed in assembly language.[9]
|
18 |
+
|
19 |
+
Thompson desired a programming language to make utilities for the new platform. At first, he tried to make a Fortran compiler, but soon gave up the idea. Instead, he created a cut-down version of the recently developed BCPL systems programming language. The official description of BCPL was not available at the time,[10] and Thompson modified the syntax to be less wordy, producing the similar but somewhat simpler B.[9] However, few utilities were ultimately written in B because it was too slow, and B could not take advantage of PDP-11 features such as byte addressability.
|
20 |
+
|
21 |
+
In 1972, Ritchie started to improve B, which resulted in the creation of a new language, C.[11] The C compiler and some utilities made with it were included in Version 2 Unix.[12]
|
22 |
+
|
23 |
+
At Version 4 Unix, released in November 1973, the Unix kernel was extensively re-implemented in C.[9] By this time, the C language had acquired some powerful features such as struct types.
|
24 |
+
|
25 |
+
Unix was one of the first operating system kernels implemented in a language other than assembly. Earlier instances include the Multics system (which was written in PL/I) and Master Control Program (MCP) for the Burroughs B5000 (which was written in ALGOL) in 1961. In around 1977, Ritchie and Stephen C. Johnson made further changes to the language to facilitate portability of the Unix operating system. Johnson's Portable C Compiler served as the basis for several implementations of C on new platforms.[11]
|
26 |
+
|
27 |
+
In 1978, Brian Kernighan and Dennis Ritchie published the first edition of The C Programming Language.[1] This book, known to C programmers as K&R, served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as "K&R C". The second edition of the book[13] covers the later ANSI C standard, described below.
|
28 |
+
|
29 |
+
K&R introduced several language features:
|
30 |
+
|
31 |
+
Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.
|
32 |
+
|
33 |
+
In early versions of C, only functions that return types other than int had to be declared if used before the function definition; functions used without prior declaration were presumed to return type int.
|
34 |
+
|
35 |
+
For example:
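A representative pre-standard fragment of the kind intended here (the function names are illustrative); the int type specifiers shown in comments are the ones referred to in the next sentence:

    long some_function();
    /* int */ other_function();

    /* int */ calling_function()
    {
        long test1;
        register /* int */ test2;

        test1 = some_function();
        if (test1 > 1)
            test2 = 0;
        else
            test2 = other_function();
        return test2;
    }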
|
36 |
+
|
37 |
+
The int type specifiers which are commented out could be omitted in K&R C, but are required in later standards.
|
38 |
+
|
39 |
+
Since K&R function declarations did not include any information about function arguments, function parameter type checks were not performed, although some compilers would issue a warning message if a local function was called with the wrong number of arguments, or if multiple calls to an external function used different numbers or types of arguments. Separate tools such as Unix's lint utility were developed that (among other things) could check for consistency of function use across multiple source files.
|
40 |
+
|
41 |
+
In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particular PCC[14]) and some other vendors. These included:
|
42 |
+
|
43 |
+
The large number of extensions and the lack of agreement on a standard library, together with the language's popularity and the fact that not even the Unix compilers precisely implemented the K&R specification, led to the necessity of standardization.
|
44 |
+
|
45 |
+
During the late 1970s and 1980s, versions of C were implemented for a wide variety of mainframe computers, minicomputers, and microcomputers, including the IBM PC, as its popularity began to increase significantly.
|
46 |
+
|
47 |
+
In 1983, the American National Standards Institute (ANSI) formed a committee, X3J11, to establish a standard specification of C. X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to the IEEE working group 1003 to become the basis for the 1988 POSIX standard. In 1989, the C standard was ratified as ANSI X3.159-1989 "Programming Language C". This version of the language is often referred to as ANSI C, Standard C, or sometimes C89.
|
48 |
+
|
49 |
+
In 1990, the ANSI C standard (with formatting changes) was adopted by the International Organization for Standardization (ISO) as ISO/IEC 9899:1990, which is sometimes called C90. Therefore, the terms "C89" and "C90" refer to the same programming language.
|
50 |
+
|
51 |
+
ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working group ISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication.
|
52 |
+
|
53 |
+
One of the aims of the C standardization process was to produce a superset of K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such as function prototypes (borrowed from C++), void pointers, support for international character sets and locales, and preprocessor enhancements. Although the syntax for parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code.
|
54 |
+
|
55 |
+
C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on any platform with a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byte endianness.
|
56 |
+
|
57 |
+
In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the __STDC__ macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C.
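A minimal sketch of that technique, assuming a hypothetical function scale; the prototype form is used only when __STDC__ indicates a Standard C compiler, and the old-style form otherwise:

    #ifdef __STDC__
    extern int scale(int value, int factor);
    #else
    extern int scale();
    #endif

    #ifdef __STDC__
    int scale(int value, int factor)
    #else
    int scale(value, factor)
    int value, factor;
    #endif
    {
        return value * factor;
    }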
|
58 |
+
|
59 |
+
After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets.[15]
|
60 |
+
|
61 |
+
The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as "C99". It has since been amended three times by Technical Corrigenda.[16]
|
62 |
+
|
63 |
+
C99 introduced several new features, including inline functions, several new data types (including long long int and a complex type to represent complex numbers), variable-length arrays and flexible array members, improved support for IEEE 754 floating point, support for variadic macros (macros of variable arity), and support for one-line comments beginning with //, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers.
|
64 |
+
|
65 |
+
C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer has int implicitly assumed. A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. GCC, Solaris Studio, and other C compilers now support many or all of the new features of C99. The C compiler in Microsoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility with C++11.[17]
|
66 |
+
|
67 |
+
In 2007, work began on another revision of the C standard, informally called "C1X" until its official publication on 2011-12-08. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations.
|
68 |
+
|
69 |
+
The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro __STDC_VERSION__ is defined as 201112L to indicate that C11 support is available.
|
70 |
+
|
71 |
+
Published in June 2018, C18 is the current standard for the C programming language. It introduces no new language features, only technical corrections, and clarifications to defects in C11. The standard macro __STDC_VERSION__ is defined as 201710L.
|
72 |
+
|
73 |
+
Historically, embedded C programming has required nonstandard extensions to the C language in order to support exotic features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations.
|
74 |
+
|
75 |
+
In 2008, the C Standards Committee published a technical report extending the C language[18] to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing.
|
76 |
+
|
77 |
+
C has a formal grammar specified by the C standard.[19] Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters /* and */, or (since C99) following // until the end of the line. Comments delimited by /* and */ do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear inside string or character literals.[20]
|
78 |
+
|
79 |
+
C source files contain declarations and function definitions. Function definitions, in turn, contain declarations and statements. Declarations either define new types using keywords such as struct, union, and enum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such as char and int specify built-in types. Sections of code are enclosed in braces ({ and }, sometimes called "curly brackets") to limit the scope of declarations and to act as a single statement for control structures.
|
80 |
+
|
81 |
+
As an imperative language, C uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables may be assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by if(-else) conditional execution and by do-while, while, and for iterative execution (looping). The for statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. break and continue can be used to leave the innermost enclosing loop statement or skip to its reinitialization. There is also a non-structured goto statement which branches directly to the designated label within the function. switch selects a case to be executed based on the value of an integer expression.
|
82 |
+
|
83 |
+
Expressions can use a variety of built-in operators and may contain function calls. The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (&&, ||, ?: and the comma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages.
|
84 |
+
|
85 |
+
Kernighan and Ritchie say in the Introduction of The C Programming Language: "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better."[21] The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software.
|
86 |
+
|
87 |
+
The basic C source character set includes the following characters:
|
88 |
+
|
89 |
+
Newline indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as one.
|
90 |
+
|
91 |
+
Additional multi-byte encoded characters may be used in string literals, but they are not entirely portable. The latest C standard (C11) allows multi-national Unicode characters to be embedded portably within C source text by using \uXXXX or \UXXXXXXXX encoding (where the X denotes a hexadecimal character), although this feature is not yet widely implemented.
|
92 |
+
|
93 |
+
The basic C execution character set contains the same characters, along with representations for alert, backspace, and carriage return. Run-time support for extended character sets has increased with each revision of the C standard.
|
94 |
+
|
95 |
+
C89 has 32 reserved words, also known as keywords, which are the words that cannot be used for any purposes other than those for which they are predefined:
|
96 |
+
|
97 |
+
|
98 |
+
|
99 |
+
|
100 |
+
|
101 |
+
|
102 |
+
|
103 |
+
|
104 |
+
|
105 |
+
C99 reserved five more words:
|
106 |
+
|
107 |
+
|
108 |
+
|
109 |
+
|
110 |
+
|
111 |
+
|
112 |
+
|
113 |
+
C11 reserved seven more words:[22]
|
114 |
+
|
115 |
+
|
116 |
+
|
117 |
+
|
118 |
+
|
119 |
+
|
120 |
+
|
121 |
+
|
122 |
+
|
123 |
+
Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. The language previously included a reserved word called entry, but this was seldom implemented, and has now been removed as a reserved word.[23]
|
124 |
+
|
125 |
+
C supports a rich set of operators, which are symbols used within an expression to specify the manipulations to be performed while evaluating that expression. C has operators for:
|
126 |
+
|
127 |
+
C uses the operator = (used in mathematics to express equality) to indicate assignment, following the precedent of Fortran and PL/I, but unlike ALGOL and its derivatives. C uses the operator == to test for equality. The similarity between these two operators (assignment and equality) may result in the accidental use of one in place of the other, and in many cases, the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expression if (a == b + 1) might mistakenly be written as if (a = b + 1), which will be evaluated as true if a is not zero after the assignment.[24]
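A short illustration of the pitfall; many compilers warn about the second form, suggesting extra parentheses if the assignment is intentional.

    #include <stdio.h>

    int main(void)
    {
        int a = 0, b = 5;

        if (a == b + 1)                 /* comparison: false here, nothing printed */
            printf("equal\n");

        if (a = b + 1)                  /* assignment: a becomes 6, which is nonzero, so this branch runs */
            printf("accidentally true, a is now %d\n", a);
        return 0;
    }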
|
128 |
+
|
129 |
+
The C operator precedence is not always intuitive. For example, the operator == binds more tightly than (is executed prior to) the operators & (bitwise AND) and | (bitwise OR) in expressions such as x & 1 == 0, which must be written as (x & 1) == 0 if that is the coder's intent.[25]
|
130 |
+
|
131 |
+
The "hello, world" example, which appeared in the first edition of K&R, has become the model for an introductory program in most programming textbooks. The program prints "hello, world" to the standard output, which is usually a terminal or screen display.
|
132 |
+
|
133 |
+
The original version was:[26]
|
134 |
+
|
135 |
+
A standard-conforming "hello, world" program is:[a]
|
136 |
+
|
137 |
+
The first line of the program contains a preprocessing directive, indicated by #include. This causes the compiler to replace that line with the entire text of the stdio.h standard header, which contains declarations for standard input and output functions such as printf and scanf. The angle brackets surrounding stdio.h indicate that stdio.h is located using a search strategy that prefers headers provided with the compiler to other headers having the same name, as opposed to double quotes which typically include local or project-specific header files.
|
138 |
+
|
139 |
+
The next line indicates that a function named main is being defined. The main function serves a special purpose in C programs; the run-time environment calls the main function to begin program execution. The type specifier int indicates that the value that is returned to the invoker (in this case the run-time environment) as a result of evaluating the main function, is an integer. The keyword void as a parameter list indicates that this function takes no arguments.[b]
|
140 |
+
|
141 |
+
The opening curly brace indicates the beginning of the definition of the main function.
|
142 |
+
|
143 |
+
The next line calls (diverts execution to) a function named printf, which in this case is supplied from a system library. In this call, the printf function is passed (provided with) a single argument, the address of the first character in the string literal "hello, world\n". The string literal is an unnamed array with elements of type char, set up automatically by the compiler with a final 0-valued character to mark the end of the array (printf needs to know this). The \n is an escape sequence that C translates to a newline character, which on output signifies the end of the current line. The return value of the printf function is of type int, but it is silently discarded since it is not used. (A more careful program might test the return value to determine whether or not the printf function succeeded.) The semicolon ; terminates the statement.
|
144 |
+
|
145 |
+
The closing curly brace indicates the end of the code for the main function. According to the C99 specification and newer, the main function, unlike any other function, will implicitly return a value of 0 upon reaching the } that terminates the function. (Formerly an explicit return 0; statement was required.) This is interpreted by the run-time system as an exit code indicating successful execution.[27]
|
146 |
+
|
147 |
+
The type system in C is static and weakly typed, which makes it similar to the type system of ALGOL descendants such as Pascal.[28] There are built-in types for integers of various sizes, both signed and unsigned, floating-point numbers, and enumerated types (enum). Integer type char is often used for single-byte characters. C99 added a boolean datatype. There are also derived types including arrays, pointers, records (struct), and unions (union).
|
148 |
+
|
149 |
+
C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using a type cast to explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way.
|
150 |
+
|
151 |
+
Some find C's declaration syntax unintuitive, particularly for function pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".)[29]
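A brief sketch of the "declaration reflects use" idea (identifiers are illustrative):

    int *p;              /* *p is an int, so p is a pointer to int              */
    int a[10];           /* a[i] is an int, so a is an array of int             */
    int (*fp)(void);     /* (*fp)() yields an int, so fp points to a function   */
    int *(*g[5])(void);  /* g is an array of 5 pointers to functions returning
                            pointer to int: the kind of declaration many
                            readers find hard to parse                           */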
|
152 |
+
|
153 |
+
C's usual arithmetic conversions allow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative.
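A small illustrative example of this conversion surprise:

    #include <stdio.h>

    int main(void)
    {
        int      s = -1;
        unsigned u = 1;

        /* s is converted to unsigned: (unsigned)-1 is UINT_MAX, which is greater than 1 */
        if (s < u)
            printf("not printed\n");
        else
            printf("-1 < 1u compares as false here\n");

        return 0;
    }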
|
154 |
+
|
155 |
+
C supports the use of pointers, a type of reference that records the address or location of an object or function in memory. Pointers can be dereferenced to access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment or pointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type. Pointers are used for many purposes in C. Text strings are commonly manipulated using pointers into arrays of characters. Dynamic memory allocation is performed using pointers. Many data types, such as trees, are commonly implemented as dynamically allocated struct objects linked together using pointers. Pointers to functions are useful for passing functions as arguments to higher-order functions (such as qsort or bsearch) or as callbacks to be invoked by event handlers.[27]
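A compact, illustrative sketch of several of these uses:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int n = 41;
        int *p = &n;         /* p records the address of n               */
        *p += 1;             /* dereference: modifies n through p        */

        int a[3] = {10, 20, 30};
        int *q = a;          /* the array name converts to &a[0]         */
        printf("%d %d %d\n", n, *q, *(q + 2));   /* pointer arithmetic: prints 42 10 30 */

        int *d = malloc(3 * sizeof *d);          /* dynamic allocation via a pointer */
        if (d != NULL) {
            d[0] = 7;
            free(d);
        }
        return 0;
    }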
|
156 |
+
|
157 |
+
A null pointer value explicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in a segmentation fault. Null pointer values are useful for indicating special cases such as no "next" pointer in the final node of a linked list, or as an error indication from functions returning pointers. In appropriate contexts in source code, such as for assigning to a pointer variable, a null pointer constant can be written as 0, with or without explicit casting to a pointer type, or as the NULL macro defined by several standard headers. In conditional contexts, null pointer values evaluate to false, while all other pointer values evaluate to true.
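A short sketch of a null pointer marking the end of a linked list (the structure and names are illustrative):

    #include <stdio.h>

    struct node {
        int value;
        struct node *next;   /* NULL in the final node */
    };

    int main(void)
    {
        struct node third  = { 3, NULL };
        struct node second = { 2, &third };
        struct node first  = { 1, &second };

        for (struct node *p = &first; p != NULL; p = p->next)
            printf("%d\n", p->value);

        return 0;
    }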
|
158 |
+
|
159 |
+
Void pointers (void *) point to objects of unspecified type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.[27]
|
160 |
+
|
161 |
+
Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects. Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalid pointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictive reference types.
|
162 |
+
|
163 |
+
Array types in C are traditionally of a fixed, static size specified at compile time. (The more recent C99 standard also allows a form of variable-length arrays.) However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library's malloc function, and treat it as an array. C's unification of arrays and pointers means that declared arrays and these dynamically allocated simulated arrays are virtually interchangeable.
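A minimal sketch of this interchangeability (illustrative only):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int fixed[4] = {0};                          /* size fixed at compile time */
        size_t n = 4;
        int *dynamic = malloc(n * sizeof *dynamic);  /* size chosen at run time    */
        if (dynamic == NULL)
            return 1;

        for (size_t i = 0; i < n; i++) {
            fixed[i]   = (int)i;                     /* both are indexed the same way */
            dynamic[i] = (int)i;
        }
        printf("%d %d\n", fixed[3], dynamic[3]);
        free(dynamic);
        return 0;
    }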
|
164 |
+
|
165 |
+
Since arrays are always accessed (in effect) via pointers, array accesses are typically not checked against the underlying array size, although some compilers may provide bounds checking as an option.[30][31] Array bounds violations are therefore possible and rather common in carelessly written code, and can lead to various repercussions, including illegal memory accesses, corruption of data, buffer overruns, and run-time exceptions. If bounds checking is desired, it must be done manually.
|
166 |
+
|
167 |
+
C does not have a special provision for declaring multi-dimensional arrays, but rather relies on recursion within the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multi-dimensional array" can be thought of as increasing in row-major order.
|
168 |
+
|
169 |
+
Multi-dimensional arrays are commonly used in numerical algorithms (mainly from applied linear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, since arrays are passed merely as pointers, the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this is to allocate the array with an additional "row vector" of pointers to the columns.)
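One possible sketch of that workaround, allocating a matrix whose dimensions are known only at run time (the function name and error handling are illustrative):

    #include <stdlib.h>

    /* Allocate a rows x cols matrix as an array of row pointers, so that
       m[i][j] works even though the dimensions are not compile-time constants.
       Returns NULL on failure. */
    double **make_matrix(size_t rows, size_t cols)
    {
        double **m = malloc(rows * sizeof *m);
        if (m == NULL)
            return NULL;
        for (size_t i = 0; i < rows; i++) {
            m[i] = calloc(cols, sizeof *m[i]);
            if (m[i] == NULL) {
                for (size_t j = 0; j < i; j++)
                    free(m[j]);
                free(m);
                return NULL;
            }
        }
        return m;
    }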
|
170 |
+
|
171 |
+
C99 introduced "variable-length arrays" which address some, but not all, of the issues with ordinary C arrays.
|
172 |
+
|
173 |
+
The subscript notation x[i] (where x designates a pointer) is syntactic sugar for *(x+i).[32] Taking advantage of the compiler's knowledge of the pointer type, the address that x + i points to is not the base address (pointed to by x) incremented by i bytes, but rather is defined to be the base address incremented by i multiplied by the size of an element that x points to. Thus, x[i] designates the i+1th element of the array.
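For example (illustrative values):

    #include <stdio.h>

    int main(void)
    {
        int a[4] = {10, 20, 30, 40};
        int *x = a;

        /* these three expressions denote the same element, a[2] == 30 */
        printf("%d %d %d\n", x[2], *(x + 2), *(a + 2));
        return 0;
    }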
|
174 |
+
|
175 |
+
Furthermore, in most expression contexts (a notable exception is as operand of sizeof), the name of an array is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C use pass-by-value semantics, arrays are in effect passed by reference.
|
176 |
+
|
177 |
+
The size of an element can be determined by applying the operator sizeof to any dereferenced element of x, as in n = sizeof *x or n = sizeof x[0], and the number of elements in a declared array A can be determined as sizeof A / sizeof A[0]. The latter only applies to array names: variables declared with subscripts (int A[20]). Due to the semantics of C, it is not possible to determine the entire size of arrays through pointers to arrays or those created by dynamic allocation (malloc); code such as sizeof arr / sizeof arr[0] (where arr designates a pointer) will not work since the compiler assumes the size of the pointer itself is being requested.[33][34] Since array name arguments to sizeof are not converted to pointers, they do not exhibit such ambiguity. However, arrays created by dynamic allocation are accessed by pointers rather than true array variables, so they suffer from the same sizeof issues as array pointers.
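A small sketch of the distinction:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int A[20];
        int *arr = malloc(20 * sizeof *arr);

        /* element count of a declared array: total size divided by element size */
        printf("%zu\n", sizeof A / sizeof A[0]);      /* 20 */

        /* for a pointer, sizeof yields the size of the pointer itself, so this
           does NOT give the number of allocated elements (e.g. 2 on a platform
           with 8-byte pointers and 4-byte int) */
        printf("%zu\n", sizeof arr / sizeof arr[0]);

        free(arr);
        return 0;
    }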
|
178 |
+
|
179 |
+
Thus, despite this apparent equivalence between array and pointer variables, there is still a distinction to be made between them. Even though the name of an array is, in most expression contexts, converted into a pointer (to its first element), this pointer does not itself occupy any storage; the array name is not an l-value, and its address is a constant, unlike a pointer variable. Consequently, what an array "points to" cannot be changed, and it is impossible to assign a new address to an array name. Array contents may be copied, however, by using the memcpy function, or by accessing the individual elements.
|
180 |
+
|
181 |
+
One of the most important functions of a programming language is to provide facilities for managing memory and the objects that are stored in memory. C provides three distinct ways to allocate memory for objects:[27] static allocation, in which storage is reserved for the entire run of the program; automatic allocation, in which temporary storage (typically on the stack) is created when a block is entered and released when it is exited; and dynamic allocation, in which storage is requested at run time with library functions such as malloc and released with free.
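A minimal sketch of the three storage strategies (names are illustrative):

    #include <stdlib.h>

    static int counter;              /* static allocation: exists for the whole run,
                                        zero-initialized at program startup          */

    void example(void)
    {
        int local = 42;              /* automatic allocation: created on block entry,
                                        discarded on block exit                       */

        int *heap = malloc(sizeof *heap);   /* dynamic allocation: lives until freed  */
        if (heap != NULL) {
            *heap = local + counter;
            free(heap);              /* must be released explicitly                   */
        }
        counter++;
    }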
|
182 |
+
|
183 |
+
These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three.
|
184 |
+
|
185 |
+
Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile-time, there are many situations in which dynamic allocation is necessary.[27] Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article on malloc for an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. (Static allocation that is too large is usually detected by the linker or loader, before the program can even begin execution.)
|
186 |
+
|
187 |
+
Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whatever bit pattern happens to be present in the storage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but both false positives and false negatives can occur.
|
188 |
+
|
189 |
+
Another issue is that heap memory allocation has to be synchronized with its actual usage in any program in order for it to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before free() is called, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as a memory leak. Conversely, it is possible for memory to be freed but continue to be referenced, leading to unpredictable results. Typically, the symptoms will appear in a portion of the program far removed from the actual error, making it difficult to track down the problem. (Such issues are ameliorated in languages with automatic garbage collection.)
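A tiny illustrative sketch of such a leak (error checks omitted for brevity):

    #include <stdlib.h>

    void leaky(void)
    {
        char *buf = malloc(100);
        buf = malloc(200);   /* the first allocation is now unreachable: a memory leak */
        free(buf);           /* only the second allocation is released                 */
    }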
|
190 |
+
|
191 |
+
The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has a header file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. In order for a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requires compiler flags (e.g., -lm, shorthand for "link the math library").[27]
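For instance, a hedged sketch of using the standard math library, typically compiled with something like cc prog.c -lm on Unix-like systems:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* sqrt is declared in math.h and defined in the math library */
        printf("%f\n", sqrt(2.0));
        return 0;
    }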
|
192 |
+
|
193 |
+
The most common C library is the C standard library, which is specified by the ISO and ANSI C standards and comes with every C implementation (implementations which target limited environments such as embedded systems may provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example, stdio.h) specify the interfaces for these and other standard library facilities.
|
194 |
+
|
195 |
+
Another common set of C library functions are those used by applications specifically targeted for Unix and Unix-like systems, especially functions which provide an interface to the kernel. These functions are detailed in various standards such as POSIX and the Single UNIX Specification.
|
196 |
+
|
197 |
+
Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficient object code; programmers then create interfaces to the library so that the routines can be used from higher-level languages like Java, Perl, and Python.[27]
|
198 |
+
|
199 |
+
File input and output (I/O) is not part of the C language itself but is instead handled by libraries (such as the C standard library) and their associated header files (e.g. stdio.h). File handling is generally implemented through high-level I/O, which works through streams. From this perspective, a stream is a data flow that is independent of any particular device, while a file is a concrete device. High-level I/O is done through the association of a stream with a file. In the C standard library, a buffer (a memory area or queue) is used to store data temporarily before it is sent to its final destination. This reduces the time spent waiting for slower devices, such as a hard drive or solid-state drive. Low-level I/O functions are not part of the standard C library but are generally part of "bare metal" programming (programming that is independent of any operating system, such as most, but not all, embedded programming). With few exceptions, implementations include low-level I/O.
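A brief sketch of stream-based output using the standard library (the file name is illustrative):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("example.txt", "w");   /* associate a stream with a file */
        if (f == NULL)
            return 1;

        fprintf(f, "one line of buffered output\n");  /* data may sit in a buffer...      */
        fclose(f);                                    /* ...until the stream is flushed
                                                         and closed                        */
        return 0;
    }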
|
200 |
+
|
201 |
+
A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler. The tool lint was the first such, leading to many others.
|
202 |
+
|
203 |
+
Automated source code checking and auditing are beneficial in any language, and for C many such tools exist, such as Lint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler. Also, many compilers can optionally warn about syntactically valid constructs that are likely to actually be errors. MISRA C is a proprietary set of guidelines to avoid such questionable code, developed for embedded systems.[35]
|
204 |
+
|
205 |
+
There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such as bounds checking for arrays, detection of buffer overflow, serialization, dynamic memory tracking, and automatic garbage collection.
|
206 |
+
|
207 |
+
Tools such as Purify or Valgrind and linking with libraries containing special versions of the memory allocation functions can help uncover runtime errors in memory usage.
|
208 |
+
|
209 |
+
C is widely used for systems programming in implementing operating systems and embedded system applications,[37] because C code, when written for portability, can be used for most purposes, yet when needed, system-specific code can be used to access specific hardware addresses and to perform type punning to match externally imposed interface requirements, with a low run-time demand on system resources.
|
210 |
+
|
211 |
+
C can also be used for website programming using CGI as a "gateway" for information between the Web application, the server, and the browser.[38] C is often chosen over interpreted languages because of its speed, stability, and near-universal availability.[39]
|
212 |
+
|
213 |
+
One consequence of C's wide availability and efficiency is that compilers, libraries and interpreters of other programming languages are often implemented in C. The reference implementations of Python, Perl and PHP, for example, are all written in C.
|
214 |
+
|
215 |
+
Because the layer of abstraction is thin and the overhead is low, C enables programmers to create efficient implementations of algorithms and data structures, useful for computationally intense programs. For example, the GNU Multiple Precision Arithmetic Library, the GNU Scientific Library, Mathematica, and MATLAB are completely or partially written in C.
|
216 |
+
|
217 |
+
C is sometimes used as an intermediate language by implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of other C-based languages specifically designed for use as intermediate languages, such as C--.
|
218 |
+
|
219 |
+
C has also been widely used to implement end-user applications. However, such applications can also be written in newer, higher-level languages.
|
220 |
+
|
221 |
+
C has both directly and indirectly influenced many later languages such as C#, D, Go, Java, JavaScript, Limbo, LPC, Perl, PHP, Python, and Unix's C shell.[40] The most pervasive influence has been syntactical: all of the languages mentioned combine the statement and (more or less recognizably) expression syntax of C with type systems, data models, and/or large-scale program structures that differ from those of C, sometimes radically.
|
222 |
+
|
223 |
+
Several C or near-C interpreters exist, including Ch and CINT, which can also be used for scripting.
|
224 |
+
|
225 |
+
When object-oriented languages became popular, C++ and Objective-C were two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented as source-to-source compilers; source code was translated into C, and then compiled with a C compiler.[41]
|
226 |
+
|
227 |
+
The C++ programming language was devised by Bjarne Stroustrup as an approach to providing object-oriented functionality with a C-like syntax.[42] C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permits generic programming via templates. Nearly a superset of C, C++ now supports most of C, with a few exceptions.
|
228 |
+
|
229 |
+
Objective-C was originally a very "thin" layer on top of C, and remains a strict superset of C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk.
|
230 |
+
|
231 |
+
In addition to C++ and Objective-C, Ch, Cilk and Unified Parallel C are nearly supersets of C.
|
en/1161.html.txt
ADDED
@@ -0,0 +1,165 @@
1 |
+
|
2 |
+
|
3 |
+
The clarinet is a family of woodwind instruments. It has a single-reed mouthpiece, a straight, cylindrical tube with an almost cylindrical bore, and a flared bell. A person who plays a clarinet is called a clarinetist (sometimes spelled clarinettist).
|
4 |
+
|
5 |
+
While the similarity in sound between the earliest clarinets and the trumpet may hold a clue to its name, other factors may have been involved. During the Late Baroque era, composers such as Bach and Handel were making new demands on the skills of their trumpeters, who were often required to play difficult melodic passages in the high, or as it came to be called, clarion register. Since the trumpets of this time had no valves or pistons, melodic passages would often require the use of the highest part of the trumpet's range, where the harmonics were close enough together to produce scales of adjacent notes as opposed to the gapped scales or arpeggios of the lower register. The trumpet parts that required this specialty were known by the term clarino and this in turn came to apply to the musicians themselves. It is probable that the term clarinet may stem from the diminutive version of the 'clarion' or 'clarino' and it has been suggested that clarino players may have helped themselves out by playing particularly difficult passages on these newly developed "mock trumpets".[1]
|
6 |
+
|
7 |
+
Johann Christoph Denner is generally believed to have invented the clarinet in Germany around the year 1700 by adding a register key to the earlier chalumeau, usually in the key of C. Over time, additional keywork and airtight pads were added to improve the tone and playability.[2]
|
8 |
+
|
9 |
+
In modern times, the most common clarinet is the B♭ clarinet. However, the clarinet in A, just a semitone lower, is regularly used in orchestral, chamber and solo music. An orchestral clarinetist must own both a clarinet in A and B♭ since the repertoire is divided fairly evenly between the two. Since the middle of the 19th century, the bass clarinet (nowadays invariably in B♭ but with extra keys to extend the register down to low written C3) has become an essential addition to the orchestra. The clarinet family ranges from the (extremely rare) BBB♭ octo-contrabass to the A♭ piccolo clarinet. The clarinet has proved to be an exceptionally flexible instrument, used in the classical repertoire as in concert bands, military bands, marching bands, klezmer, jazz, and other styles.
|
10 |
+
|
11 |
+
The word clarinet may have entered the English language via the French clarinette (the feminine diminutive of Old French clarin or clarion), or from Provençal clarin, "oboe".[3]
|
12 |
+
|
13 |
+
It would seem, however, that its real roots are to be found among some of the various names for trumpets used around the Renaissance and Baroque eras. Clarion, clarin, and the Italian clarino are all derived from the medieval term claro, which referred to an early form of trumpet.[4] This is probably the origin of the Italian clarinetto, itself a diminutive of clarino, and consequently of the European equivalents such as clarinette in French or the German Klarinette. According to Johann Gottfried Walther, writing in 1732, the reason for the name is that "it sounded from far off not unlike a trumpet". The English form clarinet is found as early as 1733, and the now-archaic clarionet appears from 1784 until the early years of the 20th century.[5]
|
14 |
+
|
15 |
+
The cylindrical bore is primarily responsible for the clarinet's distinctive timbre, which varies between its three main registers, known as the chalumeau, clarion, and altissimo. The tone quality can vary greatly with the clarinetist, music, instrument, mouthpiece, and reed. The differences in instruments and geographical isolation of clarinetists led to the development from the last part of the 18th century onwards of several different schools of playing. The most prominent were the German/Viennese traditions and French school. The latter was centered on the clarinetists of the Conservatoire de Paris.[6] The proliferation of recorded music has made examples of different styles of playing available. The modern clarinetist has a diverse palette of "acceptable" tone qualities to choose from.
|
16 |
+
|
17 |
+
The A and B♭ clarinets have nearly the same bore and use the same mouthpiece.[7] Orchestral clarinetists using the A and B♭ instruments in a concert could use the same mouthpiece, and often the same barrel (see 'usage' below). The A and B♭ have nearly identical tonal quality, although the A typically has a slightly warmer sound. The tone of the E♭ clarinet is brighter and can be heard even through loud orchestral or concert band textures.[8] The bass clarinet has a characteristically deep, mellow sound, while the alto clarinet is similar in tone to the bass (though not as dark).[9]
|
18 |
+
|
19 |
+
Clarinets have the largest pitch range of common woodwinds.[10] The intricate key organization that makes this possible can make the playability of some passages awkward. The bottom of the clarinet's written range is defined by the keywork on each instrument, standard keywork schemes allowing a low E on the common B♭ clarinet. The lowest concert pitch depends on the transposition of the instrument in question. The nominal highest note of the B♭ clarinet is a semitone higher than the highest note of the oboe but this depends on the setup and skill of the player. Since the clarinet has a wider range of notes, the lowest note of the B♭ clarinet is significantly deeper (a minor or major sixth) than the lowest note of the oboe.[11]
|
20 |
+
|
21 |
+
Nearly all soprano and piccolo clarinets have keywork enabling them to play the E below middle C as their lowest written note (in scientific pitch notation that sounds D3 on a soprano clarinet or C4, i.e. concert middle C, on a piccolo clarinet), though some B♭ clarinets go down to E♭3 to enable them to match the range of the A clarinet.[12] On the B♭ soprano clarinet, the concert pitch of the lowest note is D3, a whole tone lower than the written pitch. Most alto and bass clarinets have an extra key to allow a (written) E♭3. Modern professional-quality bass clarinets generally have additional keywork to written C3.[13] Among the less commonly encountered members of the clarinet family, contra-alto and contrabass clarinets may have keywork to written E♭3, D3, or C3;[14] the basset clarinet and basset horn generally go to low C3.[15]
|
22 |
+
|
23 |
+
Defining the top end of a clarinet's range is difficult, since many advanced players can produce notes well above the highest notes commonly found in method books. G6 is usually the highest note clarinetists encounter in classical repertoire.[16] The C above that (C7 i.e. resting on the fifth ledger line above the treble staff) is attainable by advanced players and is shown on many fingering charts,[16] and fingerings as high as A7 exist.[17][18]
|
24 |
+
|
25 |
+
The range of a clarinet can be divided into three distinct registers: the low chalumeau register, the middle clarion register, and the high altissimo register.
|
26 |
+
|
27 |
+
All three registers have characteristically different sounds. The chalumeau register is rich and dark. The clarion register is brighter and sweet, like a trumpet (clarion) heard from afar. The altissimo register can be piercing and sometimes shrill.
|
28 |
+
|
29 |
+
Sound is a wave that propagates through the air as a result of a local variation in air pressure. The production of sound by a clarinet follows these steps:[20]
|
30 |
+
|
31 |
+
The cycle repeats at a frequency relative to how long it takes a wave to travel to the first open hole and back twice (i.e. four times the length of the pipe). For example: when all the holes bar the very top one are open (i.e. the trill 'B' key is pressed), the note A4 (440 Hz) is produced. This represents a repeat of the cycle 440 times per second.[22]
|
32 |
+
|
33 |
+
In addition to this primary compression wave, other waves, known as harmonics, are created. Harmonics are caused by factors including the imperfect wobbling and shaking of the reed, the reed sealing the mouthpiece opening for part of the wave cycle (which creates a flattened section of the sound wave), and imperfections (bumps and holes) in the bore. A wide variety of compression waves are created, but only some (primarily the odd harmonics) are reinforced. These extra waves are what gives the clarinet its characteristic tone.[23]
|
34 |
+
|
35 |
+
The bore is cylindrical for most of the tube with an inner bore diameter between 14 and 15.5 millimetres (0.55 and 0.61 in), but there is a subtle hourglass shape, with the thinnest part below the junction between the upper and lower joint.[24] The reduction is 1 to 3 millimetres (0.039 to 0.118 in) depending on the maker. This hourglass shape, although invisible to the naked eye, helps to correct the pitch/scale discrepancy between the chalumeau and clarion registers (perfect twelfth).[24] The diameter of the bore affects characteristics such as available harmonics, timbre, and pitch stability (how far the player can bend a note in the manner required in jazz and other music). The bell at the bottom of the clarinet flares out to improve the tone and tuning of the lowest notes.
|
36 |
+
|
37 |
+
Most modern clarinets have "undercut" tone holes that improve intonation and sound. Undercutting means chamfering the bottom edge of tone holes inside the bore. Acoustically, this makes the tone hole function as if it were larger, but its main function is to allow the air column to follow the curve up through the tone hole (surface tension) instead of "blowing past" it under the increasingly directional frequencies of the upper registers.[25]
|
38 |
+
|
39 |
+
The fixed reed and fairly uniform diameter of the clarinet give the instrument an acoustical behavior approximating that of a cylindrical stopped pipe.[20] Recorders use a tapered internal bore to overblow at the octave when the thumb/register hole is pinched open, while the clarinet, with its cylindrical bore, overblows at the twelfth. Adjusting the angle of the bore taper controls the frequencies of the overblown notes (harmonics).[20] Changing the mouthpiece's tip opening and the length of the reed changes aspects of the harmonic timbre or voice of the clarinet because this changes the speed of reed vibrations.[20] Generally, the goal of the clarinetist when producing a sound is to make as much of the reed vibrate as possible, making the sound fuller, warmer, and potentially louder.
|
40 |
+
|
41 |
+
The lip position and pressure, shaping of the vocal tract, choice of reed and mouthpiece, amount of air pressure created, and evenness of the airflow account for most of the clarinetist's ability to control the tone of a clarinet.[26] A highly skilled clarinetist will provide the ideal lip and air pressure for each frequency (note) being produced. They will have an embouchure which places an even pressure across the reed by carefully controlling their lip muscles. The airflow will also be carefully controlled by using the strong stomach muscles (as opposed to the weaker and erratic chest muscles) and they will use the diaphragm to oppose the stomach muscles to achieve a tone softer than a forte rather than weakening the stomach muscle tension to lower air pressure.[27] Their vocal tract will be shaped to resonate at frequencies associated with the tone being produced.
|
42 |
+
|
43 |
+
Covering or uncovering the tone holes varies the length of the pipe, changing the resonant frequencies of the enclosed air column and hence the pitch.[20] A clarinetist moves between the chalumeau and clarion registers through use of the register key; clarinetists call the change from chalumeau register to clarion register "the break".[28] The open register key stops the fundamental frequency from being reinforced, and the reed is forced to vibrate at three times the speed it was originally. This produces a note a twelfth above the original note.
|
44 |
+
|
45 |
+
Most instruments overblow at two times the speed of the fundamental frequency (the octave), but as the clarinet acts as a closed pipe system, the reed cannot vibrate at twice its original speed because it would be creating a 'puff' of air at the time the previous 'puff' is returning as a rarefaction. This means it cannot be reinforced and so would die away. The chalumeau register plays fundamentals, whereas the clarion register, aided by the register key, plays third harmonics (a perfect twelfth higher than the fundamentals). The first several notes of the altissimo range, aided by the register key and venting with the first left-hand hole, play fifth harmonics (a major seventeenth, a perfect twelfth plus a major sixth, above the fundamentals). The clarinet is therefore said to overblow at the twelfth and, when moving to the altissimo register, seventeenth.
|
46 |
+
|
47 |
+
By contrast, nearly all other woodwind instruments overblow at the octave or (like the ocarina and tonette) do not overblow at all. A clarinet must have holes and keys for nineteen notes, a chromatic octave and a half from bottom E to B♭, in its lowest register to play the chromatic scale. This overblowing behavior explains the clarinet's great range and complex fingering system. The fifth and seventh harmonics are also available, sounding a further sixth and fourth (a flat, diminished fifth) higher respectively; these are the notes of the altissimo register.[20] This is also why the inner "waist" measurement is so critical to these harmonic frequencies.
|
48 |
+
|
49 |
+
The highest notes can have a shrill, piercing quality and can be difficult to tune accurately. Different instruments often play differently in this respect due to the sensitivity of the bore and reed measurements. Using alternate fingerings and adjusting the embouchure helps correct the pitch of these notes.
|
50 |
+
|
51 |
+
Since approximately 1850, clarinets have been nominally tuned according to twelve-tone equal temperament. Older clarinets were nominally tuned to meantone. Skilled performers can use their embouchures to considerably alter the tuning of individual notes or produce vibrato, a pulsating change of pitch often employed in jazz.[29] Vibrato is rare in classical or concert band literature; however, certain clarinetists, such as Richard Stoltzman, use vibrato in classical music. Special fingerings may be used to play quarter tones and other microtonal intervals.[30]
|
52 |
+
|
53 |
+
Around 1900, Dr. Richard H. Stein, a Berlin musicologist, made a quarter-tone clarinet, which was soon abandoned.[31][32] Years later, another German, Fritz Schüller of Markneukirchen, built a quarter tone clarinet, with two parallel bores of slightly different lengths whose tone holes are operated using the same keywork and a valve to switch from one bore to the other.
|
54 |
+
|
55 |
+
Clarinet bodies have been made from a variety of materials including wood, plastic, hard rubber, metal, resin, and ivory.[33] The vast majority of clarinets used by professionals are made from African hardwood, mpingo (African Blackwood) or grenadilla, rarely (because of diminishing supplies) Honduran rosewood, and sometimes even cocobolo.[34] Historically other woods, notably boxwood, were used.[34] Most inexpensive clarinets are made of plastic resin, such as ABS.[34] Resonite is Selmer's trademark name for its type of plastic. Metal soprano clarinets were popular in the early 20th century until plastic instruments supplanted them;[35] metal construction is still used for the bodies of some contra-alto and contrabass clarinets and the necks and bells of nearly all alto and larger clarinets.[36] Ivory was used for a few 18th-century clarinets, but it tends to crack and does not keep its shape well.[37] Buffet Crampon's Greenline clarinets are made from a composite of grenadilla wood powder and carbon fiber.[38] Such clarinets are less affected by humidity and temperature changes than wooden instruments but are heavier. Hard rubber, such as ebonite, has been used for clarinets since the 1860s, although few modern clarinets are made of it. Clarinet designers Alastair Hanson and Tom Ridenour are strong advocates of hard rubber.[39][40] The Hanson Clarinet Company manufactures clarinets using a grenadilla compound reinforced with ebonite, known as BTR (bithermal-reinforced) grenadilla. This material is also not affected by humidity, and the weight is the same as that of a wooden clarinet.
|
56 |
+
|
57 |
+
Mouthpieces are generally made of hard rubber, although some inexpensive mouthpieces may be made of plastic. Other materials such as crystal/glass, wood, ivory, and metal have also been used.[41] Ligatures are often made of metal and plated in nickel, silver, or gold. Other materials include wire, wire mesh, plastic, naugahyde, string, or leather.[28]
|
58 |
+
|
59 |
+
The clarinet uses a single reed made from the cane of Arundo donax, a type of grass.[42] Reeds may also be manufactured from synthetic materials. The ligature fastens the reed to the mouthpiece. When air is blown through the opening between the reed and the mouthpiece facing, the reed vibrates and produces the clarinet's sound.
|
60 |
+
|
61 |
+
Basic reed measurements are as follows: tip, 12 millimetres (0.47 in) wide; lay, 15 millimetres (0.59 in) long (distance from the place where the reed touches the mouthpiece to the tip); gap, 1 millimetre (0.039 in) (distance between the underside of the reed tip and the mouthpiece). Adjustment to these measurements is one method of affecting tone color.[24]
|
62 |
+
|
63 |
+
Most clarinetists buy manufactured reeds, although many make adjustments to these reeds, and some make their own reeds from cane "blanks".[43] Reeds come in varying degrees of hardness, generally indicated on a scale from one (soft) through five (hard). This numbering system is not standardized—reeds with the same number often vary in hardness across manufacturers and models.[28] Reed and mouthpiece characteristics work together to determine ease of playability, pitch stability, and tonal characteristics.[28]
|
64 |
+
|
65 |
+
Note: A Böhm system soprano clarinet is shown in the photos illustrating this section. However, all modern clarinets have similar components.
|
66 |
+
|
67 |
+
The reed is attached to the mouthpiece by the ligature, and the top half-inch or so of this assembly is held in the player's mouth. In the past, clarinetists used to wrap a string around the mouthpiece and reed instead of using a ligature. The formation of the mouth around the mouthpiece and reed is called the embouchure.
|
68 |
+
|
69 |
+
The reed is on the underside of the mouthpiece, pressing against the player's lower lip, while the top teeth normally contact the top of the mouthpiece (some players roll the upper lip under the top teeth to form what is called a 'double-lip' embouchure).[44] Adjustments in the strength and shape of the embouchure change the tone and intonation (tuning). It is not uncommon for clarinetists to employ methods to relieve the pressure on the upper teeth and inner lower lip by attaching pads to the top of the mouthpiece or putting (temporary) padding on the front lower teeth, commonly from folded paper.[45]
|
70 |
+
|
71 |
+
Next is the short barrel; this part of the instrument may be extended to fine-tune the clarinet. As the pitch of the clarinet is fairly temperature-sensitive, some instruments have interchangeable barrels whose lengths vary slightly. Additional compensation for pitch variation and tuning can be made by pulling out the barrel and thus increasing the instrument's length, particularly common in group playing in which clarinets are tuned to other instruments (such as in an orchestra or concert band). Some performers use a plastic barrel with a thumbwheel that adjusts the barrel length. On basset horns and lower clarinets, the barrel is normally replaced by a curved metal neck.
|
72 |
+
|
73 |
+
The main body of most clarinets is divided into the upper joint, the holes and most keys of which are operated by the left hand, and the lower joint with holes and most keys operated by the right hand. Some clarinets have a single joint: on some basset horns and larger clarinets the two joints are held together with a screw clamp and are usually not disassembled for storage. The left thumb operates both a tone hole and the register key. On some models of clarinet, such as many Albert system clarinets and increasingly some higher-end Böhm system clarinets, the register key is a 'wraparound' key, with the key on the back of the clarinet and the pad on the front. Advocates of the wraparound register key say it improves sound, and it is harder for moisture to accumulate in the tube beneath the pad.[46] Nevertheless, there is a consensus among repair techs that this type of register key is harder to keep in adjustment, i.e., it is hard to have enough spring pressure to close the hole securely.[47]
|
74 |
+
|
75 |
+
The body of a modern soprano clarinet is equipped with numerous tone holes of which seven (six front, one back) are covered with the fingertips, and the rest are opened or closed using a set of keys. These tone holes let the player produce every note of the chromatic scale. On alto and larger clarinets, and a few soprano clarinets, key-covered holes replace some or all finger holes. The most common system of keys was named the Böhm system by its designer Hyacinthe Klosé in honour of flute designer Theobald Böhm, but it is not the same as the Böhm system used on flutes.[48] The other main system of keys is called the Oehler system and is used mostly in Germany and Austria (see History).[49] The related Albert system is used by some jazz, klezmer, and eastern European folk musicians.[50] The Albert and Oehler systems are both based on the early Mueller system.
|
76 |
+
|
77 |
+
The cluster of keys at the bottom of the upper joint (protruding slightly beyond the cork of the joint) are known as the trill keys and are operated by the right hand.[28] These give the player alternative fingerings that make it easy to play ornaments and trills.[28] The entire weight of the smaller clarinets is supported by the right thumb behind the lower joint on what is called the thumb-rest.[51] Basset horns and larger clarinets are supported with a neck strap or a floor peg.
|
78 |
+
|
79 |
+
Finally, the flared end is known as the bell. Contrary to popular belief, the bell does not amplify the sound; rather, it improves the uniformity of the instrument's tone for the lowest notes in each register.[20] For the other notes, the sound is produced almost entirely at the tone holes, and the bell is irrelevant.[20] On basset horns and larger clarinets, the bell curves up and forward and is usually made of metal.[52]
|
80 |
+
|
81 |
+
Theobald Böhm did not directly invent the key system of the clarinet. Böhm was a flautist who created the key system that is now used for the transverse flute. Klosé and Buffet applied Böhm's system to the clarinet. Although the credit goes to those people, Böhm's name was given to that key system because it was based on that used for the flute.[53]
|
82 |
+
|
83 |
+
The current Böhm key system consists of generally 6 rings, on the thumb, 1st, 2nd, 4th, 5th, and 6th holes, and a register key just above the thumb hole, easily accessible with the thumb. Above the 1st hole, there is a key that lifts two covers creating the note A in the throat register (high part of low register) of the clarinet. A key at the side of the instrument at the same height as the A key lifts only one of the two covers, producing G♯, a semitone lower. The A key can be used in conjunction solely with the register key to produce A♯/B♭.
|
84 |
+
|
85 |
+
The clarinet has its roots in the early single-reed instruments or hornpipes used in Ancient Greece, Ancient Egypt,[54] Middle East, and Europe since the Middle Ages, such as the albogue, alboka, and double clarinet.[55]
|
86 |
+
|
87 |
+
The modern clarinet developed from a Baroque instrument called the chalumeau. This instrument was similar to a recorder, but with a single-reed mouthpiece and a cylindrical bore.[56] Lacking a register key, it was played mainly in its fundamental register, with a limited range of about one and a half octaves.[56] It had eight finger holes, like a recorder, and two keys for its two highest notes.[56] At this time, contrary to modern practice, the reed was placed in contact with the upper lip.[56]
|
88 |
+
|
89 |
+
Around the turn of the 18th century, the chalumeau was modified by converting one of its keys into a register key to produce the first clarinet. This development is usually attributed to German instrument maker Johann Christoph Denner, though some have suggested his son Jacob Denner was the inventor.[57] This instrument played well in the middle register with a loud, shrill sound, so it was given the name clarinetto meaning "little trumpet" (from clarino + -etto). Early clarinets did not play well in the lower register, so players continued to play the chalumeaux for low notes.[56] As clarinets improved, the chalumeau fell into disuse, and these notes became known as the chalumeau register. Original Denner clarinets had two keys, and could play a chromatic scale, but various makers added more keys to get improved tuning, easier fingerings, and a slightly larger range.[56] The classical clarinet of Mozart's day typically had eight finger holes and five keys.
|
90 |
+
|
91 |
+
Clarinets were soon accepted into orchestras. Later models had a mellower tone than the originals. Mozart (d. 1791) liked the sound of the clarinet (he considered its tone the closest in quality to the human voice) and wrote numerous pieces for the instrument,[58] and by the time of Beethoven (c. 1800–1820), the clarinet was a standard fixture in the orchestra.
|
92 |
+
|
93 |
+
The next major development in the history of clarinet was the invention of the modern pad.[59] Because early clarinets used felt pads to cover the tone holes, they leaked air. This required pad-covered holes to be kept to a minimum, restricting the number of notes the clarinet could play with good tone.[59] In 1812, Iwan Müller, a Baltic German community-born clarinetist and inventor, developed a new type of pad that was covered in leather or fish bladder.[31] It was airtight and let makers increase the number of pad-covered holes. Müller designed a new type of clarinet with seven finger holes and thirteen keys.[31] This allowed the instrument to play in any key with near-equal ease. Over the course of the 19th-century, makers made many enhancements to Müller's clarinet, such as the Albert system and the Baermann system, all keeping the same basic design. Modern instruments may also have cork or synthetic pads.[60]
|
94 |
+
|
95 |
+
The final development in the modern design of the clarinet used in most of the world today was introduced by Hyacinthe Klosé in 1839.[61] He devised a different arrangement of keys and finger holes, which allow simpler fingering. It was inspired by the Boehm system developed for flutes by Theobald Böhm. Klosé was so impressed by Böhm's invention that he named his own system for clarinets the Boehm system, although it is different from the one used on flutes.[61] This new system was slow to gain popularity but gradually became the standard, and today the Boehm system is used everywhere in the world except Germany and Austria. These countries still use a direct descendant of the Mueller clarinet known as the Oehler system clarinet.[62][63] Also, some contemporary Dixieland players continue to use Albert system clarinets.[64]
|
96 |
+
|
97 |
+
Other key systems have been developed, many built around modifications to the basic Böhm system: Full Böhm,[65] Mazzeo,[66] McIntyre,[67] Benade NX,[68] and the Reform Boehm system [69] for example. Each of these addressed—and often improved—issues of particular "weak" tones, or simplified awkward fingerings, but none has caught on widely among players, and the Boehm system remains the standard, to date.
|
98 |
+
|
99 |
+
The modern orchestral standard of using soprano clarinets in B♭ and A has to do partly with the history of the instrument and partly with acoustics, aesthetics, and economics. Before about 1800, due to the lack of airtight pads (see History), practical woodwinds could have only a few keys to control accidentals (notes outside their diatonic home scales).[59] The low (chalumeau) register of the clarinet spans a twelfth (an octave plus a perfect fifth), so the clarinet needs keys/holes to produce all nineteen notes in this range. This involves more keywork than on instruments that "overblow" at the octave—oboes, flutes, bassoons, and saxophones, for example, which need only twelve notes before overblowing. Clarinets with few keys cannot therefore easily play chromatically, limiting any such instrument to a few closely related keys.[70] For example, an eighteenth-century clarinet in C could be played in F, C, and G (and their relative minors) with good intonation, but with progressive difficulty and poorer intonation as the key moved away from this range.[70] In contrast, for octave-overblowing instruments, an instrument in C with few keys could much more readily be played in any key. This problem was overcome by using three clarinets—in A, B♭, and C—so that early 19th-century music, which rarely strayed into the remote keys (five or six sharps or flats), could be played as follows: music in 5 to 2 sharps (B major to D major concert pitch) on A clarinet (D major to F major for the player), music in 1 sharp to 1 flat (G to F) on C clarinet, and music in 2 flats to 4 flats (B♭ to A♭) on the B♭ clarinet (C to B♭ for the clarinetist). Difficult key signatures and numerous accidentals were thus largely avoided.
|
100 |
+
|
101 |
+
With the invention of the airtight pad, and as key technology improved and more keys were added to woodwinds, the need for clarinets in multiple keys was reduced.[71] However, the use of multiple instruments in different keys persisted, with the three instruments in C, B♭, and A all used as specified by the composer.
|
102 |
+
|
103 |
+
The lower-pitched clarinets sound "mellower" (less bright), and the C clarinet—being the highest and therefore brightest of the three—fell out of favour as the other two could cover its range and their sound was considered better.[70] While the clarinet in C began to fall out of general use around 1850, some composers continued to write C parts after this date, e.g., Bizet's Symphony in C (1855), Tchaikovsky's Symphony No. 2 (1872), Smetana's overture to The Bartered Bride (1866) and Má Vlast (1874), Dvořák's Slavonic Dance Op. 46, No. 1 (1878), Brahms' Symphony No. 4 (1885), Mahler's Symphony No. 6 (1906), and Richard Strauss deliberately reintroduced it[clarification needed] to take advantage of its brighter tone, as in Der Rosenkavalier (1911).[72]
|
104 |
+
|
105 |
+
While technical improvements and an equal-tempered scale reduced the need for two clarinets, the technical difficulty of playing in remote keys persisted, and the A has thus remained a standard orchestral instrument. In addition, by the late 19th century, the orchestral clarinet repertoire contained so much music for clarinet in A that the disuse of this instrument was not practical.[71] Attempts were made to standardise to the B♭ instrument between 1930 and 1950 (e.g., tutors recommended learning routine transposition of orchestral A parts on the B♭ clarinet, including solos written for A clarinet, and some manufacturers provided a low E♭ on the B♭ to match the range of the A), but this failed in the orchestral sphere.
|
106 |
+
|
107 |
+
Similarly there have been E♭ and D instruments in the upper soprano range, B♭, A, and C instruments in the bass range, and so forth; but over time the E♭ and B♭ instruments have become predominant.[73] The B♭ instrument remains dominant in concert bands and jazz. B♭ and C instruments are used in some ethnic traditions, such as klezmer.
|
108 |
+
|
109 |
+
In classical music, clarinets are part of standard orchestral and concert band instrumentation.
|
110 |
+
The orchestra frequently includes two clarinetists playing individual parts—each player is usually equipped with a pair of standard clarinets in B♭ and A, and clarinet parts commonly alternate between B♭ and A instruments several times over the course of a piece, or less commonly, a movement (e.g., 1st movement Brahms' 3rd symphony).[74] Clarinet sections grew larger during the last few decades of the 19th century, often employing a third clarinetist, an E♭ or a bass clarinet. In the 20th century, composers such as Igor Stravinsky, Richard Strauss, Gustav Mahler, and Olivier Messiaen enlarged the clarinet section on occasion to up to nine players, employing many different clarinets including the E♭ or D soprano clarinets, basset horn, alto clarinet, bass clarinet, and/or contrabass clarinet.
|
111 |
+
|
112 |
+
In concert bands, clarinets are an important part of the instrumentation. The E♭ clarinet, B♭ clarinet, alto clarinet, bass clarinet, and contra-alto/contrabass clarinet are commonly used in concert bands. Concert bands generally have multiple B♭ clarinets; there are commonly 3 B♭ clarinet parts with 2–3 players per part. There is generally only one player per part on the other clarinets. There are not always E♭ clarinet, alto clarinet, and contra-alto clarinets/contrabass clarinet parts in concert band music, but all three are quite common.
This practice of using a variety of clarinets to achieve coloristic variety was common in 20th-century classical music and continues today. However, many clarinetists and conductors prefer to play parts originally written for obscure instruments on B♭ or E♭ clarinets, which are often of better quality and more prevalent and accessible.[74]
The clarinet is widely used as a solo instrument. The relatively late evolution of the clarinet (when compared to other orchestral woodwinds) has left solo repertoire from the Classical period and later, but few works from the Baroque era.[73] Many clarinet concertos have been written to showcase the instrument, with the concerti by Mozart, Copland, and Weber being well known.
Many works of chamber music have also been written for the clarinet. Common combinations are:
The clarinet was originally a central instrument in jazz, beginning with the New Orleans players in the 1910s. It remained a signature instrument of jazz music through much of the big band era into the 1940s.[73] American players Alphonse Picou, Larry Shields, Jimmie Noone, Johnny Dodds, and Sidney Bechet were all pioneers of the instrument in jazz. The B♭ soprano was the most common instrument, but a few early jazz musicians such as Louis Nelson Delisle and Alcide Nunez preferred the C soprano, and many New Orleans jazz brass bands have used E♭ soprano.[73]
Swing clarinetists such as Benny Goodman, Artie Shaw, and Woody Herman led successful big bands and smaller groups from the 1930s onward.[81] Duke Ellington, active from the 1920s to the 1970s, used the clarinet as lead instrument in his works, with several players of the instrument (Barney Bigard, Jimmy Hamilton, and Russell Procope) spending a significant portion of their careers in his orchestra. Harry Carney, primarily Ellington's baritone saxophonist, occasionally doubled on bass clarinet. Meanwhile, Pee Wee Russell had a long and successful career in small groups.
With the decline of the big bands' popularity in the late 1940s, the clarinet faded from its prominent position in jazz. By that time, an interest in Dixieland or traditional New Orleans jazz had revived; Pete Fountain was one of the best known performers in this genre.[82] Bob Wilber, active since the 1950s, is a more eclectic jazz clarinetist, playing in several classic jazz styles.[83] During the 1950s and 1960s, Britain underwent a surge in the popularity of what was termed 'Trad jazz'. In 1956 the British clarinetist Acker Bilk founded his own ensemble.[84] Several singles recorded by Bilk reached the British pop charts, including the ballad "Stranger on the Shore".
The clarinet's place in the jazz ensemble was usurped by the saxophone, which projects a more powerful sound and uses a less complicated fingering system.[85] The requirement for an increased speed of execution in modern jazz also did not favour the clarinet, but the clarinet did not entirely disappear. The clarinetist Stan Hasselgård made a transition from swing to bebop in the mid-1940s. A few players such as Buddy DeFranco, Tony Scott, and Jimmy Giuffre emerged during the 1950s playing bebop or other styles. A little later, Eric Dolphy (on bass clarinet), Perry Robinson, John Carter, Theo Jörgensmann, and others used the clarinet in free jazz. The French composer and clarinetist Jean-Christian Michel initiated a jazz-classical cross-over on the clarinet with the drummer Kenny Clarke.
In the U.S., the prominent players on the instrument since the 1980s have included Eddie Daniels, Don Byron, Marty Ehrlich, Ken Peplowski, and others playing the clarinet in more contemporary contexts.[86]
The clarinet is uncommon, but not unheard of, in rock music. Jerry Martini played clarinet on Sly and the Family Stone's 1968 hit, "Dance to the Music"; Don Byron, a founder of the Black Rock Coalition who was a member of hard rock guitarist Vernon Reid's band, plays clarinet on the Mistaken Identity album (1996). The Beatles, Pink Floyd, Radiohead, Aerosmith, Billy Joel, and Tom Waits have also all used clarinet on occasion.[87] A clarinet is prominently featured for two different solos in "Breakfast in America", the title song from the Supertramp album of the same name.[88]
Clarinets feature prominently in klezmer music, which entails a distinctive style of playing.[89] The use of quarter-tones requires a different embouchure.[73] Some klezmer musicians prefer Albert system clarinets.[37]
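For a rough sense of what a quarter-tone amounts to in equal-tempered terms, the short calculation below may help; it is a generic reference computation (a quarter-tone taken as exactly 50 cents), not a description of klezmer intonation practice, and the A4 = 440 Hz reference is simply the usual convention.

```python
# Generic reference calculation (not a description of klezmer practice):
# a quarter-tone is 50 cents, i.e. half of an equal-tempered semitone.
ratio = 2 ** (50 / 1200)  # frequency ratio of 50 cents
print(f"quarter-tone ratio: {ratio:.5f}")                           # ~1.02930
print(f"A4 (440 Hz) raised a quarter-tone: {440 * ratio:.2f} Hz")   # ~452.89 Hz
```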
The popular Brazilian music styles of choro and samba use the clarinet.[90] Prominent contemporary players include Paulo Moura, Naylor 'Proveta' Azevedo, Paulo Sérgio dos Santos, and the Cuban-born Paquito D'Rivera.
Although it was adopted relatively recently into Albanian folk music (around the 18th century), the clarinet, or gërneta as it is called, is one of the most important instruments in Albania, especially in the central and southern areas.[91] The clarinet plays a crucial role in saze (folk) ensembles that perform at weddings and other celebrations.[92] The kaba, an instrumental Albanian isopolyphony included in UNESCO's intangible cultural heritage list,[93] is characteristic of these ensembles.[94] Prominent Albanian clarinet players include Selim Leskoviku, Gaqo Lena, Remzi Lela (Çobani), Laver Bariu (Ustai),[95] and Nevruz Nure (Lulushi i Korçës).[96]
The clarinet is also prominent in Bulgarian wedding music, an offshoot of Roma traditional music.[97] Ivo Papazov is a well-known clarinetist in this genre. In Moravian dulcimer bands, the clarinet is usually the only wind instrument among string instruments.[98]
In old-town folk music in North Macedonia (called čalgija ("чалгија")), the clarinet has the most important role in wedding music; clarinet solos mark the high point of dancing euphoria.[99][100] One of the most renowned Macedonian clarinet players is Tale Ognenovski, who gained worldwide fame for his virtuosity.[101]
In Greece, the clarinet (usually referred to as "κλαρίνο"—"clarino") is prominent in traditional music, especially in central, northwest, and northern Greece (Thessaly, Epirus, and Macedonia).[102] The double-reed zurna was the dominant woodwind instrument before the clarinet arrived in the country, although many Greeks regard the clarinet as a native instrument.[37] Traditional dance music, wedding music, and laments include a clarinet soloist and quite often improvisations.[102] Petroloukas Chalkias is a famous clarinetist in this genre.
The instrument is equally famous in Turkey, especially the lower-pitched clarinet in G. The western European clarinet crossed via Turkey to Arabic music, where it is widely used in Arabic pop, especially if the intention of the arranger is to imitate the Turkish style.[37]
A clarinet-like woodwind instrument, the sipsi, is also used in Turkish folk music. However, it is far rarer than the soprano clarinet and is mainly limited to the folk music of the Aegean Region.
Groups of clarinets playing together have become increasingly popular among clarinet enthusiasts in recent years. Common forms are:
Clarinet choirs and quartets often play arrangements of both classical and popular music, in addition to a body of literature specially written for a combination of clarinets by composers such as Arnold Cooke, Alfred Uhl, Lucien Caillet, and Václav Nelhýbel.[105]
There is a family of many differently pitched clarinet types, some of which are very rare. The following are the most important sizes, from highest to lowest:
(Sopranino clarinet in E♭)
(Sopranino clarinet in D)
(Soprano clarinet in C)
(Soprano clarinet in B♭)
(Soprano clarinet in A)
EEE♭ and BBB♭ octocontra-alto and octocontrabass clarinets have also been built.[113] There have also been soprano clarinets in C, A, and B♭ with curved barrels and bells marketed under the names saxonette, claribel, and clariphon.
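For reference, the common sizes above can be summarised as a simple lookup table. The sketch below is illustrative only; the offsets given are the usual nominal transpositions in semitones relative to written pitch, and details of range and construction vary by maker and model.

```python
# Illustrative lookup table for the clarinet family, ordered from highest- to
# lowest-sounding. Offsets are nominal transpositions in semitones relative to
# written pitch (negative = sounds lower than written).
CLARINET_FAMILY = [
    ("Sopranino clarinet in E♭",  +3),
    ("Sopranino clarinet in D",   +2),
    ("Soprano clarinet in C",      0),
    ("Soprano clarinet in B♭",    -2),
    ("Soprano clarinet in A",     -3),
    ("Basset horn in F",          -7),
    ("Alto clarinet in E♭",       -9),
    ("Bass clarinet in B♭",      -14),
]

for instrument, offset in CLARINET_FAMILY:
    if offset == 0:
        print(f"{instrument}: sounds as written")
    else:
        direction = "higher" if offset > 0 else "lower"
        print(f"{instrument}: sounds {abs(offset)} semitones {direction} than written")
```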
en/1162.html.txt
ADDED
@@ -0,0 +1,70 @@
In biology, taxonomy (from Ancient Greek τάξις (taxis), meaning 'arrangement', and -νομία (-nomia), meaning 'method') is the science of naming, defining (circumscribing) and classifying groups of biological organisms on the basis of shared characteristics. Organisms are grouped together into taxa (singular: taxon) and these groups are given a taxonomic rank; groups of a given rank can be aggregated to form a super-group of higher rank, thus creating a taxonomic hierarchy. The principal ranks in modern use are domain, kingdom, phylum (division is sometimes used in botany in place of phylum), class, order, family, genus, and species. The Swedish botanist Carl Linnaeus is regarded as the founder of the current system of taxonomy, as he developed a system known as Linnaean taxonomy for categorizing organisms and binomial nomenclature for naming organisms.
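The rank hierarchy can be illustrated with a small data-structure sketch. The example classification below (the Asian elephant, Elephas maximus) follows the principal ranks listed above; the function and variable names are ours, chosen only for illustration.

```python
# Minimal sketch: the principal Linnaean ranks as an ordered list, and one
# example classification (the Asian elephant) stored as a rank -> taxon map.
RANKS = ["domain", "kingdom", "phylum", "class", "order", "family", "genus", "species"]

asian_elephant = {
    "domain":  "Eukaryota",
    "kingdom": "Animalia",
    "phylum":  "Chordata",
    "class":   "Mammalia",
    "order":   "Proboscidea",
    "family":  "Elephantidae",
    "genus":   "Elephas",
    "species": "Elephas maximus",
}

def lineage(classification):
    """Return the taxa from most to least inclusive rank."""
    return [classification[rank] for rank in RANKS if rank in classification]

print(" > ".join(lineage(asian_elephant)))
```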
With the advent of such fields of study as phylogenetics, cladistics, and systematics, the Linnaean system has progressed to a system of modern biological classification based on the evolutionary relationships between organisms, both living and extinct.
The exact definition of taxonomy varies from source to source, but the core of the discipline remains: the conception, naming, and classification of groups of organisms.[1] As points of reference, recent definitions of taxonomy are presented below:
The varied definitions either place taxonomy as a sub-area of systematics (definition 2), invert that relationship (definition 6), or appear to consider the two terms synonymous. There is some disagreement as to whether biological nomenclature is considered a part of taxonomy (definitions 1 and 2), or a part of systematics outside taxonomy.[8] For example, definition 6 is paired with the following definition of systematics that places nomenclature outside taxonomy:[6]
A whole set of terms including taxonomy, systematic biology, systematics, biosystematics, scientific classification, biological classification, and phylogenetics have at times had overlapping meanings – sometimes the same, sometimes slightly different, but always related and intersecting.[1][9] The broadest meaning of "taxonomy" is used here. The term itself was introduced in 1813 by de Candolle, in his Théorie élémentaire de la botanique.[10]
A taxonomic revision or taxonomic review is a novel analysis of the variation patterns in a particular taxon. This analysis may be executed on the basis of any combination of the various available kinds of characters, such as morphological, anatomical, palynological, biochemical and genetic. A monograph or complete revision is a revision that is comprehensive for a taxon for the information given at a particular time, and for the entire world. Other (partial) revisions may be restricted in the sense that they may only use some of the available character sets or have a limited spatial scope. A revision results in a confirmation of, or new insights into, the relationships between the subtaxa within the taxon under study, which may result in a change in the classification of these subtaxa, the identification of new subtaxa, or the merger of previous subtaxa.[11]
The term "alpha taxonomy" is primarily used today to refer to the discipline of finding, describing, and naming taxa, particularly species.[12] In earlier literature, the term had a different meaning, referring to morphological taxonomy, and the products of research through the end of the 19th century.[13]
William Bertram Turrill introduced the term "alpha taxonomy" in a series of papers published in 1935 and 1937 in which he discussed the philosophy and possible future directions of the discipline of taxonomy.[14]
… there is an increasing desire amongst taxonomists to consider their problems from wider viewpoints, to investigate the possibilities of closer co-operation with their cytological, ecological and genetics colleagues and to acknowledge that some revision or expansion, perhaps of a drastic nature, of their aims and methods, may be desirable … Turrill (1935) has suggested that while accepting the older invaluable taxonomy, based on structure, and conveniently designated "alpha", it is possible to glimpse a far-distant taxonomy built upon as wide a basis of morphological and physiological facts as possible, and one in which "place is found for all observational and experimental data relating, even if indirectly, to the constitution, subdivision, origin, and behaviour of species and other taxonomic groups". Ideals can, it may be said, never be completely realized. They have, however, a great value of acting as permanent stimulants, and if we have some, even vague, ideal of an "omega" taxonomy we may progress a little way down the Greek alphabet. Some of us please ourselves by thinking we are now groping in a "beta" taxonomy.[14]
Turrill thus explicitly excludes from alpha taxonomy various areas of study that he includes within taxonomy as a whole, such as ecology, physiology, genetics, and cytology. He further excludes phylogenetic reconstruction from alpha taxonomy (pp. 365–366).
Later authors have used the term in a different sense, to mean the delimitation of species (not subspecies or taxa of other ranks), using whatever investigative techniques are available, and including sophisticated computational or laboratory techniques.[15][12] Thus, Ernst Mayr in 1968 defined "beta taxonomy" as the classification of ranks higher than species.[16]
An understanding of the biological meaning of variation and of the evolutionary origin of groups of related species is even more important for the second stage of taxonomic activity, the sorting of species into groups of relatives ("taxa") and their arrangement in a hierarchy of higher categories. This activity is what the term classification denotes; it is also referred to as "beta taxonomy".
How species should be defined in a particular group of organisms gives rise to practical and theoretical problems that are referred to as the species problem. The scientific work of deciding how to define species has been called microtaxonomy.[17][18][12] By extension, macrotaxonomy is the study of groups at the higher taxonomic ranks subgenus and above.[12]
While some descriptions of taxonomic history attempt to date taxonomy to ancient civilizations, a truly scientific attempt to classify organisms did not occur until the 18th century. Earlier works were primarily descriptive and focused on plants that were useful in agriculture or medicine. There are a number of stages in this scientific thinking. Early taxonomy was based on arbitrary criteria, the so-called "artificial systems", including Linnaeus's system of sexual classification. Later came systems based on a more complete consideration of the characteristics of taxa, referred to as "natural systems", such as those of de Jussieu (1789), de Candolle (1813) and Bentham and Hooker (1862–1863). These were pre-evolutionary in thinking. The publication of Charles Darwin's On the Origin of Species (1859) led to new ways of thinking about classification based on evolutionary relationships. This was the concept of phyletic systems, from 1883 onwards. This approach was typified by those of Eichler (1883) and Engler (1886–1892). The advent of molecular genetics and statistical methodology allowed the creation of the modern era of "phylogenetic systems" based on cladistics, rather than morphology alone.[19][page needed][20][page needed][21][page needed]
Naming and classifying our surroundings has probably been taking place as long as mankind has been able to communicate. It would always have been important to know the names of poisonous and edible plants and animals in order to communicate this information to other members of the family or group. Medicinal plant illustrations show up in Egyptian wall paintings from c. 1500 BC, indicating that the uses of different species were understood and that a basic taxonomy was in place.[22]
Organisms were first classified by Aristotle (Greece, 384–322 BC) during his stay on the Island of Lesbos.[23][24][25] He classified beings by their parts, or in modern terms attributes, such as having live birth, having four legs, laying eggs, having blood, or being warm-bodied.[26] He divided all living things into two groups: plants and animals.[24] Some of his groups of animals, such as Anhaima (animals without blood, translated as invertebrates) and Enhaima (animals with blood, roughly the vertebrates), as well as groups like the sharks and cetaceans, are still commonly used today.[27] His student Theophrastus (Greece, 370–285 BC) carried on this tradition, mentioning some 500 plants and their uses in his Historia Plantarum. Again, several plant groups currently still recognized can be traced back to Theophrastus, such as Cornus, Crocus, and Narcissus.[24]
Taxonomy in the Middle Ages was largely based on the Aristotelian system,[26] with additions concerning the philosophical and existential order of creatures. This included concepts such as the Great chain of being in the Western scholastic tradition,[26] again deriving ultimately from Aristotle. The Aristotelian system did not classify plants or fungi, due to the lack of microscopes at the time,[25] as his ideas were based on arranging the complete world in a single continuum, as per the scala naturae (the Natural Ladder).[24] This, too, was taken into consideration in the Great chain of being.[24] Advances were made by scholars such as Procopius, Timotheos of Gaza, Demetrios Pepagomenos, and Thomas Aquinas. Medieval thinkers used abstract philosophical and logical categorizations more suited to abstract philosophy than to pragmatic taxonomy.[24]
During the Renaissance, the Age of Reason, and the Enlightenment, categorizing organisms became more prevalent,[24] and taxonomic works became ambitious enough to replace the ancient texts. This is sometimes credited to the development of sophisticated optical lenses, which allowed the morphology of organisms to be studied in much greater detail. One of the earliest authors to take advantage of this leap in technology was the Italian physician Andrea Cesalpino (1519–1603), who has been called "the first taxonomist".[28] His magnum opus De Plantis came out in 1583, and described more than 1500 plant species.[29][30] Two large plant families that he first recognized are still in use today: the Asteraceae and Brassicaceae.[31] Then in the 17th century John Ray (England, 1627–1705) wrote many important taxonomic works.[25] Arguably his greatest accomplishment was Methodus Plantarum Nova (1682),[32] in which he published details of over 18,000 plant species. At the time, his classifications were perhaps the most complex yet produced by any taxonomist, as he based his taxa on many combined characters. The next major taxonomic works were produced by Joseph Pitton de Tournefort (France, 1656–1708).[33] His work from 1700, Institutiones Rei Herbariae, included more than 9000 species in 698 genera, which directly influenced Linnaeus, as it was the text he used as a young student.[22]
The Swedish botanist Carl Linnaeus (1707–1778)[26] ushered in a new era of taxonomy. With his major works Systema Naturae 1st Edition in 1735,[34] Species Plantarum in 1753,[35] and Systema Naturae 10th Edition,[36] he revolutionized modern taxonomy. His works implemented a standardized binomial naming system for animal and plant species,[37] which proved to be an elegant solution to a chaotic and disorganized taxonomic literature. He not only introduced the standard of class, order, genus, and species, but also made it possible to identify plants and animals from his book, by using the smaller parts of the flower.[37] Thus the Linnaean system was born, and is still used in essentially the same way today as it was in the 18th century.[37] Currently, plant and animal taxonomists regard Linnaeus' work as the "starting point" for valid names (at 1753 and 1758 respectively).[38] Names published before these dates are referred to as "pre-Linnaean", and not considered valid (with the exception of spiders published in Svenska Spindlar[39]). Even taxonomic names published by Linnaeus himself before these dates are considered pre-Linnaean.[22]
Whereas Linnaeus aimed simply to create readily identifiable taxa, the idea of the Linnaean taxonomy as translating into a sort of dendrogram of the animal and plant kingdoms was formulated toward the end of the 18th century, well before On the Origin of Species was published.[25] Among early works exploring the idea of a transmutation of species were Erasmus Darwin's 1796 Zoönomia and Jean-Baptiste Lamarck's Philosophie Zoologique of 1809.[12] The idea was popularized in the Anglophone world by the speculative but widely read Vestiges of the Natural History of Creation, published anonymously by Robert Chambers in 1844.[40]
With Darwin's theory, a general acceptance quickly appeared that a classification should reflect the Darwinian principle of common descent.[41] Tree of life representations became popular in scientific works, with known fossil groups incorporated. One of the first modern groups tied to fossil ancestors was birds.[42] Using the then newly discovered fossils of Archaeopteryx and Hesperornis, Thomas Henry Huxley pronounced that they had evolved from dinosaurs, a group formally named by Richard Owen in 1842.[43][44] The resulting description, that of dinosaurs "giving rise to" or being "the ancestors of" birds, is the essential hallmark of evolutionary taxonomic thinking. As more and more fossil groups were found and recognized in the late 19th and early 20th centuries, palaeontologists worked to understand the history of animals through the ages by linking together known groups.[45] With the modern evolutionary synthesis of the early 1940s, an essentially modern understanding of the evolution of the major groups was in place. As evolutionary taxonomy is based on Linnaean taxonomic ranks, the two terms are largely interchangeable in modern use.[46]
The cladistic method has emerged since the 1960s.[41] In 1958, Julian Huxley used the term clade.[12] Later, in 1960, Cain and Harrison introduced the term cladistic.[12] The salient feature is arranging taxa in a hierarchical evolutionary tree, ignoring ranks.[41] A taxon is called monophyletic, if it includes all the descendants of an ancestral form.[47][48] Groups that have descendant groups removed from them are termed paraphyletic,[47] while groups representing more than one branch from the tree of life are called polyphyletic.[47][48] The International Code of Phylogenetic Nomenclature or PhyloCode is intended to regulate the formal naming of clades.[49][50] Linnaean ranks will be optional under the PhyloCode, which is intended to coexist with the current, rank-based codes.[50]
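The monophyly test at the heart of cladistic thinking can be sketched in a few lines of Python: given a rooted tree, a set of taxa is monophyletic if it contains every leaf descended from the group's most recent common ancestor. The toy tree and helper functions below are purely illustrative, not drawn from any of the works cited.

```python
# Minimal sketch: is a set of leaf taxa monophyletic on a rooted tree?
# A group is monophyletic if it contains every leaf descended from the
# group's most recent common ancestor (MRCA). The toy tree is illustrative.
TREE = {  # child -> parent
    "Crocodiles": "Archosauria",
    "Birds": "Dinosauria",
    "Other dinosaurs": "Dinosauria",
    "Dinosauria": "Archosauria",
    "Archosauria": "Reptilia",
    "Lizards": "Reptilia",
    "Reptilia": "Amniota",
    "Mammals": "Amniota",
}

def ancestors(node):
    """Path from a node up to the root, including the node itself."""
    path = [node]
    while node in TREE:
        node = TREE[node]
        path.append(node)
    return path

def mrca(taxa):
    """Deepest node that lies on every taxon's path to the root."""
    taxa = list(taxa)
    shared = set(ancestors(taxa[0]))
    for t in taxa[1:]:
        shared &= set(ancestors(t))
    return next(a for a in ancestors(taxa[0]) if a in shared)

def leaves_under(node):
    """All leaf taxa descended from (or equal to) the given node."""
    internal = set(TREE.values())
    return {n for n in TREE if n not in internal and node in ancestors(n)}

def is_monophyletic(taxa):
    return set(taxa) == leaves_under(mrca(taxa))

print(is_monophyletic({"Birds", "Other dinosaurs"}))  # True: all of Dinosauria
print(is_monophyletic({"Crocodiles", "Lizards"}))     # False: paraphyletic, omits Dinosauria
```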
Well before Linnaeus, plants and animals were considered separate Kingdoms.[51] Linnaeus used this as the top rank, dividing the physical world into the plant, animal and mineral kingdoms. As advances in microscopy made classification of microorganisms possible, the number of kingdoms increased, five- and six-kingdom systems being the most common.
Domains are a relatively new grouping. First proposed in 1977, Carl Woese's three-domain system was not generally accepted until later.[52] One main characteristic of the three-domain method is the separation of Archaea and Bacteria, previously grouped into the single kingdom Bacteria (a kingdom also sometimes called Monera),[51] with the Eukaryota for all organisms whose cells contain a nucleus.[53] A small number of scientists include a sixth kingdom, Archaea, but do not accept the domain method.[51]
Thomas Cavalier-Smith, who has published extensively on the classification of protists, has recently proposed that the Neomura, the clade that groups together the Archaea and Eucarya, would have evolved from Bacteria, more precisely from Actinobacteria. His 2004 classification treated the archaeobacteria as part of a subkingdom of the kingdom Bacteria, i.e., he rejected the three-domain system entirely.[54] Stefan Luketa in 2012 proposed a five "dominion" system, adding Prionobiota (acellular and without nucleic acid) and Virusobiota (acellular but with nucleic acid) to the traditional three domains.[55]
Partial classifications exist for many individual groups of organisms and are revised and replaced as new information becomes available; however, comprehensive, published treatments of most or all life are rarer; recent examples are that of Adl et al., 2012 and 2019,[63][64] which covers eukaryotes only with an emphasis on protists, and Ruggiero et al., 2015,[65] covering both eukaryotes and prokaryotes to the rank of Order, although both exclude fossil representatives.[65] A separate compilation (Ruggiero, 2014)[66] covers extant taxa to the rank of family. Other, database-driven treatments include the Encyclopedia of Life, the Global Biodiversity Information Facility, the NCBI taxonomy database, the Interim Register of Marine and Nonmarine Genera, the Open Tree of Life, and the Catalogue of Life. The Paleobiology Database is a resource for fossils.
Biological taxonomy is a sub-discipline of biology, and is generally practiced by biologists known as "taxonomists", though enthusiastic naturalists are also frequently involved in the publication of new taxa.[67] Because taxonomy aims to describe and organize life, the work conducted by taxonomists is essential for the study of biodiversity and the resulting field of conservation biology.[68][69]
Biological classification is a critical component of the taxonomic process. As a result, it informs the user as to what the relatives of the taxon are hypothesized to be. Biological classification uses taxonomic ranks, including among others (in order from most inclusive to least inclusive): Domain, Kingdom, Phylum, Class, Order, Family, Genus, Species, and Strain.[70][note 1]
The "definition" of a taxon is encapsulated by its description or its diagnosis or by both combined. There are no set rules governing the definition of taxa, but the naming and publication of new taxa is governed by sets of rules.[8] In zoology, the nomenclature for the more commonly used ranks (superfamily to subspecies), is regulated by the International Code of Zoological Nomenclature (ICZN Code).[71] In the fields of phycology, mycology, and botany, the naming of taxa is governed by the International Code of Nomenclature for algae, fungi, and plants (ICN).[72]
The initial description of a taxon involves five main requirements:[73]
However, often much more information is included, like the geographic range of the taxon, ecological notes, chemistry, behavior, etc. How researchers arrive at their taxa varies: depending on the available data, and resources, methods vary from simple quantitative or qualitative comparisons of striking features, to elaborate computer analyses of large amounts of DNA sequence data.[74]
An "authority" may be placed after a scientific name.[75] The authority is the name of the scientist or scientists who first validly published the name.[75] For example, in 1758 Linnaeus gave the Asian elephant the scientific name Elephas maximus, so the name is sometimes written as "Elephas maximus Linnaeus, 1758".[76] The names of authors are frequently abbreviated: the abbreviation L., for Linnaeus, is commonly used. In botany, there is, in fact, a regulated list of standard abbreviations (see list of botanists by author abbreviation).[77] The system for assigning authorities differs slightly between botany and zoology.[8] However, it is standard that if the genus of a species has been changed since the original description, the original authority's name is placed in parentheses.[78]
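A minimal sketch of the authority convention just described is given below; the function name and the second example are ours, and the parenthesised form follows the zoological rule for a species that has since been moved to a different genus.

```python
# Minimal sketch of the zoological authority convention: "Genus species
# Author, Year", with parentheses when the genus has changed since the
# original description. Function name and example data are illustrative.
def format_authority(genus, species, author, year, original_genus=None):
    authority = f"{author}, {year}"
    if original_genus is not None and original_genus != genus:
        authority = f"({authority})"  # genus changed since original description
    return f"{genus} {species} {authority}"

print(format_authority("Elephas", "maximus", "Linnaeus", 1758))
# Elephas maximus Linnaeus, 1758
print(format_authority("Loxodonta", "africana", "Blumenbach", 1797, original_genus="Elephas"))
# Loxodonta africana (Blumenbach, 1797)
```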
In phenetics, also known as taximetrics, or numerical taxonomy, organisms are classified based on overall similarity, regardless of their phylogeny or evolutionary relationships.[12] It results in a measure of evolutionary "distance" between taxa. Phenetic methods have become relatively rare in modern times, largely superseded by cladistic analyses, as phenetic methods do not distinguish common ancestral (or plesiomorphic) traits from new common (or apomorphic) traits.[79] However, certain phenetic methods, such as neighbor joining, have found their way into cladistics, as a reasonable approximation of phylogeny when more advanced methods (such as Bayesian inference) are too computationally expensive.[80]
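The phenetic notion of "overall similarity" can be sketched as a simple pairwise distance over character states, with no distinction between plesiomorphic and apomorphic characters. The binary character matrix below is invented purely for illustration.

```python
# Minimal sketch of the phenetic idea: overall similarity computed from raw
# character states, ignoring whether states are ancestral or derived.
characters = {
    "Shark":  [1, 0, 0, 0, 0],
    "Salmon": [1, 1, 0, 0, 0],
    "Lizard": [1, 1, 1, 0, 0],
    "Pigeon": [1, 1, 1, 1, 0],
    "Mouse":  [1, 1, 1, 0, 1],
}

def distance(a, b):
    """Fraction of characters in which two taxa differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

taxa = list(characters)
for i, t1 in enumerate(taxa):
    for t2 in taxa[i + 1:]:
        print(f"{t1} vs {t2}: {distance(characters[t1], characters[t2]):.2f}")
```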
Modern taxonomy uses database technologies to search and catalogue classifications and their documentation.[81] While there is no commonly used database, there are comprehensive databases such as the Catalogue of Life, which attempts to list every documented species.[82] The catalogue listed 1.64 million species for all kingdoms as of April 2016, claiming coverage of more than three quarters of the estimated species known to modern science.[83]
en/1163.html.txt
ADDED
@@ -0,0 +1,70 @@
en/1164.html.txt
ADDED
@@ -0,0 +1,70 @@
|
34 |
+
|
35 |
+
Taxonomy in the Middle Ages was largely based on the Aristotelian system,[26] with additions concerning the philosophical and existential order of creatures. This included concepts such as the Great chain of being in the Western scholastic tradition,[26] again deriving ultimately from Aristotle. The Aristotelian system did not classify plants or fungi, due to the lack of microscopes at the time,[25] as his ideas were based on arranging the complete world in a single continuum, as per the scala naturae (the Natural Ladder).[24] This, too, was taken into consideration in the Great chain of being.[24] Advances were made by scholars such as Procopius, Timotheos of Gaza, Demetrios Pepagomenos, and Thomas Aquinas. Medieval thinkers used abstract philosophical and logical categorizations more suited to abstract philosophy than to pragmatic taxonomy.[24]
|
36 |
+
|
37 |
+
During the Renaissance, the Age of Reason, and the Enlightenment, categorizing organisms became more prevalent,[24]
|
38 |
+
and taxonomic works became ambitious enough to replace the ancient texts. This is sometimes credited to the development of sophisticated optical lenses, which allowed the morphology of organisms to be studied in much greater detail. One of the earliest authors to take advantage of this leap in technology was the Italian physician Andrea Cesalpino (1519–1603), who has been called "the first taxonomist".[28] His magnum opus De Plantis came out in 1583, and described more than 1500 plant species.[29][30] Two large plant families that he first recognized are still in use today: the Asteraceae and Brassicaceae.[31] Then in the 17th century John Ray (England, 1627–1705) wrote many important taxonomic works.[25] Arguably his greatest accomplishment was Methodus Plantarum Nova (1682),[32] in which he published details of over 18,000 plant species. At the time, his classifications were perhaps the most complex yet produced by any taxonomist, as he based his taxa on many combined characters. The next major taxonomic works were produced by Joseph Pitton de Tournefort (France, 1656–1708).[33] His work from 1700, Institutiones Rei Herbariae, included more than 9000 species in 698 genera, which directly influenced Linnaeus, as it was the text he used as a young student.[22]
|
39 |
+
|
40 |
+
The Swedish botanist Carl Linnaeus (1707–1778)[26] ushered in a new era of taxonomy. With his major works Systema Naturae 1st Edition in 1735,[34] Species Plantarum in 1753,[35] and Systema Naturae 10th Edition,[36] he revolutionized modern taxonomy. His works implemented a standardized binomial naming system for animal and plant species,[37] which proved to be an elegant solution to a chaotic and disorganized taxonomic literature. He not only introduced the standard of class, order, genus, and species, but also made it possible to identify plants and animals from his book, by using the smaller parts of the flower.[37] Thus the Linnaean system was born, and is still used in essentially the same way today as it was in the 18th century.[37] Currently, plant and animal taxonomists regard Linnaeus' work as the "starting point" for valid names (at 1753 and 1758 respectively).[38] Names published before these dates are referred to as "pre-Linnaean", and not considered valid (with the exception of spiders published in Svenska Spindlar[39]). Even taxonomic names published by Linnaeus himself before these dates are considered pre-Linnaean.[22]
|
41 |
+
|
42 |
+
Whereas Linnaeus aimed simply to create readily identifiable taxa, the idea of the Linnaean taxonomy as translating into a sort of dendrogram of the animal and plant kingdoms was formulated toward the end of the 18th century, well before On the Origin of Species was published.[25] Among early works exploring the idea of a transmutation of species were Erasmus Darwin's 1796 Zoönomia and Jean-Baptiste Lamarck's Philosophie Zoologique of 1809.[12] The idea was popularized in the Anglophone world by the speculative but widely read Vestiges of the Natural History of Creation, published anonymously by Robert Chambers in 1844.[40]
|
43 |
+
|
44 |
+
With Darwin's theory, a general acceptance quickly appeared that a classification should reflect the Darwinian principle of common descent.[41] Tree of life representations became popular in scientific works, with known fossil groups incorporated. One of the first modern groups tied to fossil ancestors was birds.[42] Using the then newly discovered fossils of Archaeopteryx and Hesperornis, Thomas Henry Huxley pronounced that they had evolved from dinosaurs, a group formally named by Richard Owen in 1842.[43][44] The resulting description, that of dinosaurs "giving rise to" or being "the ancestors of" birds, is the essential hallmark of evolutionary taxonomic thinking. As more and more fossil groups were found and recognized in the late 19th and early 20th centuries, palaeontologists worked to understand the history of animals through the ages by linking together known groups.[45] With the modern evolutionary synthesis of the early 1940s, an essentially modern understanding of the evolution of the major groups was in place. As evolutionary taxonomy is based on Linnaean taxonomic ranks, the two terms are largely interchangeable in modern use.[46]
|
45 |
+
|
46 |
+
The cladistic method has emerged since the 1960s.[41] In 1958, Julian Huxley used the term clade.[12] Later, in 1960, Cain and Harrison introduced the term cladistic.[12] The salient feature is arranging taxa in a hierarchical evolutionary tree, ignoring ranks.[41] A taxon is called monophyletic if it includes all the descendants of an ancestral form.[47][48] Groups that have descendant groups removed from them are termed paraphyletic,[47] while groups representing more than one branch from the tree of life are called polyphyletic.[47][48] The International Code of Phylogenetic Nomenclature or PhyloCode is intended to regulate the formal naming of clades.[49][50] Linnaean ranks will be optional under the PhyloCode, which is intended to coexist with the current, rank-based codes.[50]
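As an illustration of the cladistic vocabulary above, the following minimal sketch (Python, with an invented toy tree; not from any cited source) tests whether a chosen set of leaves is monophyletic, i.e. whether it corresponds exactly to all descendants of a single ancestral node.

    # Minimal sketch: test monophyly of a leaf set on a rooted tree.
    # The tree is a nested tuple; leaves are strings. Toy data only.

    def leaves(node):
        """Return the set of leaf names under a node."""
        if isinstance(node, str):
            return {node}
        out = set()
        for child in node:
            out |= leaves(child)
        return out

    def clades(node):
        """Yield the leaf set of every node (every candidate clade)."""
        yield leaves(node)
        if not isinstance(node, str):
            for child in node:
                yield from clades(child)

    def is_monophyletic(tree, group):
        """A group is monophyletic if it equals the full leaf set of some node."""
        return set(group) in list(clades(tree))

    # Hypothetical example tree: ((lizard, (crocodile, bird)), mammal)
    tree = (("lizard", ("crocodile", "bird")), "mammal")
    print(is_monophyletic(tree, {"crocodile", "bird"}))    # True
    print(is_monophyletic(tree, {"lizard", "crocodile"}))  # False: "reptiles" without birds are paraphyletic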
|
47 |
+
|
48 |
+
Well before Linnaeus, plants and animals were considered separate Kingdoms.[51] Linnaeus used this as the top rank, dividing the physical world into the plant, animal and mineral kingdoms. As advances in microscopy made classification of microorganisms possible, the number of kingdoms increased, five- and six-kingdom systems being the most common.
|
49 |
+
|
50 |
+
Domains are a relatively new grouping. First proposed in 1977, Carl Woese's three-domain system was not generally accepted until later.[52] One main characteristic of the three-domain method is the separation of Archaea and Bacteria, previously grouped into the single kingdom Bacteria (a kingdom also sometimes called Monera),[51] with the Eukaryota for all organisms whose cells contain a nucleus.[53] A small number of scientists include a sixth kingdom, Archaea, but do not accept the domain method.[51]
|
51 |
+
|
52 |
+
Thomas Cavalier-Smith, who has published extensively on the classification of protists, has recently proposed that the Neomura, the clade that groups together the Archaea and Eucarya, would have evolved from Bacteria, more precisely from Actinobacteria. His 2004 classification treated the archaeobacteria as part of a subkingdom of the kingdom Bacteria, i.e., he rejected the three-domain system entirely.[54] Stefan Luketa in 2012 proposed a five "dominion" system, adding Prionobiota (acellular and without nucleic acid) and Virusobiota (acellular but with nucleic acid) to the traditional three domains.[55]
|
53 |
+
|
54 |
+
Partial classifications exist for many individual groups of organisms and are revised and replaced as new information becomes available; however, comprehensive, published treatments of most or all life are rarer; recent examples are that of Adl et al., 2012 and 2019,[63][64] which covers eukaryotes only with an emphasis on protists, and Ruggiero et al., 2015,[65] covering both eukaryotes and prokaryotes to the rank of Order, although both exclude fossil representatives.[65] A separate compilation (Ruggiero, 2014)[66] covers extant taxa to the rank of family. Other, database-driven treatments include the Encyclopedia of Life, the Global Biodiversity Information Facility, the NCBI taxonomy database, the Interim Register of Marine and Nonmarine Genera, the Open Tree of Life, and the Catalogue of Life. The Paleobiology Database is a resource for fossils.
|
55 |
+
|
56 |
+
Biological taxonomy is a sub-discipline of biology, and is generally practiced by biologists known as "taxonomists", though enthusiastic naturalists are also frequently involved in the publication of new taxa.[67] Because taxonomy aims to describe and organize life, the work conducted by taxonomists is essential for the study of biodiversity and the resulting field of conservation biology.[68][69]
|
57 |
+
|
58 |
+
Biological classification is a critical component of the taxonomic process. As a result, it informs the user as to what the relatives of the taxon are hypothesized to be. Biological classification uses taxonomic ranks, including among others (in order from most inclusive to least inclusive): Domain, Kingdom, Phylum, Class, Order, Family, Genus, Species, and Strain.[70][note 1]
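To make the rank hierarchy concrete, here is a minimal sketch in Python; the example classification of the grey wolf is illustrative only, since rank choices and spellings vary between sources.

    # Principal ranks from most to least inclusive, as listed above.
    RANKS = ["domain", "kingdom", "phylum", "class", "order",
             "family", "genus", "species"]

    # Illustrative classification of the grey wolf (Canis lupus).
    grey_wolf = {
        "domain": "Eukarya",
        "kingdom": "Animalia",
        "phylum": "Chordata",
        "class": "Mammalia",
        "order": "Carnivora",
        "family": "Canidae",
        "genus": "Canis",
        "species": "Canis lupus",
    }

    # Print the hierarchy from most to least inclusive.
    for rank in RANKS:
        print(f"{rank:>8}: {grey_wolf[rank]}")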
|
59 |
+
|
60 |
+
The "definition" of a taxon is encapsulated by its description or its diagnosis or by both combined. There are no set rules governing the definition of taxa, but the naming and publication of new taxa is governed by sets of rules.[8] In zoology, the nomenclature for the more commonly used ranks (superfamily to subspecies), is regulated by the International Code of Zoological Nomenclature (ICZN Code).[71] In the fields of phycology, mycology, and botany, the naming of taxa is governed by the International Code of Nomenclature for algae, fungi, and plants (ICN).[72]
|
61 |
+
|
62 |
+
The initial description of a taxon involves five main requirements:[73]
|
63 |
+
|
64 |
+
However, often much more information is included, such as the geographic range of the taxon, ecological notes, chemistry, behavior, etc. How researchers arrive at their taxa varies: depending on the available data and resources, methods range from simple quantitative or qualitative comparisons of striking features to elaborate computer analyses of large amounts of DNA sequence data.[74]
|
65 |
+
|
66 |
+
An "authority" may be placed after a scientific name.[75] The authority is the name of the scientist or scientists who first validly published the name.[75] For example, in 1758 Linnaeus gave the Asian elephant the scientific name Elephas maximus, so the name is sometimes written as "Elephas maximus Linnaeus, 1758".[76] The names of authors are frequently abbreviated: the abbreviation L., for Linnaeus, is commonly used. In botany, there is, in fact, a regulated list of standard abbreviations (see list of botanists by author abbreviation).[77] The system for assigning authorities differs slightly between botany and zoology.[8] However, it is standard that if the genus of a species has been changed since the original description, the original authority's name is placed in parentheses.[78]
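A small sketch of the zoological convention just described (a hypothetical helper function, not part of any code of nomenclature): the authority and year follow the binomial, and are wrapped in parentheses when the species has since been moved to a different genus.

    def zoological_name(genus, species, author, year, recombined=False):
        """Format 'Genus species Author, Year', with parentheses if the
        species was originally described in a different genus."""
        authority = f"{author}, {year}"
        if recombined:
            authority = f"({authority})"
        return f"{genus} {species} {authority}"

    print(zoological_name("Elephas", "maximus", "Linnaeus", 1758))
    # Elephas maximus Linnaeus, 1758
    print(zoological_name("Panthera", "leo", "Linnaeus", 1758, recombined=True))
    # Panthera leo (Linnaeus, 1758) -- originally described by Linnaeus as Felis leo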
|
67 |
+
|
68 |
+
In phenetics, also known as taximetrics, or numerical taxonomy, organisms are classified based on overall similarity, regardless of their phylogeny or evolutionary relationships.[12] It results in a measure of evolutionary "distance" between taxa. Phenetic methods have become relatively rare in modern times, largely superseded by cladistic analyses, as phenetic methods do not distinguish common ancestral (or plesiomorphic) traits from new common (or apomorphic) traits.[79] However, certain phenetic methods, such as neighbor joining, have found their way into cladistics, as a reasonable approximation of phylogeny when more advanced methods (such as Bayesian inference) are too computationally expensive.[80]
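As a toy illustration of the phenetic idea of overall similarity (not any published method), the sketch below scores pairwise "distance" between taxa as the mean absolute difference over a few numeric characters; real phenetic and neighbor-joining analyses operate on much larger character or sequence matrices.

    # Toy phenetic distance: mean absolute difference over numeric characters.
    # Character values are invented for illustration only.
    characters = {
        "taxon_A": [5.1, 3.5, 1.4],
        "taxon_B": [4.9, 3.0, 1.4],
        "taxon_C": [6.3, 3.3, 6.0],
    }

    def distance(x, y):
        return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

    names = sorted(characters)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            print(f"d({a}, {b}) = {distance(characters[a], characters[b]):.2f}")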
|
69 |
+
|
70 |
+
Modern taxonomy uses database technologies to search and catalogue classifications and their documentation.[81] While there is no commonly used database, there are comprehensive databases such as the Catalogue of Life, which attempts to list every documented species.[82] The catalogue listed 1.64 million species for all kingdoms as of April 2016, claiming coverage of more than three quarters of the estimated species known to modern science.[83]
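Such databases typically expose public web services. As an illustration only — the endpoint and response fields below follow the GBIF species-match API as commonly documented, but should be verified against the current GBIF developer documentation before use — a name can be matched against a backbone taxonomy roughly like this:

    # Sketch: query the GBIF backbone taxonomy for a name match.
    # Endpoint and field names are assumptions based on the public GBIF API;
    # check https://www.gbif.org/developer/species before relying on them.
    import json
    import urllib.parse
    import urllib.request

    def match_name(name):
        query = urllib.parse.urlencode({"name": name})
        url = f"https://api.gbif.org/v1/species/match?{query}"
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    result = match_name("Puma concolor")
    for rank in ("kingdom", "phylum", "class", "order", "family", "genus"):
        print(rank, "->", result.get(rank))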
|
en/1165.html.txt
ADDED
@@ -0,0 +1,182 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
The periodic table, also known as the periodic table of elements, is a tabular display of the chemical elements, which are arranged by atomic number, electron configuration, and recurring chemical properties. The structure of the table shows periodic trends. The seven rows of the table, called periods, generally have metals on the left and nonmetals on the right. The columns, called groups, contain elements with similar chemical behaviours. Six groups have accepted names as well as assigned numbers: for example, group 17 elements are the halogens; and group 18 are the noble gases. Also displayed are four simple rectangular areas or blocks associated with the filling of different atomic orbitals.
|
6 |
+
|
7 |
+
The elements from atomic numbers 1 (hydrogen) through 118 (oganesson) have all been discovered or synthesized, completing seven full rows of the periodic table.[1][2] The first 94 elements, hydrogen through plutonium, all occur naturally, though some are found only in trace amounts and a few were discovered in nature only after having first been synthesized.[n 1] Elements 95 to 118 have only been synthesized in laboratories, nuclear reactors, or nuclear explosions.[3] The synthesis of elements having higher atomic numbers is currently being pursued: these elements would begin an eighth row, and theoretical work has been done to suggest possible candidates for this extension. Numerous synthetic radioisotopes of naturally occurring elements have also been produced in laboratories.
|
8 |
+
|
9 |
+
The organization of the periodic table can be used to derive relationships between the various element properties, and also to predict chemical properties and behaviours of undiscovered or newly synthesized elements. Russian chemist Dmitri Mendeleev published the first recognizable periodic table in 1869, developed mainly to illustrate periodic trends of the then-known elements. He also predicted some properties of unidentified elements that were expected to fill gaps within the table. Most of his forecasts proved to be correct. Mendeleev's idea has been slowly expanded and refined with the discovery or synthesis of further new elements and the development of new theoretical models to explain chemical behaviour. The modern periodic table now provides a useful framework for analyzing chemical reactions, and continues to be widely used in chemistry, nuclear physics and other sciences. Some discussion remains ongoing regarding the placement and categorisation of specific elements, the future extension and limits of the table, and whether there is an optimal form of the table.
|
10 |
+
|
11 |
+
|
12 |
+
|
13 |
+
Color of the atomic number shows the state of matter at 0 °C and 1 atm: red = gas (e.g. 1), black = solid (e.g. 3), green = liquid (e.g. 80), gray = unknown (e.g. 109)
|
14 |
+
|
15 |
+
Background color shows subcategory in the metal–metalloid–nonmetal trend:
|
16 |
+
|
17 |
+
Each chemical element has a unique atomic number (Z) representing the number of protons in its nucleus.[n 2] Most elements have differing numbers of neutrons among different atoms, with these variants being referred to as isotopes. For example, carbon has three naturally occurring isotopes: all of its atoms have six protons and most have six neutrons as well, but about one per cent have seven neutrons, and a very small fraction have eight neutrons. Isotopes are never separated in the periodic table; they are always grouped together under a single element. Where atomic masses are shown for elements with no stable isotopes, the mass of the most stable isotope is listed in parentheses.[7]
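Since an element's standard atomic weight is an abundance-weighted average over its isotopes, a small worked example helps; the carbon values below are rounded, and the trace carbon-14 fraction is neglected.

    # Standard atomic weight as an abundance-weighted mean over isotopes.
    # Carbon values are rounded for illustration.
    isotopes = [
        (12.000, 0.989),   # carbon-12: mass (u), approximate natural abundance
        (13.003, 0.011),   # carbon-13
    ]
    atomic_weight = sum(mass * fraction for mass, fraction in isotopes)
    print(f"carbon: {atomic_weight:.3f} u")   # about 12.011 u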
|
18 |
+
|
19 |
+
In the standard periodic table, the elements are listed in order of increasing atomic number Z. A new row (period) is started when a new electron shell has its first electron. Columns (groups) are determined by the electron configuration of the atom; elements with the same number of electrons in a particular subshell fall into the same columns (e.g. oxygen and selenium are in the same column because they both have four electrons in the outermost p-subshell). Elements with similar chemical properties generally fall into the same group in the periodic table, although in the f-block, and to some respect in the d-block, the elements in the same period tend to have similar properties, as well. Thus, it is relatively easy to predict the chemical properties of an element if one knows the properties of the elements around it.[8]
|
20 |
+
|
21 |
+
Since 2016, the periodic table has 118 confirmed elements, from element 1 (hydrogen) to 118 (oganesson). Elements 113, 115, 117 and 118, the most recent discoveries, were officially confirmed by the International Union of Pure and Applied Chemistry (IUPAC) in December 2015. Their proposed names, nihonium (Nh), moscovium (Mc), tennessine (Ts) and oganesson (Og) respectively, were made official in November 2016 by IUPAC.[9][10][11][12]
|
22 |
+
|
23 |
+
The first 94 elements occur naturally; the remaining 24, americium to oganesson (95–118), occur only when synthesized in laboratories. Of the 94 naturally occurring elements, 83 are primordial and 11 occur only in decay chains of primordial elements.[3] No element heavier than einsteinium (element 99) has ever been observed in macroscopic quantities in its pure form, nor has astatine (element 85); francium (element 87) has only been photographed in the form of light emitted from microscopic quantities (300,000 atoms).[13]
|
24 |
+
|
25 |
+
A group or family is a vertical column in the periodic table. Groups usually have more significant periodic trends than periods and blocks, explained below. Modern quantum mechanical theories of atomic structure explain group trends by proposing that elements within the same group generally have the same electron configurations in their valence shell.[14] Consequently, elements in the same group tend to have a shared chemistry and exhibit a clear trend in properties with increasing atomic number.[15] In some parts of the periodic table, such as the d-block and the f-block, horizontal similarities can be as important as, or more pronounced than, vertical similarities.[16][17][18]
|
26 |
+
|
27 |
+
Under an international naming convention, the groups are numbered numerically from 1 to 18 from the leftmost column (the alkali metals) to the rightmost column (the noble gases).[19] Previously, they were known by roman numerals. In America, the roman numerals were followed by either an "A" if the group was in the s- or p-block, or a "B" if the group was in the d-block. The roman numerals used correspond to the last digit of today's naming convention (e.g. the group 4 elements were group IVB, and the group 14 elements were group IVA). In Europe, the lettering was similar, except that "A" was used if the group was before group 10, and "B" was used for groups including and after group 10. In addition, groups 8, 9 and 10 used to be treated as one triple-sized group, known collectively in both notations as group VIII. In 1988, the new IUPAC naming system was put into use, and the old group names were deprecated.[20]
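The American (CAS) convention described above can be expressed compactly as a lookup keyed by the modern IUPAC group number; this is a sketch of that mapping only (groups 8–10 all map to the old triple-sized group VIIIB).

    # Old American (CAS) labels for the modern IUPAC group numbers 1-18.
    ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII", 8: "VIII"}

    def cas_label(group):
        if group in (1, 2):             # s-block groups: "A"
            return ROMAN[group] + "A"
        if 3 <= group <= 7:             # early d-block groups: "B"
            return ROMAN[group] + "B"
        if group in (8, 9, 10):         # old triple-sized group VIII
            return "VIIIB"
        if group in (11, 12):           # late d-block: IB, IIB
            return ROMAN[group - 10] + "B"
        return ROMAN[group - 10] + "A"  # p-block groups 13-18: IIIA-VIIIA

    print(cas_label(4), cas_label(14))  # IVB IVA, as in the example above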
|
28 |
+
|
29 |
+
Some of these groups have been given trivial (unsystematic) names, as seen in the table below, although some are rarely used. Groups 3–10 have no trivial names and are referred to simply by their group numbers or by the name of the first member of their group (such as "the scandium group" for group 3),[19] since they display fewer similarities and/or vertical trends.
|
30 |
+
|
31 |
+
Elements in the same group tend to show patterns in atomic radius, ionization energy, and electronegativity. From top to bottom in a group, the atomic radii of the elements increase. Since there are more filled energy levels, valence electrons are found farther from the nucleus. From the top, each successive element has a lower ionization energy because it is easier to remove an electron since the atoms are less tightly bound. Similarly, a group has a top-to-bottom decrease in electronegativity due to an increasing distance between valence electrons and the nucleus.[21] There are exceptions to these trends: for example, in group 11, electronegativity increases farther down the group.[22]
|
32 |
+
|
33 |
+
A period is a horizontal row in the periodic table. Although groups generally have more significant periodic trends, there are regions where horizontal trends are more significant than vertical group trends, such as the f-block, where the lanthanides and actinides form two substantial horizontal series of elements.[24]
|
34 |
+
|
35 |
+
Elements in the same period show trends in atomic radius, ionization energy, electron affinity, and electronegativity. Moving left to right across a period, atomic radius usually decreases. This occurs because each successive element has an added proton and electron, which causes the electron to be drawn closer to the nucleus.[25] This decrease in atomic radius also causes the ionization energy to increase when moving from left to right across a period. The more tightly bound an element is, the more energy is required to remove an electron. Electronegativity increases in the same manner as ionization energy because of the pull exerted on the electrons by the nucleus.[21] Electron affinity also shows a slight trend across a period. Metals (left side of a period) generally have a lower electron affinity than nonmetals (right side of a period), with the exception of the noble gases.[26]
|
36 |
+
|
37 |
+
Specific regions of the periodic table can be referred to as blocks in recognition of the sequence in which the electron shells of the elements are filled. Elements are assigned to blocks by what orbitals their valence electrons or vacancies lie in.[27] The s-block comprises the first two groups (alkali metals and alkaline earth metals) as well as hydrogen and helium. The p-block comprises the last six groups, which are groups 13 to 18 in IUPAC group numbering (3A to 8A in American group numbering) and contains, among other elements, all of the metalloids. The d-block comprises groups 3 to 12 (or 3B to 2B in American group numbering) and contains all of the transition metals. The f-block, often offset below the rest of the periodic table, has no group numbers and comprises most of the lanthanides and actinides. A hypothetical g-block is expected to begin around element 121, a few elements away from what is currently known.[28]
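Following the group-to-block assignments just described, a minimal lookup might read as below; helium sits in group 18 but belongs to the s-block, and the f-block carries no group numbers, so it is excluded from this sketch.

    # Block from the IUPAC group number, per the assignments described above.
    # The f-block (most lanthanides and actinides) has no group number and is
    # not covered; helium is in group 18 but is an s-block element.
    def block(group, element=None):
        if element in ("H", "He") or group in (1, 2):
            return "s"
        if 3 <= group <= 12:
            return "d"
        return "p"                      # groups 13-18

    print(block(17), block(4), block(18, element="He"))   # p d s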
|
38 |
+
|
39 |
+
According to their shared physical and chemical properties, the elements can be classified into the major categories of metals, metalloids and nonmetals. Metals are generally shiny, highly conducting solids that form alloys with one another and salt-like ionic compounds with nonmetals (other than noble gases). A majority of nonmetals are coloured or colourless insulating gases; nonmetals that form compounds with other nonmetals feature covalent bonding. In between metals and nonmetals are metalloids, which have intermediate or mixed properties.[29]
|
40 |
+
|
41 |
+
Metals and nonmetals can be further classified into subcategories that show a gradation from metallic to non-metallic properties when going left to right across the rows. The metals may be subdivided into the highly reactive alkali metals, through the less reactive alkaline earth metals, lanthanides and actinides, via the archetypal transition metals, and ending in the physically and chemically weak post-transition metals. Nonmetals may be simply subdivided into the polyatomic nonmetals, which are nearer to the metalloids and show some incipient metallic character; the essentially nonmetallic diatomic nonmetals; and the almost completely inert, monatomic noble gases. Specialized groupings such as the refractory metals and the noble metals are examples of subsets of the transition metals that are also known[30] and occasionally denoted.[31]
|
42 |
+
|
43 |
+
Placing elements into categories and subcategories based just on shared properties is imperfect. There is a large disparity of properties within each category with notable overlaps at the boundaries, as is the case with most classification schemes.[32] Beryllium, for example, is classified as an alkaline earth metal although its amphoteric chemistry and tendency to mostly form covalent compounds are both attributes of a chemically weak or post-transition metal. Radon is classified as a nonmetallic noble gas yet has some cationic chemistry that is characteristic of metals. Other classification schemes are possible such as the division of the elements into mineralogical occurrence categories, or crystalline structures. Categorizing the elements in this fashion dates back to at least 1869 when Hinrichs[33] wrote that simple boundary lines could be placed on the periodic table to show elements having shared properties, such as metals, nonmetals, or gaseous elements.
|
44 |
+
|
45 |
+
The electron configuration or organisation of electrons orbiting neutral atoms shows a recurring pattern or periodicity. The electrons occupy a series of electron shells (numbered 1, 2, and so on). Each shell consists of one or more subshells (named s, p, d, f and g). As atomic number increases, electrons progressively fill these shells and subshells more or less according to the Madelung rule or energy ordering rule, as shown in the diagram. The electron configuration for neon, for example, is 1s2 2s2 2p6. With an atomic number of ten, neon has two electrons in the first shell, and eight electrons in the second shell; there are two electrons in the s subshell and six in the p subshell. In periodic table terms, the first time an electron occupies a new shell corresponds to the start of each new period, these positions being occupied by hydrogen and the alkali metals.[34][35]
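The Madelung (energy-ordering) rule mentioned above — fill subshells in order of increasing n + ℓ, breaking ties by lower n — is easy to sketch in code. This is the idealised rule only; real atoms such as chromium and copper deviate slightly from it.

    # Idealised electron configuration by the Madelung (n + l) rule.
    # Real atoms (e.g. Cr, Cu, and many heavy elements) deviate slightly.
    SUBSHELL_LETTERS = "spdfghik"

    def madelung_order(max_n=8):
        subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
        return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

    def configuration(z):
        config, remaining = [], z
        for n, l in madelung_order():
            if remaining <= 0:
                break
            capacity = 2 * (2 * l + 1)      # each orbital holds two electrons
            electrons = min(capacity, remaining)
            config.append(f"{n}{SUBSHELL_LETTERS[l]}{electrons}")
            remaining -= electrons
        return " ".join(config)

    print(configuration(10))   # neon: 1s2 2s2 2p6
    print(configuration(26))   # iron (idealised): 1s2 2s2 2p6 3s2 3p6 4s2 3d6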
|
46 |
+
|
47 |
+
Since the properties of an element are mostly determined by its electron configuration, the properties of the elements likewise show recurring patterns or periodic behaviour, some examples of which are shown in the diagrams below for atomic radii, ionization energy and electron affinity. It is this periodicity of properties, manifestations of which were noticed well before the underlying theory was developed, that led to the establishment of the periodic law (the properties of the elements recur at varying intervals) and the formulation of the first periodic tables.[34][35] The periodic law may then be successively clarified as: depending on atomic weight; depending on atomic number; and depending on the total number of s, p, d, and f electrons in each atom. The cycles last 2, 6, 10, and 14 elements respectively.[36]
|
48 |
+
|
49 |
+
There is additionally an internal "double periodicity" that splits the shells in half; this arises because the first half of the electrons going into a particular type of subshell fill unoccupied orbitals, but the second half have to fill already occupied orbitals, following Hund's rule of maximum multiplicity. The second half thus suffer additional repulsion that causes the trend to split between first-half and second-half elements; this is for example evident when observing the ionisation energies of the 2p elements, in which the triads B-C-N and O-F-Ne show increases, but oxygen actually has a first ionisation energy slightly lower than that of nitrogen as it is easier to remove the extra, paired electron.[36]
|
50 |
+
|
51 |
+
Atomic radii vary in a predictable and explainable manner across the periodic table. For instance, the radii generally decrease along each period of the table, from the alkali metals to the noble gases; and increase down each group. The radius increases sharply between the noble gas at the end of each period and the alkali metal at the beginning of the next period. These trends of the atomic radii (and of various other chemical and physical properties of the elements) can be explained by the electron shell theory of the atom; they provided important evidence for the development and confirmation of quantum theory.[37]
|
52 |
+
|
53 |
+
The electrons in the 4f-subshell, which is progressively filled from lanthanum (element 57) to ytterbium (element 70),[n 4] are not particularly effective at shielding the increasing nuclear charge from the sub-shells further out. The elements immediately following the lanthanides have atomic radii that are smaller than would be expected and that are almost identical to the atomic radii of the elements immediately above them.[39] Hence lutetium has virtually the same atomic radius (and chemistry) as yttrium, hafnium has virtually the same atomic radius (and chemistry) as zirconium, and tantalum has an atomic radius similar to niobium, and so forth. This is an effect of the lanthanide contraction: a similar actinide contraction also exists. The effect of the lanthanide contraction is noticeable up to platinum (element 78), after which it is masked by a relativistic effect known as the inert pair effect.[40] The d-block contraction, which is a similar effect between the d-block and p-block, is less pronounced than the lanthanide contraction but arises from a similar cause.[39]
|
54 |
+
|
55 |
+
Such contractions exist throughout the table, but are chemically most relevant for the lanthanides with their almost constant +3 oxidation state.[41]
|
56 |
+
|
57 |
+
The first ionization energy is the energy it takes to remove one electron from an atom, the second ionization energy is the energy it takes to remove a second electron from the atom, and so on. For a given atom, successive ionization energies increase with the degree of ionization. For magnesium as an example, the first ionization energy is 738 kJ/mol and the second is 1450 kJ/mol. Electrons in the closer orbitals experience greater forces of electrostatic attraction; thus, their removal requires increasingly more energy. Ionization energy becomes greater up and to the right of the periodic table.[40]
|
58 |
+
|
59 |
+
Large jumps in the successive molar ionization energies occur when removing an electron from a noble gas (complete electron shell) configuration. For magnesium again, the first two molar ionization energies of magnesium given above correspond to removing the two 3s electrons, and the third ionization energy is a much larger 7730 kJ/mol, for the removal of a 2p electron from the very stable neon-like configuration of Mg2+. Similar jumps occur in the ionization energies of other third-row atoms.[40]
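The jump described above can be used to infer how many valence electrons an atom has: look for the largest ratio between successive ionization energies. A sketch using the magnesium values quoted in the text:

    # Infer the number of valence electrons from the largest jump in
    # successive molar ionization energies (values in kJ/mol, from the text).
    magnesium = [738, 1450, 7730]   # 1st, 2nd, 3rd ionization energies

    ratios = [later / earlier for earlier, later in zip(magnesium, magnesium[1:])]
    biggest_jump = max(range(len(ratios)), key=ratios.__getitem__)
    print(f"largest jump comes after removing {biggest_jump + 1} electrons "
          f"-> {biggest_jump + 1} valence electrons")   # 2, as expected for Mg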
|
60 |
+
|
61 |
+
Electronegativity is the tendency of an atom to attract a shared pair of electrons.[42] An atom's electronegativity is affected by both its atomic number and the distance between the valence electrons and the nucleus. The higher its electronegativity, the more an element attracts electrons. It was first proposed by Linus Pauling in 1932.[43] In general, electronegativity increases on passing from left to right along a period, and decreases on descending a group. Hence, fluorine is the most electronegative of the elements,[n 5] while caesium is the least, at least of those elements for which substantial data is available.[22]
|
62 |
+
|
63 |
+
There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon respectively because of the d-block contraction. Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity.[22] The anomalously high electronegativity of lead, particularly when compared to thallium and bismuth, is an artifact of electronegativity varying with oxidation state: its electronegativity conforms better to trends if it is quoted for the +2 state instead of the +4 state.[44]
|
64 |
+
|
65 |
+
The electron affinity of an atom is the amount of energy released when an electron is added to a neutral atom to form a negative ion. Although electron affinity varies greatly, some patterns emerge. Generally, nonmetals have more positive electron affinity values than metals. Chlorine most strongly attracts an extra electron. The electron affinities of the noble gases have not been measured conclusively, so they may or may not have slightly negative values.[47]
|
66 |
+
|
67 |
+
Electron affinity generally increases across a period. This is caused by the filling of the valence shell of the atom; a group 17 atom releases more energy than a group 1 atom on gaining an electron because it obtains a filled valence shell and is therefore more stable.[47]
|
68 |
+
|
69 |
+
A trend of decreasing electron affinity going down groups would be expected. The additional electron will be entering an orbital farther away from the nucleus. As such this electron would be less attracted to the nucleus and would release less energy when added. In going down a group, around one-third of elements are anomalous, with heavier elements having higher electron affinities than their next lighter congeners. Largely, this is due to the poor shielding by d and f electrons. A uniform decrease in electron affinity only applies to group 1 atoms.[48]
|
70 |
+
|
71 |
+
The lower the values of ionization energy, electronegativity and electron affinity, the more metallic character the element has. Conversely, nonmetallic character increases with higher values of these properties.[49] Given the periodic trends of these three properties, metallic character tends to decrease going across a period (or row) and, with some irregularities (mostly) due to poor screening of the nucleus by d and f electrons, and relativistic effects,[50] tends to increase going down a group (or column or family). Thus, the most metallic elements (such as caesium) are found at the bottom left of traditional periodic tables and the most nonmetallic elements (such as neon) at the top right. The combination of horizontal and vertical trends in metallic character explains the stair-shaped dividing line between metals and nonmetals found on some periodic tables, and the practice of sometimes categorizing several elements adjacent to that line, or elements adjacent to those elements, as metalloids.[51][52]
|
72 |
+
|
73 |
+
With some minor exceptions, oxidation numbers among the elements show four main trends according to their periodic table geographic location: left; middle; right; and south. On the left (groups 1 to 4, not including the f-block elements, and also niobium, tantalum, and probably dubnium in group 5), the highest most stable oxidation number is the group number, with lower oxidation states being less stable. In the middle (groups 3 to 11), higher oxidation states become more stable going down each group. Group 12 is an exception to this trend; they behave as if they were located on the left side of the table. On the right, higher oxidation states tend to become less stable going down a group.[53] The shift between these trends is continuous: for example, group 3 also has lower oxidation states most stable in its lightest member (scandium, with CsScCl3 for example known in the +2 state),[54] and group 12 is predicted to have copernicium more readily showing oxidation states above +2.[55]
|
74 |
+
|
75 |
+
The lanthanides positioned along the south of the table are distinguished by having the +3 oxidation state in common; this is their most stable state. The early actinides show a pattern of oxidation states somewhat similar to those of their period 6 and 7 transition metal congeners; the later actinides are more similar to the lanthanides, though the last ones (excluding lawrencium) have an increasingly important +2 oxidation state that becomes the most stable state for nobelium.[56]
|
76 |
+
|
77 |
+
From left to right across the four blocks of the long- or 32-column form of the periodic table are a series of linking or bridging groups of elements, located approximately between each block. In general, groups at the peripheries of blocks display similarities to the groups of the neighbouring blocks as well as to the other groups in their own blocks, as expected as most periodic trends are continuous.[57] These groups, like the metalloids, show properties in between, or that are a mixture of, groups to either side. Chemically, the group 3 elements, lanthanides, and heavy group 4 and 5 elements show some behaviour similar to the alkaline earth metals[58] or, more generally, s block metals[59][60][61] but have some of the physical properties of d block transition metals.[62] In fact, the metals all the way up to group 6 are united by being class-A cations ("hard" acids) that form more stable complexes with ligands whose donor atoms are the most electronegative nonmetals nitrogen, oxygen, and fluorine; metals later in the table form a transition to class-B cations ("soft" acids) that form more stable complexes with ligands whose donor atoms are the less electronegative heavier elements of groups 15 through 17.[63]
|
78 |
+
|
79 |
+
Meanwhile, lutetium behaves chemically as a lanthanide (with which it is often classified) but shows a mix of lanthanide and transition metal physical properties (as does yttrium).[64][65] Lawrencium, as an analogue of lutetium, would presumably display like characteristics.[n 6] The coinage metals in group 11 (copper, silver, and gold) are chemically capable of acting as either transition metals or main group metals.[68] The volatile group 12 metals, zinc, cadmium and mercury are sometimes regarded as linking the d block to the p block. Notionally they are d block elements but they have few transition metal properties and are more like their p block neighbors in group 13.[69][70] The relatively inert noble gases, in group 18, bridge the most reactive groups of elements in the periodic table—the halogens in group 17 and the alkali metals in group 1.[57]
|
80 |
+
|
81 |
+
The 1s, 2p, 3d, 4f, and 5g shells are each the first to have their value of ℓ, the azimuthal quantum number that determines a subshell's orbital angular momentum. This gives them some special properties[71] that have been referred to as kainosymmetry (from Greek καινός "new").[36][72] Elements filling these orbitals are usually less metallic than their heavier homologues, prefer lower oxidation states, and have smaller atomic and ionic radii.[72]
|
82 |
+
|
83 |
+
The above contractions may also be considered to be a general incomplete shielding effect in terms of how they impact the properties of the succeeding elements. The 2p, 3d, or 4f shells have no radial nodes and are smaller than expected. They therefore screen the nuclear charge incompletely, and therefore the valence electrons that fill immediately after the completion of such a core subshell are more tightly bound by the nucleus than would be expected. 1s is an exception, providing nearly complete shielding. This is in particular the reason why sodium has a first ionisation energy of 495.8 kJ/mol that is only slightly smaller than that of lithium, 520.2 kJ/mol, and why lithium acts as less electronegative than sodium in simple σ-bonded alkali metal compounds; sodium suffers an incomplete shielding effect from the preceding 2p elements, but lithium essentially does not.[71]
|
84 |
+
|
85 |
+
Kainosymmetry also explains the specific properties of the 2p, 3d, and 4f elements. The 2p subshell is small and of a similar radial extent as the 2s subshell, which facilitates orbital hybridisation. This does not work as well for the heavier p elements: for example, silicon in silane (SiH4) shows approximate sp2 hybridisation, whereas carbon in methane (CH4) shows an almost ideal sp3 hybridisation. The bonding in these nonorthogonal heavy p element hydrides is weakened; this situation worsens with more electronegative substituents as they magnify the difference in energy between the s and p subshells. The heavier p elements are often more stable in their higher oxidation states in organometallic compounds than in compounds with electronegative ligands. This follows Bent's rule: s character is concentrated in the bonds to the more electropositive substituents, while p character is concentrated in the bonds to the more electronegative substituents. Furthermore, the 2p elements prefer to participate in multiple bonding (observed in O=O and N≡N) to eliminate Pauli repulsion from the otherwise close s and p lone pairs: their π bonds are stronger and their single bonds weaker. The small size of the 2p shell is also responsible for the extremely high electronegativities of the 2p elements.[71]
|
86 |
+
|
87 |
+
The 3d elements show the opposite effect; the 3d orbitals are smaller than would be expected, with a radial extent similar to the 3p core shell, which weakens bonding to ligands because they cannot overlap with the ligands' orbitals well enough. These bonds are therefore stretched and therefore weaker compared to the homologous ones of the 4d and 5d elements (the 5d elements show an additional d-expansion due to relativistic effects). This also leads to low-lying excited states, which is probably related to the well-known fact that 3d compounds are often coloured (the light absorbed is visible). This also explains why the 3d contraction has a stronger effect on the following elements than the 4d or 5d ones do. As for the 4f elements, the difficulty that 4f has in being used for chemistry is also related to this, as are the strong incomplete screening effects; the 5g elements may show a similar contraction, but it is likely that relativistic effects will partly counteract this, as they would tend to cause expansion of the 5g shell.[71]
|
88 |
+
|
89 |
+
Another consequence is the increased metallicity of the following elements in a block after the first kainosymmetric orbital, along with a preference for higher oxidation states. This is visible comparing H and He (1s) with Li and Be (2s); N–F (2p) with P–Cl (3p); Fe and Co (3d) with Ru and Rh (4d); and Nd–Dy (4f) with U–Cf (5f). As kainosymmetric orbitals appear in the even rows (except for 1s), this creates an even–odd difference between periods from period 2 onwards: elements in even periods are smaller and have more oxidising higher oxidation states (if they exist), whereas elements in odd periods differ in the opposite direction.[72]
|
90 |
+
|
91 |
+
In 1789, Antoine Lavoisier published a list of 33 chemical elements, grouping them into gases, metals, nonmetals, and earths.[73] Chemists spent the following century searching for a more precise classification scheme. In 1829, Johann Wolfgang Döbereiner observed that many of the elements could be grouped into triads based on their chemical properties. Lithium, sodium, and potassium, for example, were grouped together in a triad as soft, reactive metals. Döbereiner also observed that, when arranged by atomic weight, the second member of each triad was roughly the average of the first and the third.[74] This became known as the Law of Triads.[75] German chemist Leopold Gmelin worked with this system, and by 1843 he had identified ten triads, three groups of four, and one group of five. Jean-Baptiste Dumas published work in 1857 describing relationships between various groups of metals. Although various chemists were able to identify relationships between small groups of elements, they had yet to build one scheme that encompassed them all.[74] In 1857, German chemist August Kekulé observed that carbon often has four other atoms bonded to it. Methane, for example, has one carbon atom and four hydrogen atoms.[76] This concept eventually became known as valency, where different elements bond with different numbers of atoms.[77]
|
92 |
+
|
93 |
+
In 1862, the French geologist Alexandre-Émile Béguyer de Chancourtois published an early form of the periodic table, which he called the telluric helix or screw. He was the first person to notice the periodicity of the elements. With the elements arranged in a spiral on a cylinder by order of increasing atomic weight, de Chancourtois showed that elements with similar properties seemed to occur at regular intervals. His chart included some ions and compounds in addition to elements. His paper also used geological rather than chemical terms and did not include a diagram. As a result, it received little attention until the work of Dmitri Mendeleev.[78]
|
94 |
+
|
95 |
+
In 1864, Julius Lothar Meyer, a German chemist, published a table with 28 elements. Realizing that an arrangement according to atomic weight did not exactly fit the observed periodicity in chemical properties he gave valency priority over minor differences in atomic weight. A missing element between Si and Sn was predicted with atomic weight 73 and valency 4.[79] Concurrently, English chemist William Odling published an arrangement of 57 elements, ordered on the basis of their atomic weights. With some irregularities and gaps, he noticed what appeared to be a periodicity of atomic weights among the elements and that this accorded with "their usually received groupings".[80] Odling alluded to the idea of a periodic law but did not pursue it.[81] He subsequently proposed (in 1870) a valence-based classification of the elements.[82]
|
96 |
+
|
97 |
+
English chemist John Newlands produced a series of papers from 1863 to 1866 noting that when the elements were listed in order of increasing atomic weight, similar physical and chemical properties recurred at intervals of eight. He likened such periodicity to the octaves of music.[83][84] This so-called Law of Octaves was ridiculed by Newlands' contemporaries, and the Chemical Society refused to publish his work.[85] Newlands was nonetheless able to draft a table of the elements and used it to predict the existence of missing elements, such as germanium.[86] The Chemical Society only acknowledged the significance of his discoveries five years after they credited Mendeleev.[87]
|
98 |
+
|
99 |
+
In 1867, Gustavus Hinrichs, a Danish born academic chemist based in America, published a spiral periodic system based on atomic spectra and weights, and chemical similarities. His work was regarded as idiosyncratic, ostentatious and labyrinthine and this may have militated against its recognition and acceptance.[88][89]
|
100 |
+
|
101 |
+
Russian chemistry professor Dmitri Mendeleev and German chemist Julius Lothar Meyer independently published their periodic tables in 1869 and 1870, respectively.[90] Mendeleev's table, dated March 1 [O.S. February 17] 1869,[91] was his first published version. That of Meyer was an expanded version of his (Meyer's) table of 1864.[92] They both constructed their tables by listing the elements in rows or columns in order of atomic weight and starting a new row or column when the characteristics of the elements began to repeat.[93]
|
102 |
+
|
103 |
+
The recognition and acceptance afforded to Mendeleev's table came from two decisions he made. The first was to leave gaps in the table when it seemed that the corresponding element had not yet been discovered.[94] Mendeleev was not the first chemist to do so, but he was the first to be recognized as using the trends in his periodic table to predict the properties of those missing elements, such as gallium and germanium.[95] The second decision was to occasionally ignore the order suggested by the atomic weights and switch adjacent elements, such as tellurium and iodine, to better classify them into chemical families.
|
104 |
+
|
105 |
+
Mendeleev published in 1869, using atomic weight to organize the elements, information determinable to fair precision in his time. Atomic weight worked well enough to allow Mendeleev to accurately predict the properties of missing elements.
|
106 |
+
|
107 |
+
Mendeleev took the unusual step of naming missing elements using the Sanskrit numerals eka (1), dvi (2), and tri (3) to indicate that the element in question was one, two, or three rows removed from a lighter congener. It has been suggested that Mendeleev, in doing so, was paying homage to ancient Sanskrit grammarians, in particular Pāṇini, who devised a periodic alphabet for the language.[96]
|
108 |
+
|
109 |
+
Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, it was proposed that the integer count of the nuclear charge is identical to the sequential place of each element in the periodic table. In 1913, English physicist Henry Moseley using X-ray spectroscopy confirmed this proposal experimentally. Moseley determined the value of the nuclear charge of each element and showed that Mendeleev's ordering actually places the elements in sequential order by nuclear charge.[97] Nuclear charge is identical to proton count and determines the value of the atomic number (Z) of each element. Using atomic number gives a definitive, integer-based sequence for the elements. Moseley predicted, in 1913, that the only elements still missing between aluminium (Z = 13) and gold (Z = 79) were Z = 43, 61, 72, and 75, all of which were later discovered. The atomic number is the absolute definition of an element and gives a factual basis for the ordering of the periodic table.[98]
|
110 |
+
|
111 |
+
In 1871, Mendeleev published his periodic table in a new form, with groups of similar elements arranged in columns rather than in rows, and those columns numbered I to VIII corresponding with the element's oxidation state. He also gave detailed predictions for the properties of elements he had earlier noted were missing, but should exist.[99] These gaps were subsequently filled as chemists discovered additional naturally occurring elements.[100] It is often stated that the last naturally occurring element to be discovered was francium (referred to by Mendeleev as eka-caesium) in 1939, but it was technically only the last element to be discovered in nature as opposed to by synthesis.[101] Plutonium, produced synthetically in 1940, was identified in trace quantities as a naturally occurring element in 1971.[102]
|
112 |
+
|
113 |
+
The popular[103] periodic table layout, also known as the common or standard form (as shown at various other points in this article), is attributable to Horace Groves Deming. In 1923, Deming, an American chemist, published short (Mendeleev style) and medium (18-column) form periodic tables.[104][n 7] Merck and Company prepared a handout form of Deming's 18-column medium table, in 1928, which was widely circulated in American schools. By the 1930s Deming's table was appearing in handbooks and encyclopedias of chemistry. It was also distributed for many years by the Sargent-Welch Scientific Company.[105][106][107]
|
114 |
+
|
115 |
+
With the development of modern quantum mechanical theories of electron configurations within atoms, it became apparent that each period (row) in the table corresponded to the filling of a quantum shell of electrons. Larger atoms have more electron sub-shells, so later tables have required progressively longer periods.[108]
|
116 |
+
|
117 |
+
In 1945, Glenn Seaborg, an American scientist, made the suggestion that the actinide elements, like the lanthanides, were filling an f sub-level. Before this time the actinides were thought to be forming a fourth d-block row. Seaborg's colleagues advised him not to publish such a radical suggestion as it would most likely ruin his career. As Seaborg considered he did not then have a career to bring into disrepute, he published anyway. Seaborg's suggestion was found to be correct and he subsequently went on to win the 1951 Nobel Prize in chemistry for his work in synthesizing actinide elements.[109][110][n 8]
Although minute quantities of some transuranic elements occur naturally,[3] they were all first discovered in laboratories. Their production has expanded the periodic table significantly, the first of these being neptunium, synthesized in 1939.[111] Because many of the transuranic elements are highly unstable and decay quickly, they are challenging to detect and characterize when produced. There have been controversies concerning the acceptance of competing discovery claims for some elements, requiring independent review to determine which party has priority, and hence naming rights.[112] In 2010, a joint Russia–US collaboration at Dubna, Moscow Oblast, Russia, claimed to have synthesized six atoms of tennessine (element 117), making it the most recently claimed discovery. It, along with nihonium (element 113), moscovium (element 115), and oganesson (element 118), are the four most recently named elements, whose names all became official on 28 November 2016.[113]
The modern periodic table is sometimes expanded into its long or 32-column form by reinstating the footnoted f-block elements into their natural position between the s- and d-blocks, as proposed by Alfred Werner.[114] Unlike the 18-column form, this arrangement results in "no interruptions in the sequence of increasing atomic numbers".[115] The relationship of the f-block to the other blocks of the periodic table also becomes easier to see.[116] William B. Jensen advocates a form of table with 32 columns on the grounds that the lanthanides and actinides are otherwise relegated in the minds of students as dull, unimportant elements that can be quarantined and ignored.[117] Despite these advantages, the 32-column form is generally avoided by editors on account of its undue rectangular ratio compared to a book page ratio,[118] and the familiarity of chemists with the modern form, as introduced by Seaborg.[119]
The color of the atomic number indicates the element's state of matter at 0 °C and 1 atm: 1 (red) = gas, 3 (black) = solid, 80 (green) = liquid, 109 (gray) = unknown.
The background color shows the element's subcategory in the metal–metalloid–nonmetal trend.
Within 100 years of the appearance of Mendeleev's table in 1869, Edward G. Mazurs had collected an estimated 700 different published versions of the periodic table.[117][122][123] As well as numerous rectangular variations, other periodic table formats have been shaped, for example,[n 9] like a circle, cube, cylinder, building, spiral, lemniscate,[124] octagonal prism, pyramid, sphere, or triangle. Such alternatives are often developed to highlight or emphasize chemical or physical properties of the elements that are not as apparent in traditional periodic tables.[123]
A popular[125] alternative structure is that of Otto Theodor Benfey (1960). The elements are arranged in a continuous spiral, with hydrogen at the centre and the transition metals, lanthanides, and actinides occupying peninsulas.[126]
Most periodic tables are two-dimensional;[3] three-dimensional tables date back to at least 1862 (pre-dating Mendeleev's two-dimensional table of 1869). More recent examples include Courtines' Periodic Classification (1925),[127] Wringley's Lamina System (1949),[128]
Giguère's Periodic helix (1965)[129] and Dufour's Periodic Tree (1996).[130] Going one further, Stowe's Physicist's Periodic Table (1989)[131] has been described as being four-dimensional (having three spatial dimensions and one colour dimension).[132]
The various forms of periodic tables can be thought of as lying on a chemistry–physics continuum.[133] Towards the chemistry end of the continuum can be found, as an example, Rayner-Canham's "unruly"[134] Inorganic Chemist's Periodic Table (2002),[135] which emphasizes trends and patterns, and unusual chemical relationships and properties. Near the physics end of the continuum is Janet's Left-Step Periodic Table (1928). This has a structure that shows a closer connection to the order of electron-shell filling and, by association, quantum mechanics.[136] A somewhat similar approach has been taken by Alper,[137] albeit criticized by Eric Scerri as disregarding the need to display chemical and physical periodicity.[138] Somewhere in the middle of the continuum is the ubiquitous common or standard form of periodic table. This is regarded as better expressing empirical trends in physical state, electrical and thermal conductivity, and oxidation numbers, and other properties easily inferred from traditional techniques of the chemical laboratory.[139] Its popularity is thought to be a result of this layout having a good balance of features in terms of ease of construction and size, and its depiction of atomic order and periodic trends.[81][140]
Simply following electron configurations, hydrogen (electronic configuration 1s¹) and helium (1s²) should be placed in groups 1 and 2, above lithium (1s²2s¹) and beryllium (1s²2s²).[141] While such a placement is common for hydrogen, it is rarely used for helium outside of the context of electron configurations: when the noble gases (then called "inert gases") were first discovered around 1900, they were known as "group 0", reflecting that no chemical reactivity had been observed for these elements at that point, and helium was placed at the top of that group, as it shared the extreme chemical inertness seen throughout the group. As the group's formal number changed, many authors continued to assign helium directly above neon, in group 18; one example of such a placement is the current IUPAC table.[142]
The position of hydrogen in group 1 is reasonably well settled. Its usual oxidation state is +1 as is the case for its heavier alkali metal congeners. Like lithium, it has a significant covalent chemistry.[143][144]
It can stand in for alkali metals in typical alkali metal structures.[145] It is capable of forming alloy-like hydrides, featuring metallic bonding, with some transition metals.[146]
Nevertheless, it is sometimes placed elsewhere. A common alternative is at the top of group 17[138] given hydrogen's strictly univalent and largely non-metallic chemistry, and the strictly univalent and non-metallic chemistry of fluorine (the element otherwise at the top of group 17). Sometimes, to show hydrogen has properties corresponding to both those of the alkali metals and the halogens, it is shown at the top of the two columns simultaneously.[147] Another suggestion is above carbon in group 14: placed that way, it fits well into the trends of increasing ionization potential values and electron affinity values, and is not too far from the electronegativity trend, even though hydrogen cannot show the tetravalence characteristic of the heavier group 14 elements.[148] Finally, hydrogen is sometimes placed separately from any group; this is based on its general properties being regarded as sufficiently different from those of the elements in any other group.
The other period 1 element, helium, is most often placed in group 18 with the other noble gases, as its extraordinary inertness is extremely close to that of the other light noble gases neon and argon.[149] Nevertheless, it is occasionally placed separately from any group as well.[150] The property that distinguishes helium from the rest of the noble gases is that in its closed electron shell, helium has only two electrons in the outermost electron orbital, while the rest of the noble gases have eight. Some authors, such as Henry Bent (the eponym of Bent's rule), Wojciech Grochala, and Felice Grandinetti, have argued that helium would be correctly placed in group 2, over beryllium; Charles Janet's left-step table also contains this assignment. The normalized ionization potentials and electron affinities show better trends with helium in group 2 than in group 18; helium is expected to be slightly more reactive than neon (which breaks the general trend of reactivity in the noble gases, where the heavier ones are more reactive); predicted helium compounds often lack neon analogues even theoretically, but sometimes have beryllium analogues; and helium over beryllium better follows the trend of first-row anomalies in the table (s >> p > d > f).[151][152][153]
Although scandium and yttrium are always the first two elements in group 3, the identity of the next two elements is not completely settled. They are commonly lanthanum and actinium, and less often lutetium and lawrencium. The two variants originate from historical difficulties in placing the lanthanides in the periodic table, and arguments as to where the f block elements start and end.[154][n 10][n 11] It has been claimed that such arguments are proof that, "it is a mistake to break the [periodic] system into sharply delimited blocks".[156] A third variant shows the two positions below yttrium as being occupied by the lanthanides and the actinides. A fourth variant shows group 3 bifurcating after Sc-Y, into an La-Ac branch, and an Lu-Lr branch.[29]
Chemical and physical arguments have been made in support of lutetium and lawrencium[157][158] but the majority of authors seem unconvinced.[159] Most working chemists are not aware there is any controversy.[160] In December 2015 an IUPAC project was established to make a recommendation on the matter.[161]
Lanthanum and actinium are commonly depicted as the remaining group 3 members.[162][n 12] It has been suggested that this layout originated in the 1940s, with the appearance of periodic tables relying on the electron configurations of the elements and the notion of the differentiating electron. The configurations of caesium, barium and lanthanum are [Xe]6s¹, [Xe]6s² and [Xe]5d¹6s². Lanthanum thus has a 5d differentiating electron and this establishes it "in group 3 as the first member of the d-block for period 6".[163] A consistent set of electron configurations is then seen in group 3: scandium [Ar]3d¹4s², yttrium [Kr]4d¹5s² and lanthanum [Xe]5d¹6s². Still in period 6, ytterbium was assigned an electron configuration of [Xe]4f¹³5d¹6s² and lutetium [Xe]4f¹⁴5d¹6s², "resulting in a 4f differentiating electron for lutetium and firmly establishing it as the last member of the f-block for period 6".[163] Later spectroscopic work found that the electron configuration of ytterbium was in fact [Xe]4f¹⁴6s². This meant that ytterbium and lutetium (the latter with [Xe]4f¹⁴5d¹6s²) both had 14 f-electrons, "resulting in a d- rather than an f- differentiating electron" for lutetium and making it an "equally valid candidate" with [Xe]5d¹6s² lanthanum, for the group 3 periodic table position below yttrium.[163] Lanthanum has the advantage of incumbency since the 5d¹ electron appears for the first time in its structure whereas it appears for the third time in lutetium, having also made a brief second appearance in gadolinium.[164]
In terms of chemical behaviour,[165] and trends going down group 3 for properties such as melting point, electronegativity and ionic radius,[166][167] scandium, yttrium, lanthanum and actinium are similar to their group 1–2 counterparts. In this variant, the number of f electrons in the most common (trivalent) ions of the f-block elements consistently matches their position in the f-block.[168] For example, the f-electron counts for the trivalent ions of the first three f-block elements are Ce 1, Pr 2 and Nd 3.[169]
In other tables, lutetium and lawrencium are the remaining group 3 members.[n 13] Early techniques for chemically separating scandium, yttrium and lutetium relied on the fact that these elements occurred together in the so-called "yttrium group" whereas La and Ac occurred together in the "cerium group".[163] Accordingly, lutetium rather than lanthanum was assigned to group 3 by some chemists in the 1920s and '30s.[n 14] Several physicists in the 1950s and '60s favoured lutetium, in light of a comparison of several of its physical properties with those of lanthanum.[163] This arrangement, in which lanthanum is the first member of the f-block, is disputed by some authors since lanthanum lacks any f-electrons. It has been argued that this is not a valid concern given other periodic table anomalies; thorium, for example, has no f-electrons yet is part of the f-block.[170] As for lawrencium, its gas phase atomic electron configuration was confirmed in 2015 as [Rn]5f¹⁴7s²7p¹. Such a configuration represents another periodic table anomaly, regardless of whether lawrencium is located in the f-block or the d-block, as the only potentially applicable p-block position has been reserved for nihonium with its predicted configuration of [Rn]5f¹⁴6d¹⁰7s²7p¹.[27][n 15]
Chemically, scandium, yttrium and lutetium (and presumably lawrencium) behave like trivalent versions of the group 1–2 metals.[172] On the other hand, trends going down the group for properties such as melting point, electronegativity and ionic radius, are similar to those found among their group 4–8 counterparts.[163] In this variant, the number of f electrons in the gaseous forms of the f-block atoms usually matches their position in the f-block. For example, the f-electron counts for the first five f-block elements are La 0, Ce 1, Pr 3, Nd 4 and Pm 5.[163]
A few authors position all thirty lanthanides and actinides in the two positions below yttrium (usually via footnote markers).
This variant, which is stated in the 2005 Red Book to be the IUPAC-agreed version as of 2005 (a number of later versions exist, and the last update is from 1 December 2018),[173][n 16] emphasizes similarities in the chemistry of the 15 lanthanide elements (La–Lu), at the possible expense of ambiguity as to which elements occupy the two group 3 positions below yttrium, and of an f block that is 15 columns wide (there can only be 14 elements in any row of the f block).[n 17] However, this similarity does not extend to the 15 actinide elements (Ac–Lr), which show a much wider variety in their chemistries.[175] This form moreover reduces the f-block to a degenerate branch of group 3 of the d-block; it dates back to the 1920s, when the lanthanides were thought to have their f electrons as core electrons, which is now known to be false. It is also false for the actinides, many of which show stable oxidation states above +3.[176]
In this variant, group 3 bifurcates after Sc-Y into a La-Ac branch, and a Lu-Lr branch. This arrangement is consistent with the hypothesis that arguments in favour of either Sc-Y-La-Ac or Sc-Y-Lu-Lr based on chemical and physical data are inconclusive.[177] As noted, trends going down Sc-Y-La-Ac match trends in groups 1−2[178] whereas trends going down Sc-Y-Lu-Lr better match trends in groups 4−10.[163]
The bifurcation of group 3 is a throwback to Mendeleev's eight-column form, in which seven of the main groups each have two subgroups. Tables featuring a bifurcated group 3 have been periodically proposed since that time.[n 18]
The definition of a transition metal, as given by IUPAC in the Gold Book, is an element whose atom has an incomplete d sub-shell, or which can give rise to cations with an incomplete d sub-shell.[179] By this definition all of the elements in groups 3–11 are transition metals. The IUPAC definition therefore excludes group 12, comprising zinc, cadmium and mercury, from the transition metals category. However, the 2005 IUPAC nomenclature as codified in the Red Book gives both the group 3–11 and group 3–12 definitions of the transition metals as alternatives.
Some chemists treat the categories "d-block elements" and "transition metals" interchangeably, thereby including groups 3–12 among the transition metals. In this instance the group 12 elements are treated as a special case of transition metal in which the d electrons are not ordinarily given up for chemical bonding (they can sometimes contribute to the valence bonding orbitals even so, as in zinc fluoride).[180] The 2007 report of mercury(IV) fluoride (HgF4), a compound in which mercury would use its d electrons for bonding, has prompted some commentators to suggest that mercury can be regarded as a transition metal.[181] Other commentators, such as Jensen,[182] have argued that the formation of a compound like HgF4 can occur only under highly abnormal conditions; indeed, its existence is currently disputed. As such, mercury could not be regarded as a transition metal by any reasonable interpretation of the ordinary meaning of the term.[182]
Still other chemists further exclude the group 3 elements from the definition of a transition metal. They do so on the basis that the group 3 elements do not form any ions having a partially occupied d shell and do not therefore exhibit properties characteristic of transition metal chemistry.[183] In this case, only groups 4–11 are regarded as transition metals. This categorisation is however not one of the alternatives considered by IUPAC. Though the group 3 elements show few of the characteristic chemical properties of the transition metals, the same is true of the heavy members of groups 4 and 5, which also are mostly restricted to the group oxidation state in their chemistry. Moreover, the group 3 elements show characteristic physical properties of transition metals (on account of the presence in each atom of a single d electron).[62]
Although all elements up to oganesson have been discovered, of the elements above hassium (element 108), only copernicium (element 112), nihonium (element 113), and flerovium (element 114) have known chemical properties, and conclusive categorisation at present has not been reached.[55] Some of these may behave differently from what would be predicted by extrapolation, due to relativistic effects; for example, copernicium and flerovium have been predicted to possibly exhibit some noble-gas-like properties, even though neither is placed in group 18 with the other noble gases.[55][184] The current experimental evidence still leaves open the question of whether copernicium and flerovium behave more like metals or noble gases.[55][185] At the same time, oganesson (element 118) is expected to be a solid semiconductor at standard conditions, despite being in group 18.[186]
Currently, the periodic table has seven complete rows, with all spaces filled in with discovered elements. Future elements would have to begin an eighth row. Nevertheless, it is unclear whether new eighth-row elements will continue the pattern of the current periodic table, or require further adaptations or adjustments. Seaborg expected the eighth period to follow the previously established pattern exactly, so that it would include a two-element s-block for elements 119 and 120, a new g-block for the next 18 elements, and 30 additional elements continuing the current f-, d-, and p-blocks, culminating in element 168, the next noble gas.[188] More recently, physicists such as Pekka Pyykkö have theorized that these additional elements do not exactly follow the Madelung rule, which predicts how electron shells are filled and thus affects the appearance of the present periodic table. There are currently several competing theoretical models for the placement of the elements of atomic number less than or equal to 172. In all of these it is element 172, rather than element 168, that emerges as the next noble gas after oganesson, although these must be regarded as speculative as no complete calculations have been done beyond element 123.[189][190]
The number of possible elements is not known. A very early suggestion made by Elliot Adams in 1911, and based on the arrangement of elements in each horizontal periodic table row, was that elements of atomic weight greater than circa 256 (which would equate to between elements 99 and 100 in modern-day terms) did not exist.[191] A higher, more recent estimate is that the periodic table may end soon after the island of stability,[192] whose centre is predicted to lie between element 110 and element 126, as the extension of the periodic and nuclide tables is restricted by proton and neutron drip lines as well as decreasing stability towards spontaneous fission.[193][194] Other predictions of an end to the periodic table include at element 128 by John Emsley,[3] at element 137 by Richard Feynman,[195] at element 146 by Yogendra Gambhir,[196] and at element 155 by Albert Khazan.[3][n 19]
The Bohr model exhibits difficulty for atoms with atomic number greater than 137, as any element with an atomic number greater than 137 would require 1s electrons to be travelling faster than c, the speed of light.[197] Hence the non-relativistic Bohr model is inaccurate when applied to such an element.
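The arithmetic behind this claim is a standard Bohr-model result rather than something stated in the article: the speed of the innermost (1s) electron grows linearly with the nuclear charge,

    v_{1s} = Z \alpha c \approx \frac{Z}{137.036}\, c

where α is the fine-structure constant, so for a point nucleus with Z > 137 the model formally requires v_{1s} > c.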
The relativistic Dirac equation has problems for elements with more than 137 protons. For such elements, the wave function of the Dirac ground state is oscillatory rather than bound, and there is no gap between the positive and negative energy spectra, as in the Klein paradox.[198] More accurate calculations taking into account the effects of the finite size of the nucleus indicate that the binding energy first exceeds the limit for elements with more than 173 protons. For heavier elements, if the innermost orbital (1s) is not filled, the electric field of the nucleus will pull an electron out of the vacuum, resulting in the spontaneous emission of a positron.[199] This does not happen if the innermost orbital is filled, so that element 173 is not necessarily the end of the periodic table.[195]
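A textbook statement of the point-nucleus problem, not a quotation from the article, is that the Dirac ground-state energy is

    E_{1s} = m_{e} c^{2} \sqrt{1 - (Z\alpha)^{2}}

which ceases to be real for Zα > 1, i.e. beyond Z ≈ 137; as noted above, allowing for the finite size of the nucleus moves the critical charge out to about Z ≈ 173.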
The many different forms of periodic table have prompted the question of whether there is an optimal or definitive form of periodic table.[200] The answer to this question is thought to depend on whether the chemical periodicity seen to occur among the elements has an underlying truth, effectively hard-wired into the universe, or if any such periodicity is instead the product of subjective human interpretation, contingent upon the circumstances, beliefs and predilections of human observers. An objective basis for chemical periodicity would settle the questions about the location of hydrogen and helium, and the composition of group 3. Such an underlying truth, if it exists, is thought to have not yet been discovered. In its absence, the many different forms of periodic table can be regarded as variations on the theme of chemical periodicity, each of which explores and emphasizes different aspects, properties, perspectives and relationships of and among the elements.[n 20]
In celebration of the periodic table's 150th anniversary, the United Nations declared the year 2019 as the International Year of the Periodic Table, celebrating "one of the most significant achievements in science".[203]
en/1166.html.txt
ADDED
@@ -0,0 +1,70 @@
In biology, taxonomy (from Ancient Greek τάξις (taxis), meaning 'arrangement', and -νομία (-nomia), meaning 'method') is the science of naming, defining (circumscribing) and classifying groups of biological organisms on the basis of shared characteristics. Organisms are grouped together into taxa (singular: taxon) and these groups are given a taxonomic rank; groups of a given rank can be aggregated to form a super-group of higher rank, thus creating a taxonomic hierarchy. The principal ranks in modern use are domain, kingdom, phylum (division is sometimes used in botany in place of phylum), class, order, family, genus, and species. The Swedish botanist Carl Linnaeus is regarded as the founder of the current system of taxonomy, as he developed a system known as Linnaean taxonomy for categorizing organisms and binomial nomenclature for naming organisms.
With the advent of such fields of study as phylogenetics, cladistics, and systematics, the Linnaean system has progressed to a system of modern biological classification based on the evolutionary relationships between organisms, both living and extinct.
The exact definition of taxonomy varies from source to source, but the core of the discipline remains: the conception, naming, and classification of groups of organisms.[1] As points of reference, recent definitions of taxonomy are presented below:
The varied definitions either place taxonomy as a sub-area of systematics (definition 2), invert that relationship (definition 6), or appear to consider the two terms synonymous. There is some disagreement as to whether biological nomenclature is considered a part of taxonomy (definitions 1 and 2), or a part of systematics outside taxonomy.[8] For example, definition 6 is paired with the following definition of systematics that places nomenclature outside taxonomy:[6]
A whole set of terms including taxonomy, systematic biology, systematics, biosystematics, scientific classification, biological classification, and phylogenetics have at times had overlapping meanings – sometimes the same, sometimes slightly different, but always related and intersecting.[1][9] The broadest meaning of "taxonomy" is used here. The term itself was introduced in 1813 by de Candolle, in his Théorie élémentaire de la botanique.[10]
A taxonomic revision or taxonomic review is a novel analysis of the variation patterns in a particular taxon. This analysis may be executed on the basis of any combination of the various available kinds of characters, such as morphological, anatomical, palynological, biochemical and genetic. A monograph or complete revision is a revision that is comprehensive for a taxon for the information given at a particular time, and for the entire world. Other (partial) revisions may be restricted in the sense that they may only use some of the available character sets or have a limited spatial scope. A revision results in a confirmation of, or new insights into, the relationships between the subtaxa within the taxon under study, which may result in a change in the classification of these subtaxa, the identification of new subtaxa, or the merger of previous subtaxa.[11]
The term "alpha taxonomy" is primarily used today to refer to the discipline of finding, describing, and naming taxa, particularly species.[12] In earlier literature, the term had a different meaning, referring to morphological taxonomy, and the products of research through the end of the 19th century.[13]
William Bertram Turrill introduced the term "alpha taxonomy" in a series of papers published in 1935 and 1937 in which he discussed the philosophy and possible future directions of the discipline of taxonomy.[14]
… there is an increasing desire amongst taxonomists to consider their problems from wider viewpoints, to investigate the possibilities of closer co-operation with their cytological, ecological and genetics colleagues and to acknowledge that some revision or expansion, perhaps of a drastic nature, of their aims and methods, may be desirable … Turrill (1935) has suggested that while accepting the older invaluable taxonomy, based on structure, and conveniently designated "alpha", it is possible to glimpse a far-distant taxonomy built upon as wide a basis of morphological and physiological facts as possible, and one in which "place is found for all observational and experimental data relating, even if indirectly, to the constitution, subdivision, origin, and behaviour of species and other taxonomic groups". Ideals can, it may be said, never be completely realized. They have, however, a great value of acting as permanent stimulants, and if we have some, even vague, ideal of an "omega" taxonomy we may progress a little way down the Greek alphabet. Some of us please ourselves by thinking we are now groping in a "beta" taxonomy.[14]
Turrill thus explicitly excludes from alpha taxonomy various areas of study that he includes within taxonomy as a whole, such as ecology, physiology, genetics, and cytology. He further excludes phylogenetic reconstruction from alpha taxonomy (pp. 365–366).
Later authors have used the term in a different sense, to mean the delimitation of species (not subspecies or taxa of other ranks), using whatever investigative techniques are available, and including sophisticated computational or laboratory techniques.[15][12] Thus, Ernst Mayr in 1968 defined "beta taxonomy" as the classification of ranks higher than species.[16]
An understanding of the biological meaning of variation and of the evolutionary origin of groups of related species is even more important for the second stage of taxonomic activity, the sorting of species into groups of relatives ("taxa") and their arrangement in a hierarchy of higher categories. This activity is what the term classification denotes; it is also referred to as "beta taxonomy".
How species should be defined in a particular group of organisms gives rise to practical and theoretical problems that are referred to as the species problem. The scientific work of deciding how to define species has been called microtaxonomy.[17][18][12] By extension, macrotaxonomy is the study of groups at the higher taxonomic ranks subgenus and above.[12]
While some descriptions of taxonomic history attempt to date taxonomy to ancient civilizations, a truly scientific attempt to classify organisms did not occur until the 18th century. Earlier works were primarily descriptive and focused on plants that were useful in agriculture or medicine. There are a number of stages in this scientific thinking. Early taxonomy was based on arbitrary criteria, the so-called "artificial systems", including Linnaeus's system of sexual classification. Later came systems based on a more complete consideration of the characteristics of taxa, referred to as "natural systems", such as those of de Jussieu (1789), de Candolle (1813) and Bentham and Hooker (1862–1863). These were pre-evolutionary in thinking. The publication of Charles Darwin's On the Origin of Species (1859) led to new ways of thinking about classification based on evolutionary relationships. This was the concept of phyletic systems, from 1883 onwards. This approach was typified by those of Eichler (1883) and Engler (1886–1892). The advent of molecular genetics and statistical methodology allowed the creation of the modern era of "phylogenetic systems" based on cladistics, rather than morphology alone.[19][page needed][20][page needed][21][page needed]
Naming and classifying our surroundings has probably been taking place as long as mankind has been able to communicate. It would always have been important to know the names of poisonous and edible plants and animals in order to communicate this information to other members of the family or group. Medicinal plant illustrations show up in Egyptian wall paintings from c. 1500 BC, indicating that the uses of different species were understood and that a basic taxonomy was in place.[22]
Organisms were first classified by Aristotle (Greece, 384–322 BC) during his stay on the Island of Lesbos.[23][24][25] He classified beings by their parts, or in modern terms attributes, such as having live birth, having four legs, laying eggs, having blood, or being warm-bodied.[26] He divided all living things into two groups: plants and animals.[24] Some of his groups of animals, such as Anhaima (animals without blood, translated as invertebrates) and Enhaima (animals with blood, roughly the vertebrates), as well as groups like the sharks and cetaceans, are still commonly used today.[27] His student Theophrastus (Greece, 370–285 BC) carried on this tradition, mentioning some 500 plants and their uses in his Historia Plantarum. Again, several plant groups currently still recognized can be traced back to Theophrastus, such as Cornus, Crocus, and Narcissus.[24]
Taxonomy in the Middle Ages was largely based on the Aristotelian system,[26] with additions concerning the philosophical and existential order of creatures. This included concepts such as the Great chain of being in the Western scholastic tradition,[26] again deriving ultimately from Aristotle. The Aristotelian system did not classify plants or fungi, owing to the lack of microscopes at the time,[25] as his ideas were based on arranging the complete world in a single continuum, as per the scala naturae (the Natural Ladder).[24] This, too, was taken into consideration in the Great chain of being.[24] Advances were made by scholars such as Procopius, Timotheos of Gaza, Demetrios Pepagomenos, and Thomas Aquinas. Medieval thinkers used abstract philosophical and logical categorizations more suited to abstract philosophy than to pragmatic taxonomy.[24]
During the Renaissance, the Age of Reason, and the Enlightenment, categorizing organisms became more prevalent,[24]
and taxonomic works became ambitious enough to replace the ancient texts. This is sometimes credited to the development of sophisticated optical lenses, which allowed the morphology of organisms to be studied in much greater detail. One of the earliest authors to take advantage of this leap in technology was the Italian physician Andrea Cesalpino (1519–1603), who has been called "the first taxonomist".[28] His magnum opus De Plantis came out in 1583, and described more than 1500 plant species.[29][30] Two large plant families that he first recognized are still in use today: the Asteraceae and Brassicaceae.[31] Then in the 17th century John Ray (England, 1627–1705) wrote many important taxonomic works.[25] Arguably his greatest accomplishment was Methodus Plantarum Nova (1682),[32] in which he published details of over 18,000 plant species. At the time, his classifications were perhaps the most complex yet produced by any taxonomist, as he based his taxa on many combined characters. The next major taxonomic works were produced by Joseph Pitton de Tournefort (France, 1656–1708).[33] His work from 1700, Institutiones Rei Herbariae, included more than 9000 species in 698 genera, which directly influenced Linnaeus, as it was the text he used as a young student.[22]
The Swedish botanist Carl Linnaeus (1707–1778)[26] ushered in a new era of taxonomy. With his major works Systema Naturae 1st Edition in 1735,[34] Species Plantarum in 1753,[35] and Systema Naturae 10th Edition,[36] he revolutionized modern taxonomy. His works implemented a standardized binomial naming system for animal and plant species,[37] which proved to be an elegant solution to a chaotic and disorganized taxonomic literature. He not only introduced the standard of class, order, genus, and species, but also made it possible to identify plants and animals from his book, by using the smaller parts of the flower.[37] Thus the Linnaean system was born, and is still used in essentially the same way today as it was in the 18th century.[37] Currently, plant and animal taxonomists regard Linnaeus' work as the "starting point" for valid names (at 1753 and 1758 respectively).[38] Names published before these dates are referred to as "pre-Linnaean", and not considered valid (with the exception of spiders published in Svenska Spindlar[39]). Even taxonomic names published by Linnaeus himself before these dates are considered pre-Linnaean.[22]
Whereas Linnaeus aimed simply to create readily identifiable taxa, the idea of the Linnaean taxonomy as translating into a sort of dendrogram of the animal and plant kingdoms was formulated toward the end of the 18th century, well before On the Origin of Species was published.[25] Among early works exploring the idea of a transmutation of species were Erasmus Darwin's 1796 Zoönomia and Jean-Baptiste Lamarck's Philosophie Zoologique of 1809.[12] The idea was popularized in the Anglophone world by the speculative but widely read Vestiges of the Natural History of Creation, published anonymously by Robert Chambers in 1844.[40]
With Darwin's theory, a general acceptance quickly appeared that a classification should reflect the Darwinian principle of common descent.[41] Tree of life representations became popular in scientific works, with known fossil groups incorporated. One of the first modern groups tied to fossil ancestors was birds.[42] Using the then newly discovered fossils of Archaeopteryx and Hesperornis, Thomas Henry Huxley pronounced that they had evolved from dinosaurs, a group formally named by Richard Owen in 1842.[43][44] The resulting description, that of dinosaurs "giving rise to" or being "the ancestors of" birds, is the essential hallmark of evolutionary taxonomic thinking. As more and more fossil groups were found and recognized in the late 19th and early 20th centuries, palaeontologists worked to understand the history of animals through the ages by linking together known groups.[45] With the modern evolutionary synthesis of the early 1940s, an essentially modern understanding of the evolution of the major groups was in place. As evolutionary taxonomy is based on Linnaean taxonomic ranks, the two terms are largely interchangeable in modern use.[46]
The cladistic method has emerged since the 1960s.[41] In 1958, Julian Huxley used the term clade.[12] Later, in 1960, Cain and Harrison introduced the term cladistic.[12] The salient feature is arranging taxa in a hierarchical evolutionary tree, ignoring ranks.[41] A taxon is called monophyletic, if it includes all the descendants of an ancestral form.[47][48] Groups that have descendant groups removed from them are termed paraphyletic,[47] while groups representing more than one branch from the tree of life are called polyphyletic.[47][48] The International Code of Phylogenetic Nomenclature or PhyloCode is intended to regulate the formal naming of clades.[49][50] Linnaean ranks will be optional under the PhyloCode, which is intended to coexist with the current, rank-based codes.[50]
Well before Linnaeus, plants and animals were considered separate Kingdoms.[51] Linnaeus used this as the top rank, dividing the physical world into the plant, animal and mineral kingdoms. As advances in microscopy made classification of microorganisms possible, the number of kingdoms increased, five- and six-kingdom systems being the most common.
Domains are a relatively new grouping. First proposed in 1977, Carl Woese's three-domain system was not generally accepted until later.[52] One main characteristic of the three-domain method is the separation of Archaea and Bacteria, previously grouped into the single kingdom Bacteria (a kingdom also sometimes called Monera),[51] with the Eukaryota for all organisms whose cells contain a nucleus.[53] A small number of scientists include a sixth kingdom, Archaea, but do not accept the domain method.[51]
Thomas Cavalier-Smith, who has published extensively on the classification of protists, has recently proposed that the Neomura, the clade that groups together the Archaea and Eucarya, would have evolved from Bacteria, more precisely from Actinobacteria. His 2004 classification treated the archaeobacteria as part of a subkingdom of the kingdom Bacteria, i.e., he rejected the three-domain system entirely.[54] Stefan Luketa in 2012 proposed a five "dominion" system, adding Prionobiota (acellular and without nucleic acid) and Virusobiota (acellular but with nucleic acid) to the traditional three domains.[55]
Partial classifications exist for many individual groups of organisms and are revised and replaced as new information becomes available; however, comprehensive, published treatments of most or all life are rarer; recent examples are that of Adl et al., 2012 and 2019,[63][64] which covers eukaryotes only with an emphasis on protists, and Ruggiero et al., 2015,[65] covering both eukaryotes and prokaryotes to the rank of Order, although both exclude fossil representatives.[65] A separate compilation (Ruggiero, 2014)[66] covers extant taxa to the rank of family. Other, database-driven treatments include the Encyclopedia of Life, the Global Biodiversity Information Facility, the NCBI taxonomy database, the Interim Register of Marine and Nonmarine Genera, the Open Tree of Life, and the Catalogue of Life. The Paleobiology Database is a resource for fossils.
Biological taxonomy is a sub-discipline of biology, and is generally practiced by biologists known as "taxonomists", though enthusiastic naturalists are also frequently involved in the publication of new taxa.[67] Because taxonomy aims to describe and organize life, the work conducted by taxonomists is essential for the study of biodiversity and the resulting field of conservation biology.[68][69]
Biological classification is a critical component of the taxonomic process. As a result, it informs the user as to what the relatives of the taxon are hypothesized to be. Biological classification uses taxonomic ranks, including among others (in order from most inclusive to least inclusive): Domain, Kingdom, Phylum, Class, Order, Family, Genus, Species, and Strain.[70][note 1]
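As a minimal illustration, not part of the source text, the ranked classification of a single species can be represented as an ordered rank-to-taxon mapping; here the Asian elephant, which reappears below as an example of authority citation, is encoded in Python:

    from collections import OrderedDict

    # Ranked classification of Elephas maximus Linnaeus, 1758,
    # listed from most inclusive to least inclusive rank.
    classification = OrderedDict([
        ("domain",  "Eukaryota"),
        ("kingdom", "Animalia"),
        ("phylum",  "Chordata"),
        ("class",   "Mammalia"),
        ("order",   "Proboscidea"),
        ("family",  "Elephantidae"),
        ("genus",   "Elephas"),
        ("species", "Elephas maximus"),
    ])

    for rank, taxon in classification.items():
        print(f"{rank:>8}: {taxon}")

Each rank in the mapping is nested within the one above it, which is exactly the hierarchical property the ranked system encodes.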
The "definition" of a taxon is encapsulated by its description or its diagnosis or by both combined. There are no set rules governing the definition of taxa, but the naming and publication of new taxa is governed by sets of rules.[8] In zoology, the nomenclature for the more commonly used ranks (superfamily to subspecies), is regulated by the International Code of Zoological Nomenclature (ICZN Code).[71] In the fields of phycology, mycology, and botany, the naming of taxa is governed by the International Code of Nomenclature for algae, fungi, and plants (ICN).[72]
The initial description of a taxon involves five main requirements:[73]
However, often much more information is included, like the geographic range of the taxon, ecological notes, chemistry, behavior, etc. How researchers arrive at their taxa varies: depending on the available data, and resources, methods vary from simple quantitative or qualitative comparisons of striking features, to elaborate computer analyses of large amounts of DNA sequence data.[74]
An "authority" may be placed after a scientific name.[75] The authority is the name of the scientist or scientists who first validly published the name.[75] For example, in 1758 Linnaeus gave the Asian elephant the scientific name Elephas maximus, so the name is sometimes written as "Elephas maximus Linnaeus, 1758".[76] The names of authors are frequently abbreviated: the abbreviation L., for Linnaeus, is commonly used. In botany, there is, in fact, a regulated list of standard abbreviations (see list of botanists by author abbreviation).[77] The system for assigning authorities differs slightly between botany and zoology.[8] However, it is standard that if the genus of a species has been changed since the original description, the original authority's name is placed in parentheses.[78]
In phenetics, also known as taximetrics, or numerical taxonomy, organisms are classified based on overall similarity, regardless of their phylogeny or evolutionary relationships.[12] It results in a measure of evolutionary "distance" between taxa. Phenetic methods have become relatively rare in modern times, largely superseded by cladistic analyses, as phenetic methods do not distinguish common ancestral (or plesiomorphic) traits from new common (or apomorphic) traits.[79] However, certain phenetic methods, such as neighbor joining, have found their way into cladistics, as a reasonable approximation of phylogeny when more advanced methods (such as Bayesian inference) are too computationally expensive.[80]
Modern taxonomy uses database technologies to search and catalogue classifications and their documentation.[81] While there is no commonly used database, there are comprehensive databases such as the Catalogue of Life, which attempts to list every documented species.[82] The catalogue listed 1.64 million species for all kingdoms as of April 2016, claiming coverage of more than three quarters of the estimated species known to modern science.[83]
en/1167.html.txt
ADDED
@@ -0,0 +1,70 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
In biology, taxonomy (from Ancient Greek τάξις (taxis), meaning 'arrangement', and -νομία (-nomia), meaning 'method') is the science of naming, defining (circumscribing) and classifying groups of biological organisms on the basis of shared characteristics. Organisms are grouped together into taxa (singular: taxon) and these groups are given a taxonomic rank; groups of a given rank can be aggregated to form a super-group of higher rank, thus creating a taxonomic hierarchy. The principal ranks in modern use are domain, kingdom, phylum (division is sometimes used in botany in place of phylum), class, order, family, genus, and species. The Swedish botanist Carl Linnaeus is regarded as the founder of the current system of taxonomy, as he developed a system known as Linnaean taxonomy for categorizing organisms and binomial nomenclature for naming organisms.
|
4 |
+
|
5 |
+
With the advent of such fields of study as phylogenetics, cladistics, and systematics, the Linnaean system has progressed to a system of modern biological classification based on the evolutionary relationships between organisms, both living and extinct.
|
6 |
+
|
7 |
+
The exact definition of taxonomy varies from source to source, but the core of the discipline remains: the conception, naming, and classification of groups of organisms.[1] As points of reference, recent definitions of taxonomy are presented below:
|
8 |
+
|
9 |
+
The varied definitions either place taxonomy as a sub-area of systematics (definition 2), invert that relationship (definition 6), or appear to consider the two terms synonymous. There is some disagreement as to whether biological nomenclature is considered a part of taxonomy (definitions 1 and 2), or a part of systematics outside taxonomy.[8] For example, definition 6 is paired with the following definition of systematics that places nomenclature outside taxonomy:[6]
|
10 |
+
|
11 |
+
A whole set of terms including taxonomy, systematic biology, systematics, biosystematics, scientific classification, biological classification, and phylogenetics have at times had overlapping meanings – sometimes the same, sometimes slightly different, but always related and intersecting.[1][9] The broadest meaning of "taxonomy" is used here. The term itself was introduced in 1813 by de Candolle, in his Théorie élémentaire de la botanique.[10]
|
12 |
+
|
13 |
+
A taxonomic revision or taxonomic review is a novel analysis of the variation patterns in a particular taxon. This analysis may be executed on the basis of any combination of the various available kinds of characters, such as morphological, anatomical, palynological, biochemical and genetic. A monograph or complete revision is a revision that is comprehensive for a taxon for the information given at a particular time, and for the entire world. Other (partial) revisions may be restricted in the sense that they may only use some of the available character sets or have a limited spatial scope. A revision results in a conformation of or new insights in the relationships between the subtaxa within the taxon under study, which may result in a change in the classification of these subtaxa, the identification of new subtaxa, or the merger of previous subtaxa.[11]
|
14 |
+
|
15 |
+
The term "alpha taxonomy" is primarily used today to refer to the discipline of finding, describing, and naming taxa, particularly species.[12] In earlier literature, the term had a different meaning, referring to morphological taxonomy, and the products of research through the end of the 19th century.[13]
|
16 |
+
|
17 |
+
William Bertram Turrill introduced the term "alpha taxonomy" in a series of papers published in 1935 and 1937 in which he discussed the philosophy and possible future directions of the discipline of taxonomy.[14]
|
18 |
+
|
19 |
+
… there is an increasing desire amongst taxonomists to consider their problems from wider viewpoints, to investigate the possibilities of closer co-operation with their cytological, ecological and genetics colleagues and to acknowledge that some revision or expansion, perhaps of a drastic nature, of their aims and methods, may be desirable … Turrill (1935) has suggested that while accepting the older invaluable taxonomy, based on structure, and conveniently designated "alpha", it is possible to glimpse a far-distant taxonomy built upon as wide a basis of morphological and physiological facts as possible, and one in which "place is found for all observational and experimental data relating, even if indirectly, to the constitution, subdivision, origin, and behaviour of species and other taxonomic groups". Ideals can, it may be said, never be completely realized. They have, however, a great value of acting as permanent stimulants, and if we have some, even vague, ideal of an "omega" taxonomy we may progress a little way down the Greek alphabet. Some of us please ourselves by thinking we are now groping in a "beta" taxonomy.[14]
|
20 |
+
|
21 |
+
Turrill thus explicitly excludes from alpha taxonomy various areas of study that he includes within taxonomy as a whole, such as ecology, physiology, genetics, and cytology. He further excludes phylogenetic reconstruction from alpha taxonomy (pp. 365–366).
|
22 |
+
|
23 |
+
Later authors have used the term in a different sense, to mean the delimitation of species (not subspecies or taxa of other ranks), using whatever investigative techniques are available, and including sophisticated computational or laboratory techniques.[15][12] Thus, Ernst Mayr in 1968 defined "beta taxonomy" as the classification of ranks higher than species.[16]
|
24 |
+
|
25 |
+
An understanding of the biological meaning of variation and of the evolutionary origin of groups of related species is even more important for the second stage of taxonomic activity, the sorting of species into groups of relatives ("taxa") and their arrangement in a hierarchy of higher categories. This activity is what the term classification denotes; it is also referred to as "beta taxonomy".
|
26 |
+
|
27 |
+
How species should be defined in a particular group of organisms gives rise to practical and theoretical problems that are referred to as the species problem. The scientific work of deciding how to define species has been called microtaxonomy.[17][18][12] By extension, macrotaxonomy is the study of groups at the higher taxonomic ranks subgenus and above.[12]
|
28 |
+
|
29 |
+
While some descriptions of taxonomic history attempt to date taxonomy to ancient civilizations, a truly scientific attempt to classify organisms did not occur until the 18th century. Earlier works were primarily descriptive and focused on plants that were useful in agriculture or medicine. There are a number of stages in this scientific thinking. Early taxonomy was based on arbitrary criteria, the so-called "artificial systems", including Linnaeus's system of sexual classification. Later came systems based on a more complete consideration of the characteristics of taxa, referred to as "natural systems", such as those of de Jussieu (1789), de Candolle (1813) and Bentham and Hooker (1862–1863). These were pre-evolutionary in thinking. The publication of Charles Darwin's On the Origin of Species (1859) led to new ways of thinking about classification based on evolutionary relationships. This was the concept of phyletic systems, from 1883 onwards. This approach was typified by those of Eichler (1883) and Engler (1886–1892). The advent of molecular genetics and statistical methodology allowed the creation of the modern era of "phylogenetic systems" based on cladistics, rather than morphology alone.[19][page needed][20][page needed][21][page needed]
|
30 |
+
|
31 |
+
Naming and classifying our surroundings has probably been taking place as long as mankind has been able to communicate. It would always have been important to know the names of poisonous and edible plants and animals in order to communicate this information to other members of the family or group. Medicinal plant illustrations show up in Egyptian wall paintings from c. 1500 BC, indicating that the uses of different species were understood and that a basic taxonomy was in place.[22]
|
32 |
+
|
33 |
+
Organisms were first classified by Aristotle (Greece, 384–322 BC) during his stay on the Island of Lesbos.[23][24][25] He classified beings by their parts, or in modern terms attributes, such as having live birth, having four legs, laying eggs, having blood, or being warm-bodied.[26] He divided all living things into two groups: plants and animals.[24] Some of his groups of animals, such as Anhaima (animals without blood, translated as invertebrates) and Enhaima (animals with blood, roughly the vertebrates), as well as groups like the sharks and cetaceans, are still commonly used today.[27] His student Theophrastus (Greece, 370–285 BC) carried on this tradition, mentioning some 500 plants and their uses in his Historia Plantarum. Again, several plant groups currently still recognized can be traced back to Theophrastus, such as Cornus, Crocus, and Narcissus.[24]
|
34 |
+
|
35 |
+
Taxonomy in the Middle Ages was largely based on the Aristotelian system,[26] with additions concerning the philosophical and existential order of creatures. This included concepts such as the Great chain of being in the Western scholastic tradition,[26] again deriving ultimately from Aristotle. Aristotelian system did not classify plants or fungi, due to the lack of microscope at the time,[25] as his ideas were based on arranging the complete world in a single continuum, as per the scala naturae (the Natural Ladder).[24] This, as well, was taken into consideration in the Great chain of being.[24] Advances were made by scholars such as Procopius, Timotheos of Gaza, Demetrios Pepagomenos, and Thomas Aquinas. Medieval thinkers used abstract philosophical and logical categorizations more suited to abstract philosophy than to pragmatic taxonomy.[24]
|
36 |
+
|
37 |
+
During the Renaissance, the Age of Reason, and the Enlightenment, categorizing organisms became more prevalent,[24]
|
38 |
+
and taxonomic works became ambitious enough to replace the ancient texts. This is sometimes credited to the development of sophisticated optical lenses, which allowed the morphology of organisms to be studied in much greater detail. One of the earliest authors to take advantage of this leap in technology was the Italian physician Andrea Cesalpino (1519–1603), who has been called "the first taxonomist".[28] His magnum opus De Plantis came out in 1583, and described more than 1500 plant species.[29][30] Two large plant families that he first recognized are still in use today: the Asteraceae and Brassicaceae.[31] Then in the 17th century John Ray (England, 1627–1705) wrote many important taxonomic works.[25] Arguably his greatest accomplishment was Methodus Plantarum Nova (1682),[32] in which he published details of over 18,000 plant species. At the time, his classifications were perhaps the most complex yet produced by any taxonomist, as he based his taxa on many combined characters. The next major taxonomic works were produced by Joseph Pitton de Tournefort (France, 1656–1708).[33] His work from 1700, Institutiones Rei Herbariae, included more than 9000 species in 698 genera, which directly influenced Linnaeus, as it was the text he used as a young student.[22]
|
39 |
+
|
40 |
+
The Swedish botanist Carl Linnaeus (1707–1778)[26] ushered in a new era of taxonomy. With his major works Systema Naturae 1st Edition in 1735,[34] Species Plantarum in 1753,[35] and Systema Naturae 10th Edition,[36] he revolutionized modern taxonomy. His works implemented a standardized binomial naming system for animal and plant species,[37] which proved to be an elegant solution to a chaotic and disorganized taxonomic literature. He not only introduced the standard of class, order, genus, and species, but also made it possible to identify plants and animals from his book, by using the smaller parts of the flower.[37] Thus the Linnaean system was born, and is still used in essentially the same way today as it was in the 18th century.[37] Currently, plant and animal taxonomists regard Linnaeus' work as the "starting point" for valid names (at 1753 and 1758 respectively).[38] Names published before these dates are referred to as "pre-Linnaean", and not considered valid (with the exception of spiders published in Svenska Spindlar[39]). Even taxonomic names published by Linnaeus himself before these dates are considered pre-Linnaean.[22]
Whereas Linnaeus aimed simply to create readily identifiable taxa, the idea of the Linnaean taxonomy as translating into a sort of dendrogram of the animal and plant kingdoms was formulated toward the end of the 18th century, well before On the Origin of Species was published.[25] Among early works exploring the idea of a transmutation of species were Erasmus Darwin's 1796 Zoönomia and Jean-Baptiste Lamarck's Philosophie Zoologique of 1809.[12] The idea was popularized in the Anglophone world by the speculative but widely read Vestiges of the Natural History of Creation, published anonymously by Robert Chambers in 1844.[40]
With Darwin's theory, a general acceptance quickly appeared that a classification should reflect the Darwinian principle of common descent.[41] Tree of life representations became popular in scientific works, with known fossil groups incorporated. One of the first modern groups tied to fossil ancestors was birds.[42] Using the then newly discovered fossils of Archaeopteryx and Hesperornis, Thomas Henry Huxley pronounced that they had evolved from dinosaurs, a group formally named by Richard Owen in 1842.[43][44] The resulting description, that of dinosaurs "giving rise to" or being "the ancestors of" birds, is the essential hallmark of evolutionary taxonomic thinking. As more and more fossil groups were found and recognized in the late 19th and early 20th centuries, palaeontologists worked to understand the history of animals through the ages by linking together known groups.[45] With the modern evolutionary synthesis of the early 1940s, an essentially modern understanding of the evolution of the major groups was in place. As evolutionary taxonomy is based on Linnaean taxonomic ranks, the two terms are largely interchangeable in modern use.[46]
The cladistic method has emerged since the 1960s.[41] In 1958, Julian Huxley used the term clade.[12] Later, in 1960, Cain and Harrison introduced the term cladistic.[12] The salient feature is arranging taxa in a hierarchical evolutionary tree, ignoring ranks.[41] A taxon is called monophyletic if it includes all the descendants of an ancestral form.[47][48] Groups that have descendant groups removed from them are termed paraphyletic,[47] while groups representing more than one branch from the tree of life are called polyphyletic.[47][48] The International Code of Phylogenetic Nomenclature, or PhyloCode, is intended to regulate the formal naming of clades.[49][50] Linnaean ranks will be optional under the PhyloCode, which is intended to coexist with the current, rank-based codes.[50]
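A minimal Python sketch can make the basic test concrete. The toy tree, taxon names, and function names below are invented for illustration (and the tree is deliberately simplified); this is not part of any phylogenetic code or software package:

# Illustrative sketch: a group of leaf taxa is monophyletic when it equals the
# full leaf set of the smallest clade (most recent common ancestor) containing it.
TREE = {                                 # parent -> children of a toy rooted tree
    "Amniota":     ["Sauropsida", "Mammalia"],
    "Sauropsida":  ["Lepidosauria", "Archosauria"],
    "Archosauria": ["Crocodylia", "Aves"],
}

def leaf_set(node):
    """Return all leaf taxa descended from (or equal to) the given node."""
    children = TREE.get(node, [])
    if not children:
        return {node}
    leaves = set()
    for child in children:
        leaves |= leaf_set(child)
    return leaves

def is_monophyletic(group):
    """True if the group (two or more leaves) is exactly one complete clade."""
    group = set(group)
    containing = [leaf_set(n) for n in TREE if group <= leaf_set(n)]
    return bool(containing) and group == min(containing, key=len)

print(is_monophyletic({"Crocodylia", "Aves"}))           # True: a complete clade (Archosauria)
print(is_monophyletic({"Lepidosauria", "Crocodylia"}))   # False: "reptiles" without birds are not monophyletic

Groups that fail this test are then distinguished as paraphyletic or polyphyletic, as described above.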
Well before Linnaeus, plants and animals were considered separate Kingdoms.[51] Linnaeus used this as the top rank, dividing the physical world into the plant, animal and mineral kingdoms. As advances in microscopy made classification of microorganisms possible, the number of kingdoms increased, five- and six-kingdom systems being the most common.
Domains are a relatively new grouping. First proposed in 1977, Carl Woese's three-domain system was not generally accepted until later.[52] One main characteristic of the three-domain method is the separation of Archaea and Bacteria, previously grouped into the single kingdom Bacteria (a kingdom also sometimes called Monera),[51] with the Eukaryota for all organisms whose cells contain a nucleus.[53] A small number of scientists include a sixth kingdom, Archaea, but do not accept the domain method.[51]
Thomas Cavalier-Smith, who has published extensively on the classification of protists, has recently proposed that the Neomura, the clade that groups together the Archaea and Eucarya, would have evolved from Bacteria, more precisely from Actinobacteria. His 2004 classification treated the archaeobacteria as part of a subkingdom of the kingdom Bacteria, i.e., he rejected the three-domain system entirely.[54] Stefan Luketa in 2012 proposed a five "dominion" system, adding Prionobiota (acellular and without nucleic acid) and Virusobiota (acellular but with nucleic acid) to the traditional three domains.[55]
Partial classifications exist for many individual groups of organisms and are revised and replaced as new information becomes available; however, comprehensive published treatments of most or all life are rarer. Recent examples are those of Adl et al. (2012, 2019),[63][64] which cover eukaryotes only, with an emphasis on protists, and Ruggiero et al. (2015),[65] covering both eukaryotes and prokaryotes to the rank of order, although both exclude fossil representatives.[65] A separate compilation (Ruggiero, 2014)[66] covers extant taxa to the rank of family. Other, database-driven treatments include the Encyclopedia of Life, the Global Biodiversity Information Facility, the NCBI taxonomy database, the Interim Register of Marine and Nonmarine Genera, the Open Tree of Life, and the Catalogue of Life. The Paleobiology Database is a resource for fossils.
Biological taxonomy is a sub-discipline of biology, and is generally practiced by biologists known as "taxonomists", though enthusiastic naturalists are also frequently involved in the publication of new taxa.[67] Because taxonomy aims to describe and organize life, the work conducted by taxonomists is essential for the study of biodiversity and the resulting field of conservation biology.[68][69]
Biological classification is a critical component of the taxonomic process. As a result, it informs the user as to what the relatives of the taxon are hypothesized to be. Biological classification uses taxonomic ranks, including among others (in order from most inclusive to least inclusive): Domain, Kingdom, Phylum, Class, Order, Family, Genus, Species, and Strain.[70][note 1]
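As an illustrative sketch only (the variable and function names are invented, not drawn from any code of nomenclature), a ranked classification can be represented as an ordered mapping from the most inclusive to the least inclusive rank, here using the commonly cited classification of the human species:

from collections import OrderedDict

# Illustrative sketch: ranks stored from most inclusive to least inclusive.
human = OrderedDict([
    ("Domain",  "Eukaryota"),
    ("Kingdom", "Animalia"),
    ("Phylum",  "Chordata"),
    ("Class",   "Mammalia"),
    ("Order",   "Primates"),
    ("Family",  "Hominidae"),
    ("Genus",   "Homo"),
    ("Species", "Homo sapiens"),
])

def lineage(classification, down_to="Species"):
    """Render the ranks in order, stopping at the requested rank."""
    parts = []
    for rank, name in classification.items():
        parts.append(rank + ": " + name)
        if rank == down_to:
            break
    return " > ".join(parts)

print(lineage(human, down_to="Family"))
# Domain: Eukaryota > Kingdom: Animalia > Phylum: Chordata > Class: Mammalia > Order: Primates > Family: Hominidae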
The "definition" of a taxon is encapsulated by its description or its diagnosis or by both combined. There are no set rules governing the definition of taxa, but the naming and publication of new taxa is governed by sets of rules.[8] In zoology, the nomenclature for the more commonly used ranks (superfamily to subspecies), is regulated by the International Code of Zoological Nomenclature (ICZN Code).[71] In the fields of phycology, mycology, and botany, the naming of taxa is governed by the International Code of Nomenclature for algae, fungi, and plants (ICN).[72]
The initial description of a taxon involves five main requirements:[73]
However, much more information is often included, such as the geographic range of the taxon, ecological notes, chemistry, behavior, etc. How researchers arrive at their taxa varies: depending on the available data and resources, methods range from simple quantitative or qualitative comparisons of striking features to elaborate computer analyses of large amounts of DNA sequence data.[74]
An "authority" may be placed after a scientific name.[75] The authority is the name of the scientist or scientists who first validly published the name.[75] For example, in 1758 Linnaeus gave the Asian elephant the scientific name Elephas maximus, so the name is sometimes written as "Elephas maximus Linnaeus, 1758".[76] The names of authors are frequently abbreviated: the abbreviation L., for Linnaeus, is commonly used. In botany, there is, in fact, a regulated list of standard abbreviations (see list of botanists by author abbreviation).[77] The system for assigning authorities differs slightly between botany and zoology.[8] However, it is standard that if the genus of a species has been changed since the original description, the original authority's name is placed in parentheses.[78]
In phenetics, also known as taximetrics, or numerical taxonomy, organisms are classified based on overall similarity, regardless of their phylogeny or evolutionary relationships.[12] It results in a measure of evolutionary "distance" between taxa. Phenetic methods have become relatively rare in modern times, largely superseded by cladistic analyses, as phenetic methods do not distinguish common ancestral (or plesiomorphic) traits from new common (or apomorphic) traits.[79] However, certain phenetic methods, such as neighbor joining, have found their way into cladistics, as a reasonable approximation of phylogeny when more advanced methods (such as Bayesian inference) are too computationally expensive.[80]
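The "overall similarity" that phenetic methods operate on is usually summarized as a pairwise distance matrix, which a clustering step such as neighbor joining then consumes. The following toy sketch (invented taxa and character data, simple mismatch counts) shows the kind of matrix involved:

from itertools import combinations

# Toy character data (invented): each taxon scored for the same aligned positions.
CHARACTERS = {
    "TaxonA": "ACGTACGT",
    "TaxonB": "ACGTACGA",
    "TaxonC": "ACGAACGA",
    "TaxonD": "TCGAACGA",
}

def mismatch_distance(a, b):
    """Count positions at which two equal-length character strings differ."""
    return sum(x != y for x, y in zip(a, b))

def distance_matrix(data):
    """Return the distance for every unordered pair of taxa."""
    return {
        (t1, t2): mismatch_distance(data[t1], data[t2])
        for t1, t2 in combinations(sorted(data), 2)
    }

for pair, d in sorted(distance_matrix(CHARACTERS).items()):
    print(pair, d)
# e.g. ('TaxonA', 'TaxonB') 1, ('TaxonA', 'TaxonC') 2, ('TaxonA', 'TaxonD') 3;
# a matrix of this kind is the input to neighbor joining or UPGMA.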
Modern taxonomy uses database technologies to search and catalogue classifications and their documentation.[81] While there is no commonly used database, there are comprehensive databases such as the Catalogue of Life, which attempts to list every documented species.[82] The catalogue listed 1.64 million species for all kingdoms as of April 2016, claiming coverage of more than three quarters of the estimated species known to modern science.[83]
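Such databases are typically exposed through public web services. The sketch below is illustrative only: the endpoint path and response fields are assumptions about GBIF's name-matching service rather than verified documentation, and the call requires network access:

import json
from urllib.parse import urlencode
from urllib.request import urlopen

def match_name(scientific_name):
    """Look up a scientific name against an assumed public species-match service."""
    # Assumed endpoint and fields; check the service's documentation before relying on it.
    url = "https://api.gbif.org/v1/species/match?" + urlencode({"name": scientific_name})
    with urlopen(url) as response:
        record = json.load(response)
    return {key: record.get(key) for key in ("scientificName", "rank", "status", "matchType")}

print(match_name("Elephas maximus"))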
en/1168.html.txt
ADDED
Claudius (/ˈklɔːdiəs/ KLAW-dee-əs; Tiberius Claudius Caesar Augustus Germanicus; 1 August 10 BC – 13 October AD 54) was Roman emperor from AD 41 to 54. Born to Drusus and Antonia Minor at Lugdunum in Roman Gaul, where his father was stationed as a military legate, he was the first Roman emperor to be born outside Italy. Nonetheless, Claudius was an Italic of Sabine origins[2] and a member of the Julio-Claudian dynasty. Because he was afflicted with a limp and slight deafness due to sickness at a young age, his family ostracized him and excluded him from public office until his consulship, shared with his nephew Caligula in 37.
Claudius' infirmity probably saved him from the fate of many other nobles during the purges of Tiberius' and Caligula's reigns; potential enemies did not see him as a serious threat. His survival led to his being declared emperor by the Praetorian Guard after Caligula's assassination, at which point he was the last male of his family. Despite his lack of experience, Claudius proved to be an able and efficient administrator. He expanded the imperial bureaucracy to include freedmen, and helped to restore the empire's finances after the excess of Caligula's reign. He was also an ambitious builder, constructing many new roads, aqueducts, and canals across the Empire. During his reign the Empire started its successful conquest of Britain.
Having a personal interest in law, he presided at public trials, and issued up to twenty edicts a day. He was seen as vulnerable throughout his reign, particularly by elements of the nobility. Claudius was constantly forced to shore up his position; this resulted in the deaths of many senators. These events damaged his reputation among the ancient writers, though more recent historians have revised this opinion. Many authors contend that he was murdered by his own wife. After his death at the age of 63, his grand-nephew and legally adopted step-son Nero succeeded him as emperor. His 13-year reign (slightly longer than Nero's) would not be surpassed by any successors until that of Domitian, who reigned for 15 years.
He was a descendant of the Octavii Rufi (through Gaius Octavius), Julii Caesares (through Julia Minor and Julia (Mark Antony's mother)), and the Claudii Nerones (through Nero Claudius Drusus). He was a step-grandson (through his father Drusus) and great-nephew (through his mother Antonia Minor) of Augustus. Tiberius was his father's brother. Through his brother Germanicus, Claudius was an uncle of Caligula and a great uncle of Nero. Through his mother, Antonia Minor, he was also a grandson of Mark Antony and Octavia Minor.
Claudius was born on 1 August 10 BC at Lugdunum (modern Lyon, France). He had two older siblings, Germanicus and Livilla. His mother, Antonia, may have had two other children who died young.
His maternal grandparents were Mark Antony and Octavia Minor, Augustus' sister, and he was therefore the great-great grandnephew of Gaius Julius Caesar. His paternal grandparents were Livia, Augustus' third wife, and Tiberius Claudius Nero. During his reign, Claudius revived the rumor that his father Drusus was actually the illegitimate son of Augustus, to give the appearance that Augustus was Claudius' paternal grandfather.
In 9 BC, his father Drusus unexpectedly died on campaign in Germania, possibly from illness. Claudius was then left to be raised by his mother, who never remarried. When Claudius' disability became evident, the relationship with his family turned sour. Antonia referred to him as a monster, and used him as a standard for stupidity. She seems to have passed her son off to his grandmother Livia for a number of years.[3]
Livia was a little kinder, but nevertheless often sent him short, angry letters of reproof. He was put under the care of a "former mule-driver"[4] to keep him disciplined, under the logic that his condition was due to laziness and a lack of will-power. However, by the time he reached his teenage years his symptoms apparently waned and his family took some notice of his scholarly interests.[5]
In AD 7, Livy was hired to tutor him in history, with the assistance of Sulpicius Flavus. He spent a lot of his time with the latter and the philosopher Athenodorus. Augustus, according to a letter, was surprised at the clarity of Claudius' oratory.[5] Expectations about his future began to increase.
His work as a budding historian damaged his prospects for advancement in public life. According to Vincent Scramuzza and others, Claudius began work on a history of the Civil Wars that was either too truthful or too critical of Octavian[6]—then reigning as Augustus Caesar. In either case, it was far too early for such an account, and may have only served to remind Augustus that Claudius was Antony's descendant. His mother and grandmother quickly put a stop to it, and this may have convinced them that Claudius was not fit for public office. He could not be trusted to toe the existing party line.[7]
When he returned to the narrative later in life, Claudius skipped over the wars of the Second Triumvirate altogether. But the damage was done, and his family pushed him into the background. When the Arch of Pavia was erected to honor the Imperial clan in 8 AD, Claudius' name (now Tiberius Claudius Nero Germanicus after his elevation to paterfamilias of Claudii Nerones on the adoption of his brother) was inscribed on the edge—past the deceased princes, Gaius and Lucius, and Germanicus' children. There is some speculation that the inscription was added by Claudius himself decades later, and that he originally did not appear at all.[7]
When Augustus died in AD 14, Claudius—then aged 23—appealed to his uncle Tiberius to allow him to begin the cursus honorum. Tiberius, the new Emperor, responded by granting Claudius consular ornaments. Claudius requested office once more and was snubbed. Since the new Emperor was no more generous than the old, Claudius gave up hope of public office and retired to a scholarly, private life.
Despite the disdain of the Imperial family, it seems that from very early on the general public respected Claudius. At Augustus' death, the equites, or knights, chose Claudius to head their delegation. When his house burned down, the Senate demanded it be rebuilt at public expense. They also requested that Claudius be allowed to debate in the Senate. Tiberius turned down both motions, but the sentiment remained.
During the period immediately after the death of Tiberius' son, Drusus, Claudius was pushed by some quarters as a potential heir. This again suggests the political nature of his exclusion from public life. However, as this was also the period during which the power and terror of the commander of the Praetorian Guard, Sejanus, was at its peak, Claudius chose to downplay this possibility.
After the death of Tiberius, the new emperor Caligula (the son of Claudius' brother Germanicus) recognized Claudius to be of some use. He appointed Claudius his co-consul in 37 in order to emphasize the memory of Caligula's deceased father Germanicus. Despite this, Caligula relentlessly tormented his uncle: playing practical jokes, charging him enormous sums of money, humiliating him before the Senate, and the like. According to Cassius Dio Claudius became very sickly and thin by the end of Caligula's reign, most likely due to stress.[8] A possible surviving portrait of Claudius from this period may support this.
On 24 January 41, Caligula was assassinated in a broad-based conspiracy involving the Praetorian commander Cassius Chaerea and several senators. There is no evidence that Claudius had a direct hand in the assassination, although it has been argued that he knew about the plot—particularly since he left the scene of the crime shortly before his nephew was murdered.[9] However, after the deaths of Caligula's wife and daughter, it became apparent that Cassius intended to go beyond the terms of the conspiracy and wipe out the Imperial family.[10]
In the chaos following the murder, Claudius witnessed the German guard cut down several uninvolved noblemen, including many of his friends. He fled to the palace to hide. According to tradition, a Praetorian named Gratus found him hiding behind a curtain and suddenly declared him princeps.[10] A section of the guard may have planned in advance to seek out Claudius, perhaps with his approval. They reassured him that they were not one of the battalions looking for revenge. He was spirited away to the Praetorian camp and put under their protection.
The Senate quickly met and began debating a change of government, but this eventually devolved into an argument over which of them would be the new princeps. When they heard of the Praetorians' claim, they demanded that Claudius be delivered to them for approval, but he refused, sensing the danger that would come with complying. Some historians, particularly Josephus,[11] claim that Claudius was directed in his actions by the Judaean King Herod Agrippa. However, an earlier version of events by the same ancient author downplays Agrippa's role[12] so it remains uncertain. Eventually the Senate was forced to give in and, in return, Claudius pardoned nearly all the assassins.
Claudius took several steps to legitimize his rule against potential usurpers, most of them emphasizing his place within the Julio-Claudian family. He adopted the name "Caesar" as a cognomen, as the name still carried great weight with the populace. In order to do so, he dropped the cognomen "Nero" which he had adopted as paterfamilias of the Claudii Nerones when his brother Germanicus was adopted out.[13] As Pharaoh of Egypt, Claudius adopted the royal titulary Tiberios Klaudios, Autokrator Heqaheqau Meryasetptah, Kanakht Djediakhshuemakhet ("Tiberius Claudius, Emperor and ruler of rulers, beloved of Isis and Ptah, the strong bull of the stable moon on the horizon").[14]
While Claudius had never been formally adopted either by Augustus or his successors, he was nevertheless the grandson of Augustus' sister Octavia, and so he felt that he had the right of family. He also adopted the name "Augustus" as the two previous emperors had done at their accessions. He kept the honorific "Germanicus" to display the connection with his heroic brother. He deified his paternal grandmother Livia to highlight her position as wife of the divine Augustus. Claudius frequently used the term "filius Drusi" (son of Drusus) in his titles, in order to remind the people of his legendary father and lay claim to his reputation.[13]
Since Claudius was the first Emperor proclaimed on the initiative of the Praetorian Guard instead of the Senate, his repute suffered at the hands of commentators (such as Seneca). Moreover, he was the first Emperor who resorted to bribery as a means to secure army loyalty and rewarded the soldiers of the Praetorian Guard that had elevated him with 15,000 sesterces.[15] Tiberius and Augustus had both left gifts to the army and guard in their wills, and upon Caligula's death the same would have been expected, even if no will existed. Claudius remained grateful to the guard, however, issuing coins with tributes to the Praetorians in the early part of his reign.[16]
Pliny the Elder noted, according to the 1938 Loeb Classical Library translation by Harris Rackham, "... many people do not allow any gems in a signet-ring, and seal with the gold itself; this was a fashion invented when Claudius Cæsar was emperor."[17]
Claudius restored the status of the peaceful Imperial Roman provinces of Macedonia and Achaea as senatorial provinces.[18]
Under Claudius, the Empire underwent its first major expansion since the reign of Augustus. The provinces of Thrace, Noricum, Lycia, and Judea were annexed (or put under direct rule) under various circumstances during his term. The annexation of Mauretania, begun under Caligula, was completed after the defeat of rebel forces, and the official division of the former client kingdom into two Imperial provinces.[19] The most far-reaching conquest was that of Britannia.[20]
In 43, Claudius sent Aulus Plautius with four legions to Britain (Britannia) after an appeal from an ousted tribal ally. Britain was an attractive target for Rome because of its material wealth – particularly mines and slaves. It was also a haven for Gallic rebels and the like, and so could not be left alone much longer. Claudius himself traveled to the island after the completion of initial offensives, bringing with him reinforcements and elephants. The latter must have made an impression on the Britons when they were displayed in the large tribal centre of Camulodunum, modern-day Colchester. The Roman colonia of Colonia Claudia Victricensis was established at Camulodunum as the capital of the newly created province of Britannia,[22] where a large temple was dedicated in his honour.[22]
He left after 16 days, but remained in the provinces for some time. The Senate granted him a triumph for his efforts. Only members of the Imperial family were allowed such honours, but Claudius subsequently lifted this restriction for some of his conquering generals. He was granted the honorific "Britannicus" but only accepted it on behalf of his son, never using the title himself. When the British general Caractacus was captured in 50, Claudius granted him clemency. Caractacus lived out his days on land provided by the Roman state, an unusual end for an enemy commander.
Claudius conducted a census in 48 that found 5,984,072 Roman citizens[23] (adult males with Roman citizenship; women, children, slaves, and free adult males without Roman citizenship were not counted), an increase of around a million since the census conducted at Augustus' death. He had helped increase this number through the foundation of Roman colonies that were granted blanket citizenship. These colonies were often made out of existing communities, especially those with elites who could rally the populace to the Roman cause. Several colonies were placed in new provinces or on the border of the Empire to secure Roman holdings as quickly as possible.
Claudius personally judged many of the legal cases tried during his reign. Ancient historians have many complaints about this, stating that his judgments were variable and sometimes did not follow the law.[24] He was also easily swayed. Nevertheless, Claudius paid detailed attention to the operation of the judicial system.[25]
He extended the summer court session, as well as the winter term, by shortening the traditional breaks. Claudius also made a law requiring plaintiffs to remain in the city while their cases were pending, as defendants had previously been required to do. These measures had the effect of clearing out the docket. The minimum age for jurors was also raised to 25 in order to ensure a more experienced jury pool.[25]
Claudius also settled disputes in the provinces. He freed the island of Rhodes from Roman rule for their good faith and exempted Ilium (Troy) from taxes. Early in his reign, the Greeks and Jews of Alexandria sent him two embassies at once after riots broke out between the two communities. This resulted in the famous "Letter to the Alexandrians", which reaffirmed Jewish rights in the city but also forbade them to move in more families en masse. According to Josephus, he then reaffirmed the rights and freedoms of all the Jews in the Empire.[26]
One of Claudius's investigators discovered that many old Roman citizens based in the city of Tridentum (modern Trento) were not in fact citizens.[27] The Emperor issued a declaration, contained in the Tabula clesiana, that they would be considered to hold citizenship from then on, since to strip them of their status would cause major problems. However, in individual cases, Claudius punished false assumption of citizenship harshly, making it a capital offense. Similarly, any freedmen found to be laying false claim to membership of the Roman equestrian order were sold back into slavery.[28]
Numerous edicts were issued throughout Claudius' reign. These were on a number of topics, everything from medical advice to moral judgments. A famous medical example is one promoting yew juice as a cure for snakebite.[29] Suetonius wrote that he is even said to have thought of an edict allowing public flatulence for good health.[30] One of the more famous edicts concerned the status of sick slaves. Masters had been abandoning ailing slaves at the temple of Aesculapius on Tiber Island to die instead of providing them with medical assistance and care, and then reclaiming them if they lived. Claudius ruled that slaves who were thus abandoned and recovered after such treatment would be free. Furthermore, masters who chose to kill slaves rather than take care of them were liable to be charged with murder.[31]
Claudius embarked on many public works throughout his reign, both in the capital and in the provinces. He built two aqueducts, the Aqua Claudia, begun by Caligula, and the Anio Novus. These entered the city in 52 and met at the Porta Maggiore. He also restored a third, the Aqua Virgo.
He paid special attention to transportation. Throughout Italy and the provinces he built roads and canals. Among these was a large canal leading from the Rhine to the sea, as well as a road from Italy to Germany – both begun by his father, Drusus. Closer to Rome, he built a navigable canal on the Tiber, leading to Portus, his new port just north of Ostia. This port was constructed in a semicircle with two moles and a lighthouse at its mouth. The construction also had the effect of reducing flooding in Rome.
The port at Ostia was part of Claudius' solution to the constant grain shortages that occurred in winter, after the Roman shipping season. The other part of his solution was to insure the ships of grain merchants who were willing to risk travelling to Egypt in the off-season. He also granted their sailors special privileges, including citizenship and exemption from the Lex Papia Poppaea, a law that regulated marriage. In addition, he repealed the taxes that Caligula had instituted on food, and further reduced taxes on communities suffering drought or famine.
The last part of Claudius' plan was to increase the amount of arable land in Italy. This was to be achieved by draining the Fucine lake, which would have the added benefit of making the nearby river navigable year-round.[32] A tunnel was dug through the lake bed, but the plan was a failure. The tunnel was crooked and not large enough to carry the water, which caused it to back up when opened. The resultant flood washed out a large gladiatorial exhibition held to commemorate the opening, causing Claudius to run for his life along with the other spectators. The draining of the lake continued to present a problem well into the Middle Ages. It was finally achieved by Prince Torlonia in the 19th century, producing over 160,000 acres (650 km2) of new arable land.[33] Torlonia expanded the Claudian tunnel to three times its original size.
Because of the circumstances of his accession, Claudius took great pains to please the Senate. During regular sessions, the Emperor sat among the Senate body, speaking in turn. When introducing a law, he sat on a bench between the consuls in his position as holder of the power of Tribune (the Emperor could not officially serve as a Tribune of the Plebes as he was a Patrician, but it was a power taken by previous rulers). He refused to accept all his predecessors' titles (including Imperator) at the beginning of his reign, preferring to earn them in due course. He allowed the Senate to issue its own bronze coinage for the first time since Augustus. He also put the Imperial provinces of Macedonia and Achaea back under Senate control.
Claudius set about remodeling the Senate into a more efficient, representative body. He chided the senators about their reluctance to debate bills introduced by himself, as noted in the fragments of a surviving speech:
If you accept these proposals, Conscript Fathers, say so at once and simply, in accordance with your convictions. If you do not accept them, find alternatives, but do so here and now; or if you wish to take time for consideration, take it, provided you do not forget that you must be ready to pronounce your opinion whenever you may be summoned to meet. It ill befits the dignity of the Senate that the consul designate should repeat the phrases of the consuls word for word as his opinion, and that every one else should merely say 'I approve', and that then, after leaving, the assembly should announce 'We debated'.[34]
In 47 he assumed the office of censor with Lucius Vitellius, which had been allowed to lapse for some time. He struck the names of many senators and equites who no longer met qualifications, but showed respect by allowing them to resign in advance. At the same time, he sought to admit eligible men from the provinces. The Lyon Tablet preserves his speech on the admittance of Gallic senators, in which he addresses the Senate with reverence but also with criticism for their disdain of these men. He even jokes about how the Senate had admitted members from beyond Gallia Narbonensis (Lyons, France), i.e. himself. He also increased the number of Patricians by adding new families to the dwindling number of noble lines. Here he followed the precedent of Lucius Junius Brutus and Julius Caesar.
Nevertheless, many in the Senate remained hostile to Claudius, and many plots were made on his life. This hostility carried over into the historical accounts. As a result, Claudius reduced the Senate's power for the sake of efficiency. The administration of Ostia was turned over to an Imperial Procurator after construction of the port. Administration of many of the empire's financial concerns was turned over to Imperial appointees and freedmen. This led to further resentment and suggestions that these same freedmen were ruling the Emperor.
Several coup attempts were made during Claudius' reign, resulting in the deaths of many senators. Appius Silanus was executed early in Claudius' reign under questionable circumstances.[31] Shortly after, a large rebellion was undertaken by the Senator Vinicianus and Scribonianus, the governor of Dalmatia, and gained quite a few senatorial supporters. It ultimately failed because of the reluctance of Scribonianus' troops, which led to the suicide of the main conspirators.
Many other senators tried different conspiracies and were condemned. Claudius' son-in-law Pompeius Magnus was executed for his part in a conspiracy with his father Crassus Frugi. Another plot involved the consulars Lusius Saturninus, Cornelius Lupus, and Pompeius Pedo.
In 46, Asinius Gallus, the grandson of Asinius Pollio, and Titus Statilius Taurus Corvinus were exiled for a plot hatched with several of Claudius' own freedmen. Valerius Asiaticus was executed without public trial for unknown reasons. The ancient sources say the charge was adultery, and that Claudius was tricked into issuing the punishment. However, Claudius singles out Asiaticus for special damnation in his speech on the Gauls, which dates over a year later, suggesting that the charge must have been much more serious.
Asiaticus had been a claimant to the throne in the chaos following Caligula's death and a co-consul with the Titus Statilius Taurus Corvinus mentioned above. Most of these conspiracies took place before Claudius' term as Censor, and may have induced him to review the Senatorial rolls. The conspiracy of Gaius Silius in the year after his Censorship, 48, is detailed in Book 11 of Tacitus' Annals. This section of Tacitus' history narrates the alleged conspiracy of Claudius' third wife, Messalina. Suetonius states that a total of 35 senators and 300 knights were executed for offenses during Claudius' reign.[35] Needless to say, the responses to these conspiracies could not have helped Senate–emperor relations.
Claudius was hardly the first emperor to use freedmen to help with the day-to-day running of the Empire. He was, however, forced to increase their role as the powers of the princeps became more centralized and the burden larger. This was partly due to the ongoing hostility of the Senate, as mentioned above, but also due to his respect for the senators. Claudius did not want free-born magistrates to have to serve under him, as if they were not peers.
The secretariat was divided into bureaus, with each being placed under the leadership of one freedman. Narcissus was the secretary of correspondence. Pallas became the secretary of the treasury. Callistus became secretary of justice. There was a fourth bureau for miscellaneous issues, which was put under Polybius until his execution for treason. The freedmen could also officially speak for the Emperor, as when Narcissus addressed the troops in Claudius' stead before the conquest of Britain.[36]
Since these were important positions, the senators were aghast at their being placed in the hands of former slaves. If freedmen had total control of money, letters, and law, it seemed it would not be hard for them to manipulate the Emperor. This is exactly the accusation put forth by the ancient sources. However, these same sources admit that the freedmen were loyal to Claudius.[36]
He was similarly appreciative of them and gave them due credit for policies where he had used their advice. However, if they showed treasonous inclinations, the Emperor did punish them with just force, as in the case of Polybius and Pallas' brother, Felix. There is no evidence that the character of Claudius' policies and edicts changed with the rise and fall of the various freedmen, suggesting that he was firmly in control throughout.
Regardless of the extent of their political power, the freedmen did manage to amass wealth through their positions. Pliny the Elder notes that several of them were richer than Crassus, the richest man of the Republican era.[37]
Claudius, as the author of a treatise on Augustus' religious reforms, felt himself in a good position to institute some of his own. He had strong opinions about the proper form for state religion. He refused the request of Alexandrian Greeks to dedicate a temple to his divinity, saying that only gods may choose new gods. He restored lost days to festivals and got rid of many extraneous celebrations added by Caligula. He re-instituted old observances and archaic language.
Claudius was concerned with the spread of eastern mysteries within the city and searched for more Roman replacements. He emphasized the Eleusinian mysteries which had been practiced by so many during the Republic. He expelled foreign astrologers, and at the same time rehabilitated the old Roman soothsayers (known as haruspices) as a replacement. He was especially hard on Druidism, because of its incompatibility with the Roman state religion and its proselytizing activities.[38]
Claudius forbade proselytizing in any religion, even in those regions where he allowed natives to worship freely.
It is also reported that at one time he expelled the Jews from Rome, probably because the Jews within the city caused continuous disturbances at the instigation of Chrestus.[a]
According to Suetonius, Claudius was extraordinarily fond of games. He is said to have risen with the crowd after gladiatorial matches and given unrestrained praise to the fighters.[39] Claudius also presided over many new and original events. Soon after coming into power, Claudius instituted games to be held in honor of his father on the latter's birthday.[40] Annual games were also held in honour of his accession, and took place at the Praetorian camp where Claudius had first been proclaimed Emperor.[41]
Claudius organised a performance of the Secular Games, marking the 800th anniversary of the founding of Rome. Augustus had performed the same games less than a century prior. Augustus' excuse was that the interval for the games was 110 years, not 100, but his date actually did not qualify under either reasoning.[41] Claudius also presented naval battles to mark the attempted draining of the Fucine Lake, as well as many other public games and shows.
At Ostia, in front of a crowd of spectators, Claudius fought a killer whale which was trapped in the harbour. The event was witnessed by Pliny the Elder:
A killer whale was actually seen in the harbour of Ostia, locked in combat with the emperor Claudius. She had come when he was completing the construction of the harbour, drawn there by the wreck of a ship bringing leather hides from Gaul, and feeding there over a number of days, had made a furrow in the shallows: the waves had raised up such a mound of sand that she couldn't turn around at all, and while she was pursuing her banquet as the waves moved it shorewards, her back stuck up out of the water like the overturned keel of a boat. The Emperor ordered that a large array of nets be stretched across the mouths of the harbour, and setting out in person with the Praetorian cohorts gave a show to the Roman people, soldiers showering lances from attacking ships, one of which I saw swamped by the beast's waterspout and sunk.—"Historia Naturalis" IX.14–15.[42]
Claudius also restored and adorned many public venues in Rome. At the Circus Maximus, the turning posts and starting stalls were replaced in marble and embellished, and an embankment was probably added to prevent flooding of the track.[43] Claudius also reinforced or extended the seating rules that reserved front seating at the Circus for senators.[41] Claudius rebuilt Pompey's Theatre after it had been destroyed by fire, organising special fights at the re-dedication which he observed from a special platform in the orchestra box.[41]
Suetonius and the other ancient authors accused Claudius of being dominated by women and wives, and of being a womanizer.
Claudius married four times, after two failed betrothals. The first betrothal was to his distant cousin Aemilia Lepida, but was broken for political reasons. The second was to Livia Medullina Camilla, which ended with Medullina's sudden death on their wedding day.
Plautia Urgulanilla was the granddaughter of Livia's confidant Urgulania. During their marriage she gave birth to a son, Claudius Drusus. Drusus died of asphyxiation in his early teens, shortly after becoming engaged to Junilla, the daughter of Sejanus.
Claudius later divorced Urgulanilla for adultery and on suspicion of murdering her sister-in-law Apronia. When Urgulanilla gave birth after the divorce, Claudius repudiated the baby girl, Claudia, as the father was allegedly one of his own freedmen. This action made him later the target of criticism by his enemies.
Soon after (possibly in 28), Claudius married Aelia Paetina, a relative of Sejanus, if not Sejanus's adoptive sister. During their marriage, Claudius and Paetina had a daughter, Claudia Antonia. He later divorced her after the marriage became a political liability, although Leon (1948) suggests it may have been due to emotional and mental abuse by Paetina.
Some years after divorcing Aelia Paetina, in 38 or early 39, Claudius married Valeria Messalina, who was his first cousin once removed and closely allied with Caligula's circle. Shortly thereafter, she gave birth to a daughter, Claudia Octavia. A son, first named Tiberius Claudius Germanicus, and later known as Britannicus, was born just after Claudius' accession.
This marriage ended in tragedy. The ancient historians allege that Messalina was a nymphomaniac who was regularly unfaithful to Claudius—Tacitus states she went so far as to compete with a prostitute to see who could have the most sexual partners in a night[44]—and manipulated his policies in order to amass wealth. In 48, Messalina married her lover Gaius Silius in a public ceremony while Claudius was at Ostia.
Sources disagree as to whether or not she divorced the Emperor first, and whether the intention was to usurp the throne. Under Roman law, the spouse needed to be informed that he or she had been divorced before a new marriage could take place; the sources state that Claudius was in total ignorance until after the marriage.[45] Scramuzza, in his biography, suggests that Silius may have convinced Messalina that Claudius was doomed, and the union was her only hope of retaining rank and protecting her children.[46][47][48] The historian Tacitus suggests that Claudius's ongoing term as Censor may have prevented him from noticing the affair before it reached such a critical point.[49] Whatever the case, the result was the execution of Silius, Messalina, and most of her circle.[50]
Claudius did marry once more. The ancient sources tell that his freedmen put forward three candidates, Caligula's third wife Lollia Paulina, Claudius's divorced second wife Aelia Paetina and Claudius's niece Agrippina the Younger. According to Suetonius, Agrippina won out through her feminine wiles.[51]
The truth is probably more political. The attempted coup d'état by Silius and Messalina had probably made Claudius realize the weakness of his position as a member of the Claudian but not the Julian family. This weakness was compounded by the fact that he did not yet have an obvious adult heir, Britannicus being just a boy.[52]
Agrippina was one of the few remaining descendants of Augustus, and her son Lucius Domitius Ahenobarbus (the future Emperor Nero) was one of the last males of the Imperial family. Coup attempts could rally around the pair and Agrippina was already showing such ambition. It has been suggested that the Senate may have pushed for the marriage, to end the feud between the Julian and Claudian branches.[52] This feud dated back to Agrippina's mother's actions against Tiberius after the death of her husband Germanicus (Claudius's brother), actions which Tiberius had gladly punished. In any case, Claudius accepted Agrippina and later adopted the newly mature Nero as his son.
Nero was married to Claudius' daughter Octavia, made joint heir with the underage Britannicus, and promoted; Augustus had similarly named his grandson Postumus Agrippa and his stepson Tiberius as joint heirs,[53] and Tiberius had named Caligula joint heir with his grandson Tiberius Gemellus. Adoption of adults or near adults was an old tradition in Rome, when a suitable natural adult heir was unavailable as was the case during Britannicus' minority. Claudius may have previously looked to adopt one of his sons-in-law to protect his own reign.[54]
Faustus Cornelius Sulla Felix, who was married to Claudius's daughter Claudia Antonia, was only descended from Octavia and Antony on one side – not close enough to the Imperial family to prevent doubts (although that did not stop others from making him the object of a coup attempt against Nero a few years later). Besides which, he was the half-brother of Valeria Messalina and at this time those wounds were still fresh. Nero was more popular with the general public as the grandson of Germanicus and the direct descendant of Augustus.
The historian Suetonius describes the physical manifestations of Claudius' affliction in relatively good detail.[55] His knees were weak and gave way under him and his head shook. He stammered and his speech was confused. He slobbered and his nose ran when he was excited. The Stoic Seneca states in his Apocolocyntosis that Claudius' voice belonged to no land animal, and that his hands were weak as well.[56]
However, he showed no physical deformity, as Suetonius notes that when calm and seated he was a tall, well-built figure of dignitas.[55] When angered or stressed, his symptoms became worse. Historians agree that this condition improved upon his accession to the throne.[57] Claudius himself claimed that he had exaggerated his ailments to save his life.[58]
Modern assessments of his health have changed several times in the past century. Prior to World War II, infantile paralysis (or polio) was widely accepted as the cause. This is the diagnosis used in Robert Graves' Claudius novels, first published in the 1930s. Polio does not explain many of the described symptoms, however, and a more recent theory implicates cerebral palsy as the cause, as outlined by Ernestine Leon.[59] Tourette syndrome has also been considered a possibility.[60][61]
As a person, ancient historians described Claudius as generous and lowbrow, a man who sometimes lunched with the plebeians.[62][63] They also paint him as bloodthirsty and cruel, overly fond of gladiatorial combat and executions, and very quick to anger; Claudius himself acknowledged the latter trait, and apologized publicly for his temper.[64][65] According to the ancient historians he was also overly trusting, and easily manipulated by his wives and freedmen.[35][66] But at the same time they portray him as paranoid and apathetic, dull and easily confused.[67][68]
Claudius' extant works present a different view, painting a picture of an intelligent, scholarly, well-read, and conscientious administrator with an eye to detail and justice. Thus, Claudius becomes an enigma. Since the discovery of his "Letter to the Alexandrians" in the last century, much work has been done to rehabilitate Claudius and determine where the truth lies.
Claudius wrote copiously throughout his life. Arnaldo Momigliano states that during the reign of Tiberius – which covers the peak of Claudius' literary career – it became impolitic to speak of republican Rome. The trend among the young historians was to either write about the new empire or obscure antiquarian subjects. Claudius was the rare scholar who covered both.[69]
Besides the history of Augustus' reign that caused him so much grief, his major works included Tyrrhenica, a twenty-book Etruscan history, and Carchedonica, an eight-volume history of Carthage,[70] as well as an Etruscan dictionary. He also wrote a book on dice-playing. Despite the general avoidance of the Republican era, he penned a defense of Cicero against the charges of Asinius Gallus. Modern historians have used this to determine the nature of his politics and of the aborted chapters of his civil war history.
He proposed a reform of the Latin alphabet by the addition of three new letters. He officially instituted the change during his censorship but they did not survive his reign. Claudius also tried to revive the old custom of putting dots between successive words (Classical Latin was written with no spacing). Finally, he wrote an eight-volume autobiography that Suetonius describes as lacking in taste.[71] Since Claudius (like most of the members of his dynasty) harshly criticized his predecessors and relatives in surviving speeches,[72] it is not hard to imagine the nature of Suetonius' charge.
None of the works survive but live on as sources for the surviving histories of the Julio-Claudian dynasty. Suetonius quotes Claudius' autobiography once and must have used it as a source numerous times. Tacitus uses Claudius' arguments for the orthographical innovations mentioned above and may have used him for some of the more antiquarian passages in his annals. Claudius is the source for numerous passages of Pliny's Natural History.[73]
The influence of historical study on Claudius is obvious. In his speech on Gallic senators, he uses a version of the founding of Rome identical to that of Livy, his tutor in adolescence. The speech is meticulous in details, a common mark of all his extant works, and he goes into long digressions on related matters. This indicates a deep knowledge of a variety of historical subjects that he could not help but share. Many of the public works instituted in his reign were based on plans first suggested by Julius Caesar. Levick believes this emulation of Caesar may have spread to all aspects of his policies.[74]
His censorship seems to have been based on those of his ancestors, particularly Appius Claudius Caecus, and he used the office to put into place many policies based on those of Republican times. This is when many of his religious reforms took effect, and his building efforts greatly increased during his tenure. In fact, his assumption of the office of Censor may have been motivated by a desire to see his academic labors bear fruit. For example, he believed (as most Romans did) that Caecus had used the censorship to introduce the letter "R"[75] and so used his own term to introduce his new letters.
The consensus of ancient historians was that Claudius was murdered by poison—possibly contained in mushrooms or on a feather—and died in the early hours of 13 October 54.[76]
Nearly all implicate his final wife, Agrippina, as the instigator. Agrippina and Claudius had become more combative in the months leading up to his death. This carried on to the point where Claudius openly lamented his bad wives, and began to comment on Britannicus' approaching manhood with an eye towards restoring his status within the imperial family.[77] Agrippina had motive in ensuring the succession of Nero before Britannicus could gain power.
Some implicate either his taster Halotus, his doctor Xenophon, or the infamous poisoner Locusta as the administrator of the fatal substance.[78] Some say he died after prolonged suffering following a single dose at dinner, and some have him recovering only to be poisoned again.[79] Among contemporary sources, Seneca the Younger ascribed the emperor's death to natural causes, while Josephus only spoke of rumors on his poisoning.[80]
In modern times, authors have cast doubt on whether Claudius was murdered or merely succumbed to illness or old age.[81] Evidence against his murder includes his old age, his serious illnesses in his last years, his unhealthy lifestyle, and the fact that his taster Halotus continued to serve in the same position under Nero. On the other hand, some modern scholars claim the near universality of the accusations in ancient texts lends credence to the crime.[82] Claudius' ashes were interred in the Mausoleum of Augustus on 24 October 54, after a funeral similar to that of his great-uncle Augustus 40 years earlier.
Already, while alive, he received the widespread private worship of a living princeps[83] and was worshipped in Britannia in his own temple in Camulodunum.
Claudius was deified by Nero and the Senate almost immediately.[84] Those who regard this homage as cynical should note that, cynical or not, such a move would hardly have benefited those involved, had Claudius been "hated", as some commentators, both modern and historic, characterize him. Many of Claudius' less solid supporters quickly became Nero's men. Claudius' will had been changed shortly before his death to either recommend Nero and Britannicus jointly or perhaps just Britannicus, who would have been considered an adult man according to Roman law only a few months later.
Agrippina had sent away Narcissus shortly before Claudius' death, and now murdered the freedman. The last act of this secretary of letters was to burn all of Claudius' correspondence—most likely so it could not be used against him and others in an already hostile new regime. Thus Claudius' private words about his own policies and motives were lost to history. Just as Claudius had criticized his predecessors in official edicts (see below), Nero often criticized the deceased Emperor and many of Claudius' laws and edicts were disregarded under the reasoning that he was too stupid and senile to have meant them.[85]
Seneca's Apocolocyntosis mocks the deification of Claudius and reinforces the view of Claudius as an unpleasant fool; this remained the official view for the duration of Nero's reign. Eventually Nero stopped referring to his deified adoptive father at all, and realigned with his birth family. Claudius' temple was left unfinished after only some of the foundation had been laid down. Eventually the site was overtaken by Nero's Golden House.[86]
The Flavians, who had risen to prominence under Claudius, took a different tack. They were in a position where they needed to shore up their legitimacy, but also justify the fall of the Julio-Claudians. They reached back to Claudius, in contrast with Nero, to show that they were associated with good. Commemorative coins were issued of Claudius and his son Britannicus, who had been a friend of the Emperor Titus (Titus was born in 39, Britannicus was born in 41). When Nero's Golden House was burned, the Temple of Claudius was finally completed on the Caelian Hill.[86]
However, as the Flavians became established, they needed to emphasize their own credentials more, and their references to Claudius ceased. Instead, he was lumped with the other emperors of the fallen dynasty. His state cult in Rome probably continued until the abolition of all such cults of dead Emperors by Maximinus Thrax in 237–238.[87] The Feriale Duranum, probably identical to the festival calendars of every regular army unit, assigns him a sacrifice of a steer on his birthday, the Kalends of August.[88] And such commemoration (and consequent feasting) probably continued until the Christianization and disintegration of the army in the late 4th century.[89]
The main ancient historians Tacitus, Suetonius, and Cassius Dio all wrote after the last of the Flavians had gone. All three were senators or equites. They took the side of the Senate in most conflicts with the Princeps, invariably viewing him as being in the wrong. This resulted in biases, both conscious and unconscious. Suetonius lost access to the official archives shortly after beginning his work. He was forced to rely on second-hand accounts when it came to Claudius (with the exception of Augustus' letters, which had been gathered earlier). Suetonius painted Claudius as a ridiculous figure, belittling many of his acts and attributing the objectively good works to his retinue.[90]
Tacitus wrote a narrative for his fellow senators and fitted each of the emperors into a simple mold of his choosing.[91] He wrote of Claudius as a passive pawn and an idiot in affairs relating to the palace and often in public life. During his censorship of 47–48 Tacitus allows the reader a glimpse of a Claudius who is more statesmanlike (XI.23–25), but it is a mere glimpse. Tacitus is usually held to have 'hidden' his use of Claudius' writings and to have omitted Claudius' character from his works.[92] Even his version of Claudius' Lyons tablet speech is edited to be devoid of the Emperor's personality. Dio was less biased, but seems to have used Suetonius and Tacitus as sources. Thus the conception of Claudius as the weak fool, controlled by those he supposedly ruled, was preserved for the ages.
As time passed, Claudius was mostly forgotten outside of the historians' accounts. His books were lost first, as their antiquarian subjects became unfashionable. In the 2nd century, Pertinax, who shared his birthday, became emperor, overshadowing commemoration of Claudius.[93]
In literature, Claudius and his contemporaries appear in the historical novel The Roman by Mika Waltari. Canadian-born science fiction writer A. E. van Vogt reimagined Robert Graves' Claudius story, in his two novels, Empire of the Atom and The Wizard of Linn.
The historical novel Chariot of the Soul by Linda Proud features Claudius as host and mentor of the young Togidubnus, son of King Verica of the Atrebates, during his ten-year stay in Rome. When Togidubnus returns to Britain in advance of the Roman army, it is with a mission given to him by Claudius.
en/1169.html.txt
ADDED
@@ -0,0 +1,198 @@
Claudius (/ˈklɔːdiəs/ KLAW-dee-əs; Tiberius Claudius Caesar Augustus Germanicus; 1 August 10 BC – 13 October AD 54) was Roman emperor from AD 41 to 54. Born to Drusus and Antonia Minor at Lugdunum in Roman Gaul, where his father was stationed as a military legate, he was the first Roman emperor to be born outside Italy. Nonetheless, Claudius was an Italic of Sabine origins[2] and a member of the Julio-Claudian dynasty. Because he was afflicted with a limp and slight deafness due to sickness at a young age, his family ostracized him and excluded him from public office until his consulship, shared with his nephew Caligula in 37.
Claudius' infirmity probably saved him from the fate of many other nobles during the purges of Tiberius' and Caligula's reigns; potential enemies did not see him as a serious threat. His survival led to his being declared emperor by the Praetorian Guard after Caligula's assassination, at which point he was the last male of his family. Despite his lack of experience, Claudius proved to be an able and efficient administrator. He expanded the imperial bureaucracy to include freedmen, and helped to restore the empire's finances after the excess of Caligula's reign. He was also an ambitious builder, constructing many new roads, aqueducts, and canals across the Empire. During his reign the Empire started its successful conquest of Britain.
Having a personal interest in law, he presided at public trials, and issued up to twenty edicts a day. He was seen as vulnerable throughout his reign, particularly by elements of the nobility. Claudius was constantly forced to shore up his position; this resulted in the deaths of many senators. These events damaged his reputation among the ancient writers, though more recent historians have revised this opinion. Many authors contend that he was murdered by his own wife. After his death at the age of 63, his grand-nephew and legally adopted step-son Nero succeeded him as emperor. His 13-year reign (slightly longer than Nero's) would not be surpassed by any successors until that of Domitian, who reigned for 15 years.
He was a descendant of the Octavii Rufi (through Gaius Octavius), Julii Caesares (through Julia Minor and Julia (Mark Antony's mother)), and the Claudii Nerones (through Nero Claudius Drusus). He was a step-grandson (through his father Drusus) and great-nephew (through his mother Antonia Minor) of Augustus. Tiberius was his father's brother. Through his brother Germanicus, Claudius was an uncle of Caligula and a great uncle of Nero. Through his mother, Antonia Minor, he was also a grandson of Mark Antony and Octavia Minor.
Claudius was born on 1 August 10 BC at Lugdunum (modern Lyon, France). He had two older siblings, Germanicus and Livilla. His mother, Antonia, may have had two other children who died young.
His maternal grandparents were Mark Antony and Octavia Minor, Augustus' sister, and he was therefore the great-great grandnephew of Gaius Julius Caesar. His paternal grandparents were Livia, Augustus' third wife, and Tiberius Claudius Nero. During his reign, Claudius revived the rumor that his father Drusus was actually the illegitimate son of Augustus, to give the appearance that Augustus was Claudius' paternal grandfather.
In 9 BC, his father Drusus unexpectedly died on campaign in Germania, possibly from illness. Claudius was then left to be raised by his mother, who never remarried. When Claudius' disability became evident, the relationship with his family turned sour. Antonia referred to him as a monster, and used him as a standard for stupidity. She seems to have passed her son off to his grandmother Livia for a number of years.[3]
Livia was a little kinder, but nevertheless often sent him short, angry letters of reproof. He was put under the care of a "former mule-driver"[4] to keep him disciplined, under the logic that his condition was due to laziness and a lack of will-power. However, by the time he reached his teenage years his symptoms apparently waned and his family took some notice of his scholarly interests.[5]
In AD 7, Livy was hired to tutor him in history, with the assistance of Sulpicius Flavus. He spent a lot of his time with the latter and the philosopher Athenodorus. Augustus, according to a letter, was surprised at the clarity of Claudius' oratory.[5] Expectations about his future began to increase.
His work as a budding historian damaged his prospects for advancement in public life. According to Vincent Scramuzza and others, Claudius began work on a history of the Civil Wars that was either too truthful or too critical of Octavian[6]—then reigning as Augustus Caesar. In either case, it was far too early for such an account, and may have only served to remind Augustus that Claudius was Antony's descendant. His mother and grandmother quickly put a stop to it, and this may have convinced them that Claudius was not fit for public office. He could not be trusted to toe the existing party line.[7]
When he returned to the narrative later in life, Claudius skipped over the wars of the Second Triumvirate altogether. But the damage was done, and his family pushed him into the background. When the Arch of Pavia was erected to honor the Imperial clan in 8 AD, Claudius' name (now Tiberius Claudius Nero Germanicus after his elevation to paterfamilias of Claudii Nerones on the adoption of his brother) was inscribed on the edge—past the deceased princes, Gaius and Lucius, and Germanicus' children. There is some speculation that the inscription was added by Claudius himself decades later, and that he originally did not appear at all.[7]
When Augustus died in AD 14, Claudius—then aged 23—appealed to his uncle Tiberius to allow him to begin the cursus honorum. Tiberius, the new Emperor, responded by granting Claudius consular ornaments. Claudius requested office once more and was snubbed. Since the new Emperor was no more generous than the old, Claudius gave up hope of public office and retired to a scholarly, private life.
Despite the disdain of the Imperial family, it seems that from very early on the general public respected Claudius. At Augustus' death, the equites, or knights, chose Claudius to head their delegation. When his house burned down, the Senate demanded it be rebuilt at public expense. They also requested that Claudius be allowed to debate in the Senate. Tiberius turned down both motions, but the sentiment remained.
During the period immediately after the death of Tiberius' son, Drusus, Claudius was pushed by some quarters as a potential heir. This again suggests the political nature of his exclusion from public life. However, as this was also the period during which the power and terror of the commander of the Praetorian Guard, Sejanus, was at its peak, Claudius chose to downplay this possibility.
After the death of Tiberius, the new emperor Caligula (the son of Claudius' brother Germanicus) recognized Claudius to be of some use. He appointed Claudius his co-consul in 37 in order to emphasize the memory of Caligula's deceased father Germanicus. Despite this, Caligula relentlessly tormented his uncle: playing practical jokes, charging him enormous sums of money, humiliating him before the Senate, and the like. According to Cassius Dio Claudius became very sickly and thin by the end of Caligula's reign, most likely due to stress.[8] A possible surviving portrait of Claudius from this period may support this.
On 24 January 41, Caligula was assassinated in a broad-based conspiracy involving the Praetorian commander Cassius Chaerea and several senators. There is no evidence that Claudius had a direct hand in the assassination, although it has been argued that he knew about the plot—particularly since he left the scene of the crime shortly before his nephew was murdered.[9] However, after the deaths of Caligula's wife and daughter, it became apparent that Cassius intended to go beyond the terms of the conspiracy and wipe out the Imperial family.[10]
In the chaos following the murder, Claudius witnessed the German guard cut down several uninvolved noblemen, including many of his friends. He fled to the palace to hide. According to tradition, a Praetorian named Gratus found him hiding behind a curtain and suddenly declared him princeps.[10] A section of the guard may have planned in advance to seek out Claudius, perhaps with his approval. They reassured him that they were not one of the battalions looking for revenge. He was spirited away to the Praetorian camp and put under their protection.
The Senate quickly met and began debating a change of government, but this eventually devolved into an argument over which of them would be the new princeps. When they heard of the Praetorians' claim, they demanded that Claudius be delivered to them for approval, but he refused, sensing the danger that would come with complying. Some historians, particularly Josephus,[11] claim that Claudius was directed in his actions by the Judaean King Herod Agrippa. However, an earlier version of events by the same ancient author downplays Agrippa's role[12] so it remains uncertain. Eventually the Senate was forced to give in and, in return, Claudius pardoned nearly all the assassins.
Claudius took several steps to legitimize his rule against potential usurpers, most of them emphasizing his place within the Julio-Claudian family. He adopted the name "Caesar" as a cognomen, as the name still carried great weight with the populace. In order to do so, he dropped the cognomen "Nero" which he had adopted as paterfamilias of the Claudii Nerones when his brother Germanicus was adopted out.[13] As Pharaoh of Egypt, Claudius adopted the royal titulary Tiberios Klaudios, Autokrator Heqaheqau Meryasetptah, Kanakht Djediakhshuemakhet ("Tiberius Claudius, Emperor and ruler of rulers, beloved of Isis and Ptah, the strong bull of the stable moon on the horizon").[14]
While Claudius had never been formally adopted either by Augustus or his successors, he was nevertheless the grandson of Augustus' sister Octavia, and so he felt that he had the right of family. He also adopted the name "Augustus" as the two previous emperors had done at their accessions. He kept the honorific "Germanicus" to display the connection with his heroic brother. He deified his paternal grandmother Livia to highlight her position as wife of the divine Augustus. Claudius frequently used the term "filius Drusi" (son of Drusus) in his titles, in order to remind the people of his legendary father and lay claim to his reputation.[13]
Since Claudius was the first Emperor proclaimed on the initiative of the Praetorian Guard instead of the Senate, his repute suffered at the hands of commentators (such as Seneca). Moreover, he was the first Emperor who resorted to bribery as a means to secure army loyalty and rewarded the soldiers of the Praetorian Guard that had elevated him with 15,000 sesterces.[15] Tiberius and Augustus had both left gifts to the army and guard in their wills, and upon Caligula's death the same would have been expected, even if no will existed. Claudius remained grateful to the guard, however, issuing coins with tributes to the Praetorians in the early part of his reign.[16]
Pliny the Elder noted, according to the 1938 Loeb Classical Library translation by Harris Rackham, "... many people do not allow any gems in a signet-ring, and seal with the gold itself; this was a fashion invented when Claudius Cæsar was emperor."[17]
Claudius restored the status of the peaceful Imperial Roman provinces of Macedonia and Achaea as senatorial provinces.[18]
Under Claudius, the Empire underwent its first major expansion since the reign of Augustus. The provinces of Thrace, Noricum, Lycia, and Judea were annexed (or put under direct rule) under various circumstances during his term. The annexation of Mauretania, begun under Caligula, was completed after the defeat of rebel forces, and the official division of the former client kingdom into two Imperial provinces.[19] The most far-reaching conquest was that of Britannia.[20]
In 43, Claudius sent Aulus Plautius with four legions to Britain (Britannia) after an appeal from an ousted tribal ally. Britain was an attractive target for Rome because of its material wealth – particularly mines and slaves. It was also a haven for Gallic rebels and the like, and so could not be left alone much longer. Claudius himself traveled to the island after the completion of initial offensives, bringing with him reinforcements and elephants. The latter must have made an impression on the Britons when they were displayed in the large tribal centre of Camulodunum, modern-day Colchester. The Roman colonia of Colonia Claudia Victricensis was established at Camulodunum as the capital of the new province of Britannia,[22] where a large temple was dedicated in his honour.[22]
He left after 16 days, but remained in the provinces for some time. The Senate granted him a triumph for his efforts. Only members of the Imperial family were allowed such honours, but Claudius subsequently lifted this restriction for some of his conquering generals. He was granted the honorific "Britannicus" but only accepted it on behalf of his son, never using the title himself. When the British general Caractacus was captured in 50, Claudius granted him clemency. Caractacus lived out his days on land provided by the Roman state, an unusual end for an enemy commander.
Claudius conducted a census in 48 that found 5,984,072 Roman citizens[23] (adult males with Roman citizenship; women, children, slaves, and free adult males without Roman citizenship were not counted), an increase of around a million since the census conducted at Augustus' death. He had helped increase this number through the foundation of Roman colonies that were granted blanket citizenship. These colonies were often made out of existing communities, especially those with elites who could rally the populace to the Roman cause. Several colonies were placed in new provinces or on the border of the Empire to secure Roman holdings as quickly as possible.
Claudius personally judged many of the legal cases tried during his reign. Ancient historians have many complaints about this, stating that his judgments were variable and sometimes did not follow the law.[24] He was also easily swayed. Nevertheless, Claudius paid detailed attention to the operation of the judicial system.[25]
He extended the summer court session, as well as the winter term, by shortening the traditional breaks. Claudius also made a law requiring plaintiffs to remain in the city while their cases were pending, as defendants had previously been required to do. These measures had the effect of clearing out the docket. The minimum age for jurors was also raised to 25 in order to ensure a more experienced jury pool.[25]
Claudius also settled disputes in the provinces. He freed the island of Rhodes from Roman rule for their good faith and exempted Ilium (Troy) from taxes. Early in his reign, the Greeks and Jews of Alexandria sent him two embassies at once after riots broke out between the two communities. This resulted in the famous "Letter to the Alexandrians", which reaffirmed Jewish rights in the city but also forbade them to move in more families en masse. According to Josephus, he then reaffirmed the rights and freedoms of all the Jews in the Empire.[26]
One of Claudius's investigators discovered that many old Roman citizens based in the city of Tridentum (modern Trento) were not in fact citizens.[27] The Emperor issued a declaration, contained in the Tabula clesiana, that they would be considered to hold citizenship from then on, since to strip them of their status would cause major problems. However, in individual cases, Claudius punished false assumption of citizenship harshly, making it a capital offense. Similarly, any freedmen found to be laying false claim to membership of the Roman equestrian order were sold back into slavery.[28]
Numerous edicts were issued throughout Claudius' reign. These were on a number of topics, everything from medical advice to moral judgments. A famous medical example is one promoting yew juice as a cure for snakebite.[29] Suetonius wrote that he is even said to have thought of an edict allowing public flatulence for good health.[30] One of the more famous edicts concerned the status of sick slaves. Masters had been abandoning ailing slaves at the temple of Aesculapius on Tiber Island to die instead of providing them with medical assistance and care, and then reclaiming them if they lived. Claudius ruled that slaves who were thus abandoned and recovered after such treatment would be free. Furthermore, masters who chose to kill slaves rather than take care of them were liable to be charged with murder.[31]
Claudius embarked on many public works throughout his reign, both in the capital and in the provinces. He built two aqueducts, the Aqua Claudia, begun by Caligula, and the Anio Novus. These entered the city in 52 and met at the Porta Maggiore. He also restored a third, the Aqua Virgo.
He paid special attention to transportation. Throughout Italy and the provinces he built roads and canals. Among these was a large canal leading from the Rhine to the sea, as well as a road from Italy to Germany – both begun by his father, Drusus. Closer to Rome, he built a navigable canal on the Tiber, leading to Portus, his new port just north of Ostia. This port was constructed in a semicircle with two moles and a lighthouse at its mouth. The construction also had the effect of reducing flooding in Rome.
The port at Ostia was part of Claudius' solution to the constant grain shortages that occurred in winter, after the Roman shipping season. The other part of his solution was to insure the ships of grain merchants who were willing to risk travelling to Egypt in the off-season. He also granted their sailors special privileges, including citizenship and exemption from the Lex Papia Poppaea, a law that regulated marriage. In addition, he repealed the taxes that Caligula had instituted on food, and further reduced taxes on communities suffering drought or famine.
The last part of Claudius' plan was to increase the amount of arable land in Italy. This was to be achieved by draining the Fucine lake, which would have the added benefit of making the nearby river navigable year-round.[32] A tunnel was dug through the lake bed, but the plan was a failure. The tunnel was crooked and not large enough to carry the water, which caused it to back up when opened. The resultant flood washed out a large gladiatorial exhibition held to commemorate the opening, causing Claudius to run for his life along with the other spectators. The draining of the lake continued to present a problem well into the Middle Ages. It was finally achieved by the Prince Torlonia in the 19th century, producing over 160,000 acres (650 km2) of new arable land.[33] He expanded the Claudian tunnel to three times its original size.
Because of the circumstances of his accession, Claudius took great pains to please the Senate. During regular sessions, the Emperor sat among the Senate body, speaking in turn. When introducing a law, he sat on a bench between the consuls in his position as holder of the power of Tribune (the Emperor could not officially serve as a Tribune of the Plebes as he was a Patrician, but it was a power taken by previous rulers). He refused to accept all his predecessors' titles (including Imperator) at the beginning of his reign, preferring to earn them in due course. He allowed the Senate to issue its own bronze coinage for the first time since Augustus. He also put the Imperial provinces of Macedonia and Achaea back under Senate control.
Claudius set about remodeling the Senate into a more efficient, representative body. He chided the senators about their reluctance to debate bills introduced by himself, as noted in the fragments of a surviving speech:
If you accept these proposals, Conscript Fathers, say so at once and simply, in accordance with your convictions. If you do not accept them, find alternatives, but do so here and now; or if you wish to take time for consideration, take it, provided you do not forget that you must be ready to pronounce your opinion whenever you may be summoned to meet. It ill befits the dignity of the Senate that the consul designate should repeat the phrases of the consuls word for word as his opinion, and that every one else should merely say 'I approve', and that then, after leaving, the assembly should announce 'We debated'.[34]
In 47 he assumed the office of censor with Lucius Vitellius, which had been allowed to lapse for some time. He struck the names of many senators and equites who no longer met qualifications, but showed respect by allowing them to resign in advance. At the same time, he sought to admit eligible men from the provinces. The Lyon Tablet preserves his speech on the admittance of Gallic senators, in which he addresses the Senate with reverence but also with criticism for their disdain of these men. He even jokes about how the Senate had admitted members from beyond Gallia Narbonensis (Lyons, France), i.e. himself. He also increased the number of Patricians by adding new families to the dwindling number of noble lines. Here he followed the precedent of Lucius Junius Brutus and Julius Caesar.
Nevertheless, many in the Senate remained hostile to Claudius, and many plots were made on his life. This hostility carried over into the historical accounts. As a result, Claudius reduced the Senate's power for the sake of efficiency. The administration of Ostia was turned over to an Imperial Procurator after construction of the port. Administration of many of the empire's financial concerns was turned over to Imperial appointees and freedmen. This led to further resentment and suggestions that these same freedmen were ruling the Emperor.
Several coup attempts were made during Claudius' reign, resulting in the deaths of many senators. Appius Silanus was executed early in Claudius' reign under questionable circumstances.[31] Shortly after, a large rebellion was undertaken by the Senator Vinicianus and Scribonianus, the governor of Dalmatia, and gained quite a few senatorial supporters. It ultimately failed because of the reluctance of Scribonianus' troops, which led to the suicide of the main conspirators.
Many other senators tried different conspiracies and were condemned. Claudius' son-in-law Pompeius Magnus was executed for his part in a conspiracy with his father Crassus Frugi. Another plot involved the consulars Lusius Saturninus, Cornelius Lupus, and Pompeius Pedo.
In 46, Asinius Gallus, the grandson of Asinius Pollio, and Titus Statilius Taurus Corvinus were exiled for a plot hatched with several of Claudius' own freedmen. Valerius Asiaticus was executed without public trial for unknown reasons. The ancient sources say the charge was adultery, and that Claudius was tricked into issuing the punishment. However, Claudius singles out Asiaticus for special damnation in his speech on the Gauls, which dates over a year later, suggesting that the charge must have been much more serious.
Asiaticus had been a claimant to the throne in the chaos following Caligula's death and a co-consul with the Titus Statilius Taurus Corvinus mentioned above. Most of these conspiracies took place before Claudius' term as Censor, and may have induced him to review the Senatorial rolls. The conspiracy of Gaius Silius in the year after his Censorship, 48, is detailed in Book 11 of Tacitus' Annals. This section of Tacitus' history narrates the alleged conspiracy of Claudius' third wife, Messalina. Suetonius states that a total of 35 senators and 300 knights were executed for offenses during Claudius' reign.[35] Needless to say, the responses to these conspiracies could not have helped Senate–emperor relations.
Claudius was hardly the first emperor to use freedmen to help with the day-to-day running of the Empire. He was, however, forced to increase their role as the powers of the princeps became more centralized and the burden larger. This was partly due to the ongoing hostility of the Senate, as mentioned above, but also due to his respect for the senators. Claudius did not want free-born magistrates to have to serve under him, as if they were not peers.
The secretariat was divided into bureaus, with each being placed under the leadership of one freedman. Narcissus was the secretary of correspondence. Pallas became the secretary of the treasury. Callistus became secretary of justice. There was a fourth bureau for miscellaneous issues, which was put under Polybius until his execution for treason. The freedmen could also officially speak for the Emperor, as when Narcissus addressed the troops in Claudius' stead before the conquest of Britain.[36]
Since these were important positions, the senators were aghast at their being placed in the hands of former slaves. If freedmen had total control of money, letters, and law, it seemed it would not be hard for them to manipulate the Emperor. This is exactly the accusation put forth by the ancient sources. However, these same sources admit that the freedmen were loyal to Claudius.[36]
He was similarly appreciative of them and gave them due credit for policies where he had used their advice. However, if they showed treasonous inclinations, the Emperor did punish them with just force, as in the case of Polybius and Pallas' brother, Felix. There is no evidence that the character of Claudius' policies and edicts changed with the rise and fall of the various freedmen, suggesting that he was firmly in control throughout.
Regardless of the extent of their political power, the freedmen did manage to amass wealth through their positions. Pliny the Elder notes that several of them were richer than Crassus, the richest man of the Republican era.[37]
Claudius, as the author of a treatise on Augustus' religious reforms, felt himself in a good position to institute some of his own. He had strong opinions about the proper form for state religion. He refused the request of Alexandrian Greeks to dedicate a temple to his divinity, saying that only gods may choose new gods. He restored lost days to festivals and got rid of many extraneous celebrations added by Caligula. He re-instituted old observances and archaic language.
Claudius was concerned with the spread of eastern mysteries within the city and searched for more Roman replacements. He emphasized the Eleusinian mysteries which had been practiced by so many during the Republic. He expelled foreign astrologers, and at the same time rehabilitated the old Roman soothsayers (known as haruspices) as a replacement. He was especially hard on Druidism, because of its incompatibility with the Roman state religion and its proselytizing activities.[38]
Claudius forbade proselytizing in any religion, even in those regions where he allowed natives to worship freely.
It is also reported that at one time he expelled the Jews from Rome, probably because the Jews within the city caused continuous disturbances at the instigation of Chrestus.[a]
According to Suetonius, Claudius was extraordinarily fond of games. He is said to have risen with the crowd after gladiatorial matches and given unrestrained praise to the fighters.[39] Claudius also presided over many new and original events. Soon after coming into power, Claudius instituted games to be held in honor of his father on the latter's birthday.[40] Annual games were also held in honour of his accession, and took place at the Praetorian camp where Claudius had first been proclaimed Emperor.[41]
Claudius organised a performance of the Secular Games, marking the 800th anniversary of the founding of Rome. Augustus had performed the same games less than a century prior. Augustus' excuse was that the interval for the games was 110 years, not 100, but his date actually did not qualify under either reasoning.[41] Claudius also presented naval battles to mark the attempted draining of the Fucine Lake, as well as many other public games and shows.
At Ostia, in front of a crowd of spectators, Claudius fought a killer whale which was trapped in the harbour. The event was witnessed by Pliny the Elder:
A killer whale was actually seen in the harbour of Ostia, locked in combat with the emperor Claudius. She had come when he was completing the construction of the harbour, drawn there by the wreck of a ship bringing leather hides from Gaul, and feeding there over a number of days, had made a furrow in the shallows: the waves had raised up such a mound of sand that she couldn't turn around at all, and while she was pursuing her banquet as the waves moved it shorewards, her back stuck up out of the water like the overturned keel of a boat. The Emperor ordered that a large array of nets be stretched across the mouths of the harbour, and setting out in person with the Praetorian cohorts gave a show to the Roman people, soldiers showering lances from attacking ships, one of which I saw swamped by the beast's waterspout and sunk.—"Historia Naturalis" IX.14–15.[42]
Claudius also restored and adorned many public venues in Rome. At the Circus Maximus, the turning posts and starting stalls were replaced in marble and embellished, and an embankment was probably added to prevent flooding of the track.[43] Claudius also reinforced or extended the seating rules that reserved front seating at the Circus for senators.[41] Claudius rebuilt Pompey's Theatre after it had been destroyed by fire, organising special fights at the re-dedication which he observed from a special platform in the orchestra box.[41]
Suetonius and the other ancient authors accused Claudius of being dominated by women and wives, and of being a womanizer.
Claudius married four times, after two failed betrothals. The first betrothal was to his distant cousin Aemilia Lepida, but was broken for political reasons. The second was to Livia Medullina Camilla, which ended with Medullina's sudden death on their wedding day.
Plautia Urgulanilla was the granddaughter of Livia's confidant Urgulania. During their marriage she gave birth to a son, Claudius Drusus. Drusus died of asphyxiation in his early teens, shortly after becoming engaged to Junilla, the daughter of Sejanus.
Claudius later divorced Urgulanilla for adultery and on suspicion of murdering her sister-in-law Apronia. When Urgulanilla gave birth after the divorce, Claudius repudiated the baby girl, Claudia, as the father was allegedly one of his own freedmen. This action made him later the target of criticism by his enemies.
Soon after (possibly in 28), Claudius married Aelia Paetina, a relative of Sejanus, if not Sejanus's adoptive sister. During their marriage, Claudius and Paetina had a daughter, Claudia Antonia. He later divorced her after the marriage became a political liability, although Leon (1948) suggests it may have been due to emotional and mental abuse by Paetina.
Some years after divorcing Aelia Paetina, in 38 or early 39, Claudius married Valeria Messalina, who was his first cousin once removed and closely allied with Caligula's circle. Shortly thereafter, she gave birth to a daughter, Claudia Octavia. A son, first named Tiberius Claudius Germanicus, and later known as Britannicus, was born just after Claudius' accession.
This marriage ended in tragedy. The ancient historians allege that Messalina was a nymphomaniac who was regularly unfaithful to Claudius—Tacitus states she went so far as to compete with a prostitute to see who could have the most sexual partners in a night[44]—and manipulated his policies in order to amass wealth. In 48, Messalina married her lover Gaius Silius in a public ceremony while Claudius was at Ostia.
Sources disagree as to whether or not she divorced the Emperor first, and whether the intention was to usurp the throne. Under Roman law, the spouse needed to be informed that he or she had been divorced before a new marriage could take place; the sources state that Claudius was in total ignorance until after the marriage.[45] Scramuzza, in his biography, suggests that Silius may have convinced Messalina that Claudius was doomed, and the union was her only hope of retaining rank and protecting her children.[46][47][48] The historian Tacitus suggests that Claudius's ongoing term as Censor may have prevented him from noticing the affair before it reached such a critical point.[49] Whatever the case, the result was the execution of Silius, Messalina, and most of her circle.[50]
Claudius did marry once more. The ancient sources tell that his freedmen put forward three candidates, Caligula's third wife Lollia Paulina, Claudius's divorced second wife Aelia Paetina and Claudius's niece Agrippina the Younger. According to Suetonius, Agrippina won out through her feminine wiles.[51]
The truth is probably more political. The attempted coup d'état by Silius and Messalina had probably made Claudius realize the weakness of his position as a member of the Claudian but not the Julian family. This weakness was compounded by the fact that he did not yet have an obvious adult heir, Britannicus being just a boy.[52]
Agrippina was one of the few remaining descendants of Augustus, and her son Lucius Domitius Ahenobarbus (the future Emperor Nero) was one of the last males of the Imperial family. Coup attempts could rally around the pair and Agrippina was already showing such ambition. It has been suggested that the Senate may have pushed for the marriage, to end the feud between the Julian and Claudian branches.[52] This feud dated back to Agrippina's mother's actions against Tiberius after the death of her husband Germanicus (Claudius's brother), actions which Tiberius had gladly punished. In any case, Claudius accepted Agrippina and later adopted the newly mature Nero as his son.
Nero was married to Claudius' daughter Octavia, made joint heir with the underage Britannicus, and promoted; Augustus had similarly named his grandson Postumus Agrippa and his stepson Tiberius as joint heirs,[53] and Tiberius had named Caligula joint heir with his grandson Tiberius Gemellus. Adoption of adults or near adults was an old tradition in Rome, when a suitable natural adult heir was unavailable as was the case during Britannicus' minority. Claudius may have previously looked to adopt one of his sons-in-law to protect his own reign.[54]
Faustus Cornelius Sulla Felix, who was married to Claudius's daughter Claudia Antonia, was only descended from Octavia and Antony on one side – not close enough to the Imperial family to prevent doubts (although that did not stop others from making him the object of a coup attempt against Nero a few years later). Besides which, he was the half-brother of Valeria Messalina and at this time those wounds were still fresh. Nero was more popular with the general public as the grandson of Germanicus and the direct descendant of Augustus.
The historian Suetonius describes the physical manifestations of Claudius' affliction in relatively good detail.[55] His knees were weak and gave way under him and his head shook. He stammered and his speech was confused. He slobbered and his nose ran when he was excited. The Stoic Seneca states in his Apocolocyntosis that Claudius' voice belonged to no land animal, and that his hands were weak as well.[56]
However, he showed no physical deformity, as Suetonius notes that when calm and seated he was a tall, well-built figure of dignitas.[55] When angered or stressed, his symptoms became worse. Historians agree that this condition improved upon his accession to the throne.[57] Claudius himself claimed that he had exaggerated his ailments to save his life.[58]
Modern assessments of his health have changed several times in the past century. Prior to World War II, infantile paralysis (or polio) was widely accepted as the cause. This is the diagnosis used in Robert Graves' Claudius novels, first published in the 1930s. Polio does not explain many of the described symptoms, however, and a more recent theory implicates cerebral palsy as the cause, as outlined by Ernestine Leon.[59] Tourette syndrome has also been considered a possibility.[60][61]
As a person, ancient historians described Claudius as generous and lowbrow, a man who sometimes lunched with the plebeians.[62][63] They also paint him as bloodthirsty and cruel, overly fond of gladiatorial combat and executions, and very quick to anger; Claudius himself acknowledged the latter trait, and apologized publicly for his temper.[64][65] According to the ancient historians he was also overly trusting, and easily manipulated by his wives and freedmen.[35][66] But at the same time they portray him as paranoid and apathetic, dull and easily confused.[67][68]
Claudius' extant works present a different view, painting a picture of an intelligent, scholarly, well-read, and conscientious administrator with an eye to detail and justice. Thus, Claudius becomes an enigma. Since the discovery of his "Letter to the Alexandrians" in the last century, much work has been done to rehabilitate Claudius and determine where the truth lies.
Claudius wrote copiously throughout his life. Arnaldo Momigliano states that during the reign of Tiberius – which covers the peak of Claudius' literary career – it became impolitic to speak of republican Rome. The trend among the young historians was to either write about the new empire or obscure antiquarian subjects. Claudius was the rare scholar who covered both.[69]
Besides the history of Augustus' reign that caused him so much grief, his major works included Tyrrhenica, a twenty-book Etruscan history, and Carchedonica, an eight-volume history of Carthage,[70] as well as an Etruscan dictionary. He also wrote a book on dice-playing. Despite the general avoidance of the Republican era, he penned a defense of Cicero against the charges of Asinius Gallus. Modern historians have used this to determine the nature of his politics and of the aborted chapters of his civil war history.
He proposed a reform of the Latin alphabet by the addition of three new letters. He officially instituted the change during his censorship but they did not survive his reign. Claudius also tried to revive the old custom of putting dots between successive words (Classical Latin was written with no spacing). Finally, he wrote an eight-volume autobiography that Suetonius describes as lacking in taste.[71] Since Claudius (like most of the members of his dynasty) harshly criticized his predecessors and relatives in surviving speeches,[72] it is not hard to imagine the nature of Suetonius' charge.
None of the works survives, but they live on as sources for the surviving histories of the Julio-Claudian dynasty. Suetonius quotes Claudius' autobiography once and must have used it as a source numerous times. Tacitus uses Claudius' arguments for the orthographical innovations mentioned above and may have used him for some of the more antiquarian passages in his annals. Claudius is the source for numerous passages of Pliny's Natural History.[73]
The influence of historical study on Claudius is obvious. In his speech on Gallic senators, he uses a version of the founding of Rome identical to that of Livy, his tutor in adolescence. The speech is meticulous in details, a common mark of all his extant works, and he goes into long digressions on related matters. This indicates a deep knowledge of a variety of historical subjects that he could not help but share. Many of the public works instituted in his reign were based on plans first suggested by Julius Caesar. Levick believes this emulation of Caesar may have spread to all aspects of his policies.[74]
His censorship seems to have been based on those of his ancestors, particularly Appius Claudius Caecus, and he used the office to put into place many policies based on those of Republican times. This is when many of his religious reforms took effect, and his building efforts greatly increased during his tenure. In fact, his assumption of the office of Censor may have been motivated by a desire to see his academic labors bear fruit. For example, he believed (as most Romans did) that Caecus had used the censorship to introduce the letter "R"[75] and so used his own term to introduce his new letters.
The consensus of ancient historians was that Claudius was murdered by poison—possibly contained in mushrooms or on a feather—and died in the early hours of 13 October 54.[76]
Nearly all implicate his final wife, Agrippina, as the instigator. Agrippina and Claudius had become more combative in the months leading up to his death. This carried on to the point where Claudius openly lamented his bad wives, and began to comment on Britannicus' approaching manhood with an eye towards restoring his status within the imperial family.[77] Agrippina had motive in ensuring the succession of Nero before Britannicus could gain power.
Some implicate either his taster Halotus, his doctor Xenophon, or the infamous poisoner Locusta as the administrator of the fatal substance.[78] Some say he died after prolonged suffering following a single dose at dinner, and some have him recovering only to be poisoned again.[79] Among contemporary sources, Seneca the Younger ascribed the emperor's death to natural causes, while Josephus only spoke of rumors of his poisoning.[80]
In modern times, authors have cast doubt on whether Claudius was murdered or merely succumbed to illness or old age.[81] Evidence against his murder includes his old age, his serious illnesses in his last years, his unhealthy lifestyle, and the fact that his taster Halotus continued to serve in the same position under Nero. On the other hand, some modern scholars claim the near universality of the accusations in ancient texts lends credence to the crime.[82] Claudius' ashes were interred in the Mausoleum of Augustus on 24 October 54, after a funeral similar to that of his great-uncle Augustus 40 years earlier.
Already, while alive, he received the widespread private worship of a living princeps[83] and was worshipped in Britannia in his own temple in Camulodunum.
Claudius was deified by Nero and the Senate almost immediately.[84] Those who regard this homage as cynical should note that, cynical or not, such a move would hardly have benefited those involved, had Claudius been "hated", as some commentators, both modern and historic, characterize him. Many of Claudius' less solid supporters quickly became Nero's men. Claudius' will had been changed shortly before his death to either recommend Nero and Britannicus jointly or perhaps just Britannicus, who would have been considered an adult man according to Roman law only a few months later.
Agrippina had sent away Narcissus shortly before Claudius' death, and now murdered the freedman. The last act of this secretary of letters was to burn all of Claudius' correspondence—most likely so it could not be used against him and others in an already hostile new regime. Thus Claudius' private words about his own policies and motives were lost to history. Just as Claudius had criticized his predecessors in official edicts (see below), Nero often criticized the deceased Emperor and many of Claudius' laws and edicts were disregarded under the reasoning that he was too stupid and senile to have meant them.[85]
Seneca's Apocolocyntosis mocks the deification of Claudius and reinforces the view of Claudius as an unpleasant fool; this remained the official view for the duration of Nero's reign. Eventually Nero stopped referring to his deified adoptive father at all, and realigned with his birth family. Claudius' temple was left unfinished after only some of the foundation had been laid down. Eventually the site was overtaken by Nero's Golden House.[86]
The Flavians, who had risen to prominence under Claudius, took a different tack. They were in a position where they needed to shore up their legitimacy, but also justify the fall of the Julio-Claudians. They reached back to Claudius in contrast with Nero, to show that they were associated with good. Commemorative coins were issued of Claudius and his son Britannicus, who had been a friend of the Emperor Titus (Titus was born in 39, Britannicus was born in 41). When Nero's Golden House was burned, the Temple of Claudius was finally completed on the Caelian Hill.[86]
However, as the Flavians became established, they needed to emphasize their own credentials more, and their references to Claudius ceased. Instead, he was lumped with the other emperors of the fallen dynasty. His state cult in Rome probably continued until the abolition of all such cults of dead Emperors by Maximinus Thrax in 237–238.[87] The Feriale Duranum, probably identical to the festival calendars of every regular army unit, assigns him a sacrifice of a steer on his birthday, the Kalends of August.[88] And such commemoration (and consequent feasting) probably continued until the Christianization and disintegration of the army in the late 4th century.[89]
The main ancient historians Tacitus, Suetonius, and Cassius Dio all wrote after the last of the Flavians had gone. All three were senators or equites. They took the side of the Senate in most conflicts with the Princeps, invariably viewing him as being in the wrong. This resulted in biases, both conscious and unconscious. Suetonius lost access to the official archives shortly after beginning his work. He was forced to rely on second-hand accounts when it came to Claudius (with the exception of Augustus' letters, which had been gathered earlier). Suetonius painted Claudius as a ridiculous figure, belittling many of his acts and attributing the objectively good works to his retinue.[90]
Tacitus wrote a narrative for his fellow senators and fitted each of the emperors into a simple mold of his choosing.[91] He wrote of Claudius as a passive pawn and an idiot in affairs relating to the palace and often in public life. During his censorship of 47–48 Tacitus allows the reader a glimpse of a Claudius who is more statesmanlike (XI.23–25), but it is a mere glimpse. Tacitus is usually held to have 'hidden' his use of Claudius' writings and to have omitted Claudius' character from his works.[92] Even his version of Claudius' Lyons tablet speech is edited to be devoid of the Emperor's personality. Dio was less biased, but seems to have used Suetonius and Tacitus as sources. Thus the conception of Claudius as the weak fool, controlled by those he supposedly ruled, was preserved for the ages.
As time passed, Claudius was mostly forgotten outside of the historians' accounts. His books were lost first, as their antiquarian subjects became unfashionable. In the 2nd century, Pertinax, who shared his birthday, became emperor, overshadowing commemoration of Claudius.[93]
In literature, Claudius and his contemporaries appear in the historical novel The Roman by Mika Waltari. Canadian-born science fiction writer A. E. van Vogt reimagined Robert Graves' Claudius story, in his two novels, Empire of the Atom and The Wizard of Linn.
The historical novel Chariot of the Soul by Linda Proud features Claudius as host and mentor of the young Togidubnus, son of King Verica of the Atrebates, during his ten-year stay in Rome. When Togidubnus returns to Britain in advance of the Roman army, it is with a mission given to him by Claudius.
en/117.html.txt
ADDED
@@ -0,0 +1,181 @@
Coordinates: 51°N 9°E
Location of Germany in Europe (light green and dark grey) and in the European Union (light green)
Germany (German: Deutschland, German pronunciation: [ˈdɔʏtʃlant]), officially the Federal Republic of Germany (German: Bundesrepublik Deutschland),[e] is a country in Central and Western Europe. Covering an area of 357,022 square kilometres (137,847 sq mi), it lies between the Baltic and North seas to the north, and the Alps to the south. It borders Denmark to the north, Poland and the Czech Republic to the east, Austria and Switzerland to the south, and France, Luxembourg, Belgium, and the Netherlands to the west.
Various Germanic tribes have inhabited the northern parts of modern Germany since classical antiquity. A region named Germania was documented before AD 100. Beginning in the 10th century, German territories formed a central part of the Holy Roman Empire. During the 16th century, northern German regions became the centre of the Protestant Reformation. Following the Napoleonic Wars and the dissolution of the Holy Roman Empire in 1806, the German Confederation was formed in 1815. In 1871, Germany became a nation state when most of the German states unified into the Prussian-dominated German Empire. After World War I and the German Revolution of 1918–1919, the Empire was replaced by the parliamentary Weimar Republic. The Nazi seizure of power in 1933 led to the establishment of a dictatorship, World War II, and the Holocaust. After the end of World War II in Europe and a period of Allied occupation, two new German states were founded: West Germany and East Germany. The Federal Republic of Germany was a founding member of the European Economic Community and the European Union. The country was reunified on 3 October 1990.
Today, Germany is a federal parliamentary republic led by a chancellor. With 83 million inhabitants of its 16 constituent states, it is the second-most populous country in Europe after Russia, as well as the most populous member state of the European Union. Its capital and largest city is Berlin, and its financial centre is Frankfurt; the largest urban area is the Ruhr.
Germany is a great power with a strong economy; it has the largest economy in Europe, the world's fourth-largest economy by nominal GDP, and the fifth-largest by PPP. As a global leader in several industrial and technological sectors, it is both the world's third-largest exporter and importer of goods. A highly developed country with a very high standard of living, it offers social security and a universal health care system, environmental protections, and a tuition-free university education. Germany is also a member of the United Nations, NATO, the G7, the G20, and the OECD. Known for its long and rich cultural history, Germany has many World Heritage sites and is among the top tourism destinations in the world.
The English word Germany derives from the Latin Germania, which came into use after Julius Caesar adopted it for the peoples east of the Rhine.[10] The German term Deutschland, originally diutisciu land ("the German lands") is derived from deutsch, descended from Old High German diutisc "of the people" (from diot or diota "people"), originally used to distinguish the language of the common people from Latin and its Romance descendants. This in turn descends from Proto-Germanic *þiudiskaz "of the people" (see also the Latinised form Theodiscus), derived from *þeudō, descended from Proto-Indo-European *tewtéh₂- "people", from which the word Teutons also originates.[11]
Ancient humans were present in Germany at least 600,000 years ago.[12] The first non-modern human fossil (the Neanderthal) was discovered in the Neander Valley.[13] Similarly dated evidence of modern humans has been found in the Swabian Jura, including 42,000-year-old flutes which are the oldest musical instruments ever found,[14] the 40,000-year-old Lion Man,[15] and the 35,000-year-old Venus of Hohle Fels.[16] The Nebra sky disk, created during the European Bronze Age, is attributed to a German site.[17]
The Germanic tribes are thought to date from the Nordic Bronze Age or the Pre-Roman Iron Age.[18] From southern Scandinavia and north Germany, they expanded south, east, and west, coming into contact with the Celtic, Iranian, Baltic, and Slavic tribes.[19]
Under Augustus, Rome began to invade Germania. In 9 AD, three Roman legions were defeated by Arminius.[20] By 100 AD, when Tacitus wrote Germania, Germanic tribes had settled along the Rhine and the Danube (the Limes Germanicus), occupying most of modern Germany. However, Baden-Württemberg, southern Bavaria, southern Hesse and the western Rhineland had been incorporated into Roman provinces.[21][22][23] Around 260, Germanic peoples broke into Roman-controlled lands.[24] After the invasion of the Huns in 375, and with the decline of Rome from 395, Germanic tribes moved farther southwest: the Franks established the Frankish Kingdom and pushed east to subjugate Saxony and Bavaria, and areas of what is today eastern Germany were inhabited by Western Slavic tribes.[21]
Charlemagne founded the Carolingian Empire in 800; it was divided in 843[25] and the Holy Roman Empire emerged from the eastern portion. The territory initially known as East Francia stretched from the Rhine in the west to the Elbe River in the east and from the North Sea to the Alps.[25] The Ottonian rulers (919–1024) consolidated several major duchies.[26] In 996 Gregory V became the first German Pope, appointed by his cousin Otto III, whom he shortly after crowned Holy Roman Emperor. The Holy Roman Empire absorbed northern Italy and Burgundy under the Salian emperors (1024–1125), although the emperors lost power through the Investiture controversy.[27]
Under the Hohenstaufen emperors (1138–1254), German princes encouraged German settlement to the south and east (Ostsiedlung). Members of the Hanseatic League, mostly north German towns, prospered in the expansion of trade.[28] Population declined starting with the Great Famine in 1315, followed by the Black Death of 1348–50.[29] The Golden Bull issued in 1356 provided the constitutional structure of the Empire and codified the election of the emperor by seven prince-electors.[30]
Johannes Gutenberg introduced moveable-type printing to Europe, laying the basis for the democratization of knowledge.[31] In 1517, Martin Luther incited the Protestant Reformation; the 1555 Peace of Augsburg tolerated the "Evangelical" faith (Lutheranism), but also decreed that the faith of the prince was to be the faith of his subjects (cuius regio, eius religio).[32] From the Cologne War through the Thirty Years' War (1618–1648), religious conflict devastated German lands and significantly reduced the population.[33][34]
The Peace of Westphalia ended religious warfare among the Imperial Estates;[33] their mostly German-speaking rulers were able to choose Roman Catholicism, Lutheranism, or the Reformed faith as their official religion.[35] The legal system initiated by a series of Imperial Reforms (approximately 1495–1555) provided for considerable local autonomy and a stronger Imperial Diet.[36] The House of Habsburg held the imperial crown from 1438 until the death of Charles VI in 1740. Following the War of Austrian Succession and the Treaty of Aix-la-Chapelle, Charles VI's daughter Maria Theresa ruled as Empress Consort when her husband, Francis I, became Emperor.[37][38]
From 1740, dualism between the Austrian Habsburg Monarchy and the Kingdom of Prussia dominated German history. In 1772, 1793, and 1795, Prussia and Austria, along with the Russian Empire, agreed to the Partitions of Poland.[39][40] During the period of the French Revolutionary Wars, the Napoleonic era and the subsequent final meeting of the Imperial Diet, most of the Free Imperial Cities were annexed by dynastic territories; the ecclesiastical territories were secularised and annexed. In 1806 the Imperium was dissolved; France, Russia, Prussia and the Habsburgs (Austria) competed for hegemony in the German states during the Napoleonic Wars.[41]
Following the fall of Napoleon, the Congress of Vienna founded the German Confederation, a loose league of 39 sovereign states. The appointment of the Emperor of Austria as the permanent president reflected the Congress's rejection of Prussia's rising influence. Disagreement within restoration politics partly led to the rise of liberal movements, followed by new measures of repression by Austrian statesman Klemens von Metternich.[42][43] The Zollverein, a tariff union, furthered economic unity.[44] In light of revolutionary movements in Europe, intellectuals and commoners started the revolutions of 1848 in the German states. King Frederick William IV of Prussia was offered the title of Emperor, but with a loss of power; he rejected the crown and the proposed constitution, a temporary setback for the movement.[45]
King William I appointed Otto von Bismarck as the Minister President of Prussia in 1862. Bismarck successfully concluded the war with Denmark in 1864; the subsequent decisive Prussian victory in the Austro-Prussian War of 1866 enabled him to create the North German Confederation which excluded Austria. After the defeat of France in the Franco-Prussian War, the German princes proclaimed the founding of the German Empire in 1871. Prussia was the dominant constituent state of the new empire; the King of Prussia ruled as its Kaiser, and Berlin became its capital.[46][47]
In the Gründerzeit period following the unification of Germany, Bismarck's foreign policy as Chancellor of Germany secured Germany's position as a great nation by forging alliances and avoiding war.[47] However, under Wilhelm II, Germany took an imperialistic course, leading to friction with neighbouring countries.[48] A dual alliance was created with the multinational realm of Austria-Hungary; the Triple Alliance of 1882 included Italy. Britain, France and Russia also concluded alliances to protect against Habsburg interference with Russian interests in the Balkans or German interference against France.[49] At the Berlin Conference in 1884, Germany claimed several colonies including German East Africa, German South West Africa, Togoland, and Kamerun.[50] Later, Germany further expanded its colonial empire to include holdings in the Pacific and China.[51] The colonial government in South West Africa (present-day Namibia), from 1904 to 1907, carried out the annihilation of the local Herero and Namaqua peoples as punishment for an uprising;[52][53] this was the 20th century's first genocide.[53]
The assassination of Austria's crown prince on 28 June 1914 provided the pretext for Austria-Hungary to attack Serbia and trigger World War I. After four years of warfare, in which approximately two million German soldiers were killed,[54] a general armistice ended the fighting. In the German Revolution (November 1918), Emperor Wilhelm II and the ruling princes abdicated their positions and Germany was declared a federal republic. Germany's new leadership signed the Treaty of Versailles in 1919, accepting defeat by the Allies. Germans perceived the treaty as humiliating, which was seen by historians as influential in the rise of Adolf Hitler.[55] Germany lost around 13% of its European territory and ceded all of its colonial possessions in Africa and the South Sea.[56]
On 11 August 1919, President Friedrich Ebert signed the democratic Weimar Constitution.[57] In the subsequent struggle for power, communists seized power in Bavaria, but conservative elements elsewhere attempted to overthrow the Republic in the Kapp Putsch. Street fighting in the major industrial centres, the occupation of the Ruhr by Belgian and French troops, and a period of hyperinflation followed. A debt restructuring plan and the creation of a new currency in 1924 ushered in the Golden Twenties, an era of artistic innovation and liberal cultural life.[58][59][60]
The worldwide Great Depression hit Germany in 1929. Chancellor Heinrich Brüning's government pursued a policy of fiscal austerity and deflation which caused unemployment of nearly 30% by 1932.[61] The Nazi Party led by Adolf Hitler won a special election in 1932 and Hindenburg appointed Hitler as Chancellor of Germany on 30 January 1933.[62] After the Reichstag fire, a decree abrogated basic civil rights and the first Nazi concentration camp opened.[63][64] The Enabling Act gave Hitler unrestricted legislative power, overriding the constitution;[65] his government established a centralised totalitarian state, withdrew from the League of Nations, and dramatically increased the country's rearmament.[66] A government-sponsored programme for economic renewal focused on public works, the most famous of which was the autobahn.[67]
In 1935, the regime withdrew from the Treaty of Versailles and introduced the Nuremberg Laws which targeted Jews and other minorities.[68] Germany also reacquired control of the Saarland in 1935,[69] remilitarised the Rhineland in 1936, annexed Austria in 1938, annexed the Sudetenland in 1938 with the Munich Agreement, and in violation of the agreement occupied Czechoslovakia in March 1939.[70] Kristallnacht saw the burning of synagogues, the destruction of Jewish businesses, and mass arrests of Jewish people.[71]
In August 1939, Hitler's government negotiated the Molotov–Ribbentrop pact that divided Eastern Europe into German and Soviet spheres of influence.[72] On 1 September 1939, Germany invaded Poland, beginning World War II in Europe;[73] Britain and France declared war on Germany on 3 September.[74] In the spring of 1940, Germany conquered Denmark and Norway, the Netherlands, Belgium, Luxembourg, and France, forcing the French government to sign an armistice. The British repelled German air attacks in the Battle of Britain in the same year. In 1941, German troops invaded Yugoslavia, Greece and the Soviet Union. By 1942, Germany and her allies controlled most of continental Europe and North Africa, but following the Soviet victory at the Battle of Stalingrad, the allies' reconquest of North Africa and invasion of Italy in 1943, German forces suffered repeated military defeats. In 1944, the Soviets pushed into Eastern Europe; the Western allies landed in France and entered Germany despite a final German counteroffensive. Following Hitler's suicide during the Battle of Berlin, Germany surrendered on 8 May 1945, ending World War II in Europe.[73][75] Following the end of the war, surviving Nazi officials were tried for war crimes at the Nuremberg trials.[76][77]
In what later became known as the Holocaust, the German government persecuted minorities, including interning them in concentration and death camps across Europe. In total 17 million people were systematically murdered, including 6 million Jews, at least 130,000 Romani, 275,000 persons with disabilities, thousands of Jehovah's Witnesses, thousands of homosexuals, and hundreds of thousands of political and religious opponents.[78] Nazi policies in German-occupied countries resulted in the deaths of 2.7 million Poles,[79] 1.3 million Ukrainians, 1 million Belarusians[80] and 3.5 million Soviet prisoners of war.[80][76] German military casualties have been estimated at 5.3 million,[81] and around 900,000 German civilians died.[82] Around 12 million ethnic Germans were expelled from across Eastern Europe, and Germany lost roughly one-quarter of its pre-war territory.[83]
After Nazi Germany surrendered, the Allies partitioned Berlin and Germany's remaining territory into four occupation zones. The western sectors, controlled by France, the United Kingdom, and the United States, were merged on 23 May 1949 to form the Federal Republic of Germany (Bundesrepublik Deutschland (BRD)); on 7 October 1949, the Soviet Zone became the German Democratic Republic (Deutsche Demokratische Republik (DDR)). They were informally known as West Germany and East Germany.[85] East Germany selected East Berlin as its capital, while West Germany chose Bonn as a provisional capital, to emphasise its stance that the two-state solution was temporary.[86]
West Germany was established as a federal parliamentary republic with a "social market economy". Starting in 1948 West Germany became a major recipient of reconstruction aid under the Marshall Plan.[87] Konrad Adenauer was elected the first Federal Chancellor of Germany in 1949. The country enjoyed prolonged economic growth (Wirtschaftswunder) beginning in the early 1950s.[88] West Germany joined NATO in 1955 and was a founding member of the European Economic Community.[89]
East Germany was an Eastern Bloc state under political and military control by the USSR via occupation forces and the Warsaw Pact. Although East Germany claimed to be a democracy, political power was exercised solely by leading members (Politbüro) of the communist-controlled Socialist Unity Party of Germany, supported by the Stasi, an immense secret service.[90] While East German propaganda was based on the benefits of the GDR's social programmes and the alleged threat of a West German invasion, many of its citizens looked to the West for freedom and prosperity.[91] The Berlin Wall, built in 1961, prevented East German citizens from escaping to West Germany, becoming a symbol of the Cold War.[92]
Tensions between East and West Germany were reduced in the late 1960s by Chancellor Willy Brandt's Ostpolitik.[93] In 1989, Hungary decided to dismantle the Iron Curtain and open its border with Austria, causing the emigration of thousands of East Germans to West Germany via Hungary and Austria. This had devastating effects on the GDR, where regular mass demonstrations received increasing support. In an effort to help retain East Germany as a state, the East German authorities eased border restrictions, but this actually led to an acceleration of the Wende reform process culminating in the Two Plus Four Treaty under which Germany regained full sovereignty. This permitted German reunification on 3 October 1990, with the accession of the five re-established states of the former GDR.[94] The fall of the Wall in 1989 became a symbol of the Fall of Communism, the Dissolution of the Soviet Union, German Reunification and Die Wende.[95]
United Germany was considered the enlarged continuation of West Germany so it retained its memberships in international organisations.[96] Based on the Berlin/Bonn Act (1994), Berlin again became the capital of Germany, while Bonn obtained the unique status of a Bundesstadt (federal city) retaining some federal ministries.[97] The relocation of the government was completed in 1999, and modernisation of the east German economy was scheduled to last until 2019.[98][99]
Since reunification, Germany has taken a more active role in the European Union, signing the Maastricht Treaty in 1992 and the Lisbon Treaty in 2007,[100] and co-founding the Eurozone.[101] Germany sent a peacekeeping force to secure stability in the Balkans and sent German troops to Afghanistan as part of a NATO effort to provide security in that country after the ousting of the Taliban.[102][103]
In the 2005 elections, Angela Merkel became the first female chancellor. In 2009 the German government approved a €50 billion stimulus plan.[104] Among the major German political projects of the early 21st century are the advancement of European integration, the energy transition (Energiewende) for a sustainable energy supply, the "Debt Brake" for balanced budgets, measures to increase the fertility rate (pronatalism), and high-tech strategies for the transition of the German economy, summarised as Industry 4.0.[105] Germany was affected by the European migrant crisis in 2015: the country took in over a million migrants and developed a quota system which redistributed migrants around its federal states.[106]
Germany is in Western and Central Europe, bordering Denmark to the north, Poland and the Czech Republic to the east, Austria to the southeast, and Switzerland to the south-southwest. France, Luxembourg and Belgium are situated to the west, with the Netherlands to the northwest. Germany is also bordered by the North Sea and, at the north-northeast, by the Baltic Sea. German territory covers 357,022 km2 (137,847 sq mi), consisting of 348,672 km2 (134,623 sq mi) of land and 8,350 km2 (3,224 sq mi) of water. It is the seventh largest country by area in Europe and the 62nd largest in the world.[4]
Elevation ranges from the mountains of the Alps (highest point: the Zugspitze at 2,963 metres or 9,721 feet) in the south to the shores of the North Sea (Nordsee) in the northwest and the Baltic Sea (Ostsee) in the northeast. The forested uplands of central Germany and the lowlands of northern Germany (lowest point: Wilstermarsch at 3.54 metres or 11.6 feet below sea level) are traversed by such major rivers as the Rhine, Danube and Elbe. Significant natural resources include iron ore, coal, potash, timber, lignite, uranium, copper, natural gas, salt, and nickel.[4]
Most of Germany has a temperate climate, ranging from oceanic in the north to continental in the east and southeast. Winters range from cold in the southern Alps to mild and are generally overcast with limited precipitation, while summers can vary from hot and dry to cool and rainy. The northern regions have prevailing westerly winds that bring in moist air from the North Sea, moderating the temperature and increasing precipitation. Conversely, the southeast regions have more extreme temperatures.[107]
Between February 2019 and February 2020, average monthly temperatures in Germany ranged from a low of 3.3 °C (37.9 °F) in January 2020 to a high of 19.8 °C (67.6 °F) in June 2019.[108] Average monthly precipitation ranged from 30 litres per square metre in February and April 2019 to 125 litres per square metre in February 2020.[109] Average monthly hours of sunshine ranged from 45 in November 2019 to 300 in June 2019.[110]
The territory of Germany can be divided into two ecoregions: European-Mediterranean montane mixed forests and Northeast-Atlantic shelf marine.[111] As of 2016[update] 51% of Germany's land area is devoted to agriculture, while 30% is forested and 14% is covered by settlements or infrastructure.[112]
Plants and animals include those generally common to Central Europe. According to the National Forest Inventory, beeches, oaks, and other deciduous trees constitute just over 40% of the forests; roughly 60% are conifers, particularly spruce and pine.[113] There are many species of ferns, flowers, fungi, and mosses. Wild animals include roe deer, wild boar, mouflon (a subspecies of wild sheep), fox, badger, hare, and small numbers of the Eurasian beaver.[114] The blue cornflower was once a German national symbol.[115]
The 16 national parks in Germany include the Jasmund National Park, the Vorpommern Lagoon Area National Park, the Müritz National Park, the Wadden Sea National Parks, the Harz National Park, the Hainich National Park, the Black Forest National Park, the Saxon Switzerland National Park, the Bavarian Forest National Park and the Berchtesgaden National Park.[116] In addition, there are 17 Biosphere Reserves[117] and 105 nature parks.[118] More than 400 zoos and animal parks operate in Germany.[119] The Berlin Zoo, which opened in 1844, is the oldest in Germany, and claims the most comprehensive collection of species in the world.[120]
Germany is a federal, parliamentary, representative democratic republic. Federal legislative power is vested in the parliament consisting of the Bundestag (Federal Diet) and Bundesrat (Federal Council), which together form the legislative body. The Bundestag is elected through direct elections: half by majority vote and half by proportional representation. The members of the Bundesrat represent and are appointed by the governments of the sixteen federated states.[4] The German political system operates under a framework laid out in the 1949 constitution known as the Grundgesetz (Basic Law). Amendments generally require a two-thirds majority of both the Bundestag and the Bundesrat; the fundamental principles of the constitution, as expressed in the articles guaranteeing human dignity, the separation of powers, the federal structure, and the rule of law, are valid in perpetuity.[121]
The president, currently Frank-Walter Steinmeier, is the head of state and invested primarily with representative responsibilities and powers. He is elected by the Bundesversammlung (federal convention), an institution consisting of the members of the Bundestag and an equal number of state delegates.[4] The second-highest official in the German order of precedence is the Bundestagspräsident (president of the Bundestag), who is elected by the Bundestag and responsible for overseeing the daily sessions of the body.[122] The third-highest official and the head of government is the chancellor, who is appointed by the Bundespräsident after being elected by the party or coalition with the most seats in the Bundestag.[4] The chancellor, currently Angela Merkel, is the head of government and exercises executive power through their Cabinet.[4]
Since 1949, the party system has been dominated by the Christian Democratic Union and the Social Democratic Party of Germany. So far every chancellor has been a member of one of these parties. However, the smaller liberal Free Democratic Party and the Alliance '90/The Greens have also achieved some success. Since 2007, the left-wing populist party The Left has been a staple in the German Bundestag, though it has never been part of the federal government. In the 2017 German federal election, the right-wing populist Alternative for Germany gained enough votes to attain representation in the parliament for the first time.[123][124]
Germany comprises sixteen federal states which are collectively referred to as Bundesländer.[125] Each state has its own state constitution,[126] and is largely autonomous in regard to its internal organisation. As of 2017[update] Germany is divided into 401 districts (Kreise) at a municipal level; these consist of 294 rural districts and 107 urban districts.[127]
Germany has a civil law system based on Roman law with some references to Germanic law.[131] The Bundesverfassungsgericht (Federal Constitutional Court) is the German Supreme Court responsible for constitutional matters, with power of judicial review.[132] Germany's supreme court system is specialised: for civil and criminal cases, the highest court of appeal is the inquisitorial Federal Court of Justice, and for other affairs the courts are the Federal Labour Court, the Federal Social Court, the Federal Finance Court and the Federal Administrative Court.[133]
Criminal and private laws are codified on the national level in the Strafgesetzbuch and the Bürgerliches Gesetzbuch respectively. The German penal system seeks the rehabilitation of the criminal and the protection of the public.[134] Except for petty crimes, which are tried before a single professional judge, and serious political crimes, all charges are tried before mixed tribunals on which lay judges (Schöffen) sit side by side with professional judges.[135][136]
Germany has a low murder rate with 1.18 murders per 100,000 as of 2016[update].[137] In 2018, the overall crime rate fell to its lowest since 1992.[138]
Germany has a network of 227 diplomatic missions abroad[140] and maintains relations with more than 190 countries.[141] Germany is a member of NATO, the OECD, the G8, the G20, the World Bank and the IMF. It has played an influential role in the European Union since its inception and has maintained a strong alliance with France and all neighbouring countries since 1990. Germany promotes the creation of a more unified European political, economic and security apparatus.[142][143][144] The governments of Germany and the United States are close political allies.[145] Cultural ties and economic interests have crafted a bond between the two countries resulting in Atlanticism.[146]
The development policy of Germany is an independent area of foreign policy. It is formulated by the Federal Ministry for Economic Cooperation and Development and carried out by the implementing organisations. The German government sees development policy as a joint responsibility of the international community.[147] It was the world's second biggest aid donor in 2019 after the United States.[148]
Germany's military, the Bundeswehr, is organised into the Heer (Army and special forces KSK), Marine (Navy), Luftwaffe (Air Force), Zentraler Sanitätsdienst der Bundeswehr (Joint Medical Service) and Streitkräftebasis (Joint Support Service) branches. In absolute terms, German military expenditure is the 8th highest in the world.[149] In 2018, military spending was at $49.5 billion, about 1.2% of the country's GDP, well below the NATO target of 2%.[150]
As of January 2020[update], the Bundeswehr has a strength of 184,001 active soldiers and 80,947 civilians.[151] Reservists are available to the armed forces and participate in defence exercises and deployments abroad.[152] Until 2011, military service was compulsory for men at age 18, but this has been officially suspended and replaced with a voluntary service.[153][154] Since 2001 women may serve in all functions of service without restriction.[155] According to SIPRI, Germany was the fourth largest exporter of major arms in the world from 2014 to 2018.[156]
In peacetime, the Bundeswehr is commanded by the Minister of Defence. In a state of defence, the Chancellor would become commander-in-chief of the Bundeswehr.[157] The role of the Bundeswehr is described in the Constitution of Germany as defensive only, but after a ruling of the Federal Constitutional Court in 1994 the term "defence" has been defined to include not only protection of the borders of Germany, but also crisis reaction and conflict prevention, or more broadly the guarding of the security of Germany anywhere in the world. As of 2017[update], the German military has about 3,600 troops stationed in foreign countries as part of international peacekeeping forces, including about 1,200 supporting operations against Daesh, 980 in the NATO-led Resolute Support Mission in Afghanistan, and 800 in Kosovo.[158]
Germany has a social market economy with a highly skilled labour force, a low level of corruption, and a high level of innovation.[4][160][161] It is the world's third largest exporter of goods,[4] and has the largest national economy in Europe which is also the world's fourth largest by nominal GDP[162] and the fifth by PPP.[163] Its GDP per capita measured in purchasing power standards amounts to 121% of the EU27 average (100%).[164] The service sector contributes approximately 69% of the total GDP, industry 31%, and agriculture 1% as of 2017[update].[4] The unemployment rate published by Eurostat amounts to 3.2% as of January 2020[update], which is the fourth-lowest in the EU.[165]
Germany is part of the European single market which represents more than 450 million consumers.[166] In 2017, the country accounted for 28% of the Eurozone economy according to the International Monetary Fund.[167] Germany introduced the common European currency, the Euro, in 2002.[168] Its monetary policy is set by the European Central Bank, which is headquartered in Frankfurt.[169][159]
Germany is home to the modern car; its automotive industry is regarded as one of the most competitive and innovative in the world,[170] and is the fourth largest by production.[171] The top 10 exports of Germany are vehicles, machinery, chemical goods, electronic products, electrical equipment, pharmaceuticals, transport equipment, basic metals, food products, and rubber and plastics.[172] Germany is one of the largest exporters globally.[173]
Of the world's 500 largest stock-market-listed companies measured by revenue in 2019, the Fortune Global 500, 29 are headquartered in Germany.[174] 30 major Germany-based companies are included in the DAX, the German stock market index which is operated by Frankfurt Stock Exchange.[175] Well-known international brands include Mercedes-Benz, BMW, Volkswagen, Audi, Siemens, Allianz, Adidas, Porsche, Bosch and Deutsche Telekom.[176] Berlin is a hub for startup companies and has become the leading location for venture capital funded firms in the European Union.[177] Germany is recognised for its large portion of specialised small and medium enterprises, known as the Mittelstand model.[178] These companies represent 48% of the global market leaders in their segments and are labelled Hidden Champions.[179]
Research and development efforts form an integral part of the German economy.[180] In 2018 Germany ranked fourth globally in terms of number of science and engineering research papers published.[181] Research institutions in Germany include the Max Planck Society, the Helmholtz Association, the Fraunhofer Society and the Leibniz Association.[182] Germany is the largest contributor to the European Space Agency.[183]
With its central position in Europe, Germany is a transport hub for the continent.[184] Its road network is among the densest in Europe.[185] The motorway (Autobahn) is widely known for having no federally mandated speed limit for some classes of vehicles.[186] The InterCityExpress or ICE train network serves major German cities as well as destinations in neighbouring countries with speeds up to 300 km/h (190 mph).[187] The largest German airports are Frankfurt Airport and Munich Airport.[188] The Port of Hamburg is one of the top twenty largest container ports in the world.[189]
In 2015[update], Germany was the world's seventh-largest consumer of energy.[190] The government and the nuclear power industry agreed to phase out all nuclear power plants by 2021.[191] Renewable sources meet about 40% of the country's power demands.[192] Germany is committed to the Paris Agreement and several other treaties promoting biodiversity, low emission standards, and water management.[193][194][195] The country's household recycling rate is among the highest in the world, at around 65%.[196] Nevertheless, the country's total greenhouse gas emissions were the highest in the EU in 2017[update].[197] The German energy transition (Energiewende) is the recognised move to a sustainable economy by means of energy efficiency and renewable energy.[198]
Germany is the ninth most visited country in the world as of 2017[update], with 37.4 million visits.[199] Berlin has become the third most visited city destination in Europe.[200] Domestic and international travel and tourism combined directly contribute over €105.3 billion to German GDP. Including indirect and induced impacts, the industry supports 4.2 million jobs.[201]
Germany's most visited and popular landmarks include Cologne Cathedral, the Brandenburg Gate, the Reichstag, the Dresden Frauenkirche, Neuschwanstein Castle, Heidelberg Castle, the Wartburg, and Sanssouci Palace.[202] The Europa-Park near Freiburg is Europe's second most popular theme park resort.[203]
With a population of 80.2 million according to the 2011 census,[204] rising to 83.1 million as of 2019[update],[5] Germany is the most populous country in the European Union, the second most populous country in Europe after Russia, and the 19th most populous country in the world. Its population density stands at 227 inhabitants per square kilometre (588 per square mile). The overall life expectancy in Germany at birth is 80.19 years (77.93 years for males and 82.58 years for females).[4] The fertility rate of 1.41 children born per woman (2011 estimates) is below the replacement rate of 2.1 and is one of the lowest fertility rates in the world.[4] Since the 1970s, Germany's death rate has exceeded its birth rate. However, Germany has witnessed increased birth and migration rates since the beginning of the 2010s, particularly a rise in the number of well-educated migrants. Germany has the third oldest population in the world, with an average age of 47.4 years.[4]
Four sizeable groups of people are referred to as "national minorities" because their ancestors have lived in their respective regions for centuries:[205] There is a Danish minority in the northernmost state of Schleswig-Holstein;[205] the Sorbs, a Slavic population, are in the Lusatia region of Saxony and Brandenburg; the Roma and Sinti live throughout the country; and the Frisians are concentrated in Schleswig-Holstein's western coast and in the north-western part of Lower Saxony.[205]
After the United States, Germany is the second most popular immigration destination in the world. The majority of migrants live in western Germany, in particular in urban areas. Of the country's residents, 18.6 million people (22.5%) were of immigrant or partially immigrant descent in 2016 (including persons descending or partially descending from ethnic German repatriates).[206] In 2015, the Population Division of the United Nations Department of Economic and Social Affairs listed Germany as host to the second-highest number of international migrants worldwide, about 5% or 12 million of all 244 million migrants.[207] As of 2018[update], Germany ranks fifth amongst EU countries in terms of the percentage of migrants in the country's population, at 12.9%.[208]
Germany has a number of large cities. There are 11 officially recognised metropolitan regions. The country's largest city is Berlin, while its largest urban area is the Ruhr.[209]
The 2011 German Census showed Christianity as the largest religion in Germany, with 66.8% identifying themselves as Christian, 3.8% of whom were not church members.[210] 31.7% declared themselves as Protestants, including members of the Evangelical Church in Germany (which encompasses Lutheran, Reformed and administrative or confessional unions of both traditions) and the free churches (German: Evangelische Freikirchen); 31.2% declared themselves as Roman Catholics, and Orthodox believers constituted 1.3%. According to data from 2016, the Catholic Church and the Evangelical Church claimed 28.5% and 27.5%, respectively, of the population.[211][212] Islam is the second largest religion in the country.[213] In the 2011 census, 1.9% of the census population (1.52 million people) gave their religion as Islam, but this figure is deemed unreliable because a disproportionate number of adherents of this religion (and other religions, such as Judaism) are likely to have made use of their right not to answer the question.[214] Most of the Muslims are Sunnis and Alevites from Turkey, but there are a small number of Shi'ites, Ahmadiyyas and other denominations. Other religions comprise less than one percent of Germany's population.[213]
A study in 2018 estimated that 38% of the population are not members of any religious organization or denomination,[215] though up to a third may still consider themselves religious. Irreligion in Germany is strongest in the former East Germany, which used to be predominantly Protestant before state atheism, and in major metropolitan areas.[216][217]
German is the official and predominant spoken language in Germany.[218] It is one of 24 official and working languages of the European Union, and one of the three procedural languages of the European Commission.[219] German is the most widely spoken first language in the European Union, with around 100 million native speakers.[220]
Recognised native minority languages in Germany are Danish, Low German, Low Rhenish, Sorbian, Romany, North Frisian and Saterland Frisian; they are officially protected by the European Charter for Regional or Minority Languages. The most used immigrant languages are Turkish, Arabic, Kurdish, Polish, the Balkan languages and Russian. Germans are typically multilingual: 67% of German citizens claim to be able to communicate in at least one foreign language and 27% in at least two.[218]
Responsibility for educational supervision in Germany is primarily organised within the individual federal states. Optional kindergarten education is provided for all children between three and six years old, after which school attendance is compulsory for at least nine years. Primary education usually lasts for four to six years.[221] Secondary schooling is divided into tracks based on whether students pursue academic or vocational education.[222] A system of apprenticeship called Duale Ausbildung leads to a skilled qualification which is almost comparable to an academic degree. It allows students in vocational training to learn in a company as well as in a state-run trade school.[221] This model is well regarded and reproduced all around the world.[223]
Most of the German universities are public institutions, and students traditionally study without fee payment.[224] The general requirement for university is the Abitur. According to an OECD report in 2014, Germany is the world's third leading destination for international study.[225] The established universities in Germany include some of the oldest in the world, with Heidelberg University (established in 1386) being the oldest.[226] The Humboldt University of Berlin, founded in 1810 by the liberal educational reformer Wilhelm von Humboldt, became the academic model for many Western universities.[227][228] In the contemporary era Germany has developed eleven Universities of Excellence.
Germany's system of hospitals, called Krankenhäuser, dates from medieval times, and today, Germany has the world's oldest universal health care system, dating from Bismarck's social legislation of the 1880s.[230] Since the 1880s, reforms and provisions have ensured a balanced health care system. The population is covered by a health insurance plan provided by statute, with criteria allowing some groups to opt for a private health insurance contract. According to the World Health Organization, Germany's health care system was 77% government-funded and 23% privately funded as of 2013[update].[231] In 2014, Germany spent 11.3% of its GDP on health care.[232]
Germany ranked 20th in the world in 2013 in life expectancy with 77 years for men and 82 years for women, and it had a very low infant mortality rate (4 per 1,000 live births). In 2019[update], the principal cause of death was cardiovascular disease, at 37%.[233] Obesity in Germany has been increasingly cited as a major health issue. A 2014 study showed that 52 percent of the adult German population was overweight or obese.[234]
Culture in German states has been shaped by major intellectual and popular currents in Europe, both religious and secular. Historically, Germany has been called Das Land der Dichter und Denker ("the land of poets and thinkers"),[235] because of the major role its writers and philosophers have played in the development of Western thought.[236] A global opinion poll for the BBC revealed that Germany is recognised for having the most positive influence in the world in 2013 and 2014.[237][238]
Germany is well known for such folk festival traditions as Oktoberfest and Christmas customs, which include Advent wreaths, Christmas pageants, Christmas trees, Stollen cakes, and other practices.[239][240] As of 2016[update] UNESCO inscribed 41 properties in Germany on the World Heritage List.[241] There are a number of public holidays in Germany determined by each state; 3 October has been a national day of Germany since 1990, celebrated as the Tag der Deutschen Einheit (German Unity Day).[242]
German classical music includes works by some of the world's most well-known composers. Dieterich Buxtehude, Johann Sebastian Bach and Georg Friedrich Händel were influential composers of the Baroque period. Ludwig van Beethoven was a crucial figure in the transition between the Classical and Romantic eras. Carl Maria von Weber, Felix Mendelssohn, Robert Schumann and Johannes Brahms were significant Romantic composers. Richard Wagner was known for his operas. Richard Strauss was a leading composer of the late Romantic and early modern eras. Karlheinz Stockhausen and Wolfgang Rihm are important composers of the 20th and early 21st centuries.[243]
As of 2013, Germany was the second largest music market in Europe, and fourth largest in the world.[244] German popular music of the 20th and 21st centuries includes the movements of Neue Deutsche Welle, pop, Ostrock, heavy metal/rock, punk, pop rock, indie and schlager pop. German electronic music gained global influence, with Kraftwerk and Tangerine Dream pioneering in this genre.[245] DJs and artists of the techno and house music scenes of Germany have become well known (e.g. Paul van Dyk, Paul Kalkbrenner, and Scooter).[246]
German painters have influenced western art. Albrecht Dürer, Hans Holbein the Younger, Matthias Grünewald and Lucas Cranach the Elder were important German artists of the Renaissance, Peter Paul Rubens and Johann Baptist Zimmermann of the Baroque, Caspar David Friedrich and Carl Spitzweg of Romanticism, Max Liebermann of Impressionism and Max Ernst of Surrealism. Several German art groups formed in the 20th century; Die Brücke (The Bridge) and Der Blaue Reiter (The Blue Rider) influenced the development of expressionism in Munich and Berlin. The New Objectivity arose in response to expressionism during the Weimar Republic. After World War II, broad trends in German art include neo-expressionism and the New Leipzig School.[247]
Architectural contributions from Germany include the Carolingian and Ottonian styles, which were precursors of Romanesque. Brick Gothic is a distinctive medieval style that evolved in Germany. Also in Renaissance and Baroque art, regional and typically German elements evolved (e.g. Weser Renaissance).[247] Vernacular architecture in Germany is often identified by its timber framing (Fachwerk) traditions and varies across regions, and among carpentry styles.[248] When industrialisation spread across Europe, Classicism and a distinctive style of historism developed in Germany, sometimes referred to as Gründerzeit style. Expressionist architecture developed in the 1910s in Germany and influenced Art Deco and other modern styles. Germany was particularly important in the early modernist movement: it is the home of Werkbund initiated by Hermann Muthesius (New Objectivity), and of the Bauhaus movement founded by Walter Gropius.[247] Ludwig Mies van der Rohe became one of the world's most renowned architects in the second half of the 20th century; he conceived of the glass façade skyscraper.[249] Renowned contemporary architects and offices include Pritzker Prize winners Gottfried Böhm and Frei Otto.[250]
German designers became early leaders of modern product design.[251] The Berlin Fashion Week and the fashion trade fair Bread & Butter are held twice a year.[252]
German literature can be traced back to the Middle Ages and the works of writers such as Walther von der Vogelweide and Wolfram von Eschenbach. Well-known German authors include Johann Wolfgang von Goethe, Friedrich Schiller, Gotthold Ephraim Lessing and Theodor Fontane. The collections of folk tales published by the Brothers Grimm popularised German folklore on an international level.[253] The Grimms also gathered and codified regional variants of the German language, grounding their work in historical principles; their Deutsches Wörterbuch, or German Dictionary, sometimes called the Grimm dictionary, was begun in 1838 and the first volumes published in 1854.[254]
Influential authors of the 20th century include Gerhart Hauptmann, Thomas Mann, Hermann Hesse, Heinrich Böll and Günter Grass.[255] The German book market is the third largest in the world, after the United States and China.[256] The Frankfurt Book Fair is the most important in the world for international deals and trading, with a tradition spanning over 500 years.[257] The Leipzig Book Fair also retains a major position in Europe.[258]
German philosophy is historically significant: Gottfried Leibniz's contributions to rationalism; the Enlightenment philosophy of Immanuel Kant; the establishment of classical German idealism by Johann Gottlieb Fichte, Georg Wilhelm Friedrich Hegel and Friedrich Wilhelm Joseph Schelling; Arthur Schopenhauer's composition of metaphysical pessimism; the formulation of communist theory by Karl Marx and Friedrich Engels; Friedrich Nietzsche's development of perspectivism; Gottlob Frege's contributions to the dawn of analytic philosophy; Martin Heidegger's works on Being; Oswald Spengler's historical philosophy; and the development of the Frankfurt School have all been particularly influential.[259]
The largest internationally operating media companies in Germany are the Bertelsmann enterprise, Axel Springer SE and ProSiebenSat.1 Media. Germany's television market is the largest in Europe, with some 38 million TV households.[260] Around 90% of German households have cable or satellite TV, with a variety of free-to-view public and commercial channels.[261] There are more than 300 public and private radio stations in Germany; Germany's national radio network is the Deutschlandradio and the public Deutsche Welle is the main German radio and television broadcaster in foreign languages.[261] Germany's print market of newspapers and magazines is the largest in Europe.[261] The papers with the highest circulation are Bild, Süddeutsche Zeitung, Frankfurter Allgemeine Zeitung and Die Welt.[261] The largest magazines include ADAC Motorwelt and Der Spiegel.[261] Germany has a large video gaming market, with over 34 million players nationwide.[262]
German cinema has made major technical and artistic contributions to film. The first works of the Skladanowsky Brothers were shown to an audience in 1895. The renowned Babelsberg Studio in Potsdam was established in 1912, thus being the first large-scale film studio in the world. Early German cinema was particularly influential with German expressionists such as Robert Wiene and Friedrich Wilhelm Murnau. Director Fritz Lang's Metropolis (1927) is referred to as the first major science-fiction film. After 1945, many of the films of the immediate post-war period can be characterised as Trümmerfilm (rubble film). East German film was dominated by state-owned film studio DEFA, while the dominant genre in West Germany was the Heimatfilm ("homeland film").[263] During the 1970s and 1980s, New German Cinema directors such as Volker Schlöndorff, Werner Herzog, Wim Wenders, and Rainer Werner Fassbinder brought West German auteur cinema to critical acclaim.
The Academy Award for Best Foreign Language Film ("Oscar") went to the German production Die Blechtrommel (The Tin Drum) in 1979, to Nirgendwo in Afrika (Nowhere in Africa) in 2002, and to Das Leben der Anderen (The Lives of Others) in 2007. Various Germans won an Oscar for their performances in other films. The annual European Film Awards ceremony is held every other year in Berlin, home of the European Film Academy. The Berlin International Film Festival, known as "Berlinale", awarding the "Golden Bear" and held annually since 1951, is one of the world's leading film festivals. The "Lolas" are annually awarded in Berlin, at the German Film Awards.[264]
German cuisine varies from region to region and often neighbouring regions share some culinary similarities (e.g. the southern regions of Bavaria and Swabia share some traditions with Switzerland and Austria). International varieties such as pizza, sushi, Chinese food, Greek food, Indian cuisine and doner kebab are also popular.
Bread is a significant part of German cuisine and German bakeries produce about 600 main types of bread and 1,200 types of pastries and rolls (Brötchen).[265] German cheeses account for about 22% of all cheese produced in Europe.[266] In 2012 over 99% of all meat produced in Germany was either pork, chicken or beef. Germans produce their ubiquitous sausages in almost 1,500 varieties, including Bratwursts and Weisswursts.[267] Although wine is becoming more popular in many parts of Germany, especially close to German wine regions,[268] the national alcoholic drink is beer. German beer consumption per person stood at 110 litres (24 imp gal; 29 US gal) in 2013 and remains among the highest in the world.[269] German beer purity regulations date back to the 16th century.[270]
The 2018 Michelin Guide awarded eleven restaurants in Germany three stars, giving the country a cumulative total of 300 stars.[271]
Football is the most popular sport in Germany. With more than 7 million official members, the German Football Association (Deutscher Fußball-Bund) is the largest single-sport organisation worldwide,[272] and the German top league, the Bundesliga, attracts the second highest average attendance of all professional sports leagues in the world.[273] The German men's national football team won the FIFA World Cup in 1954, 1974, 1990, and 2014,[274] the UEFA European Championship in 1972, 1980 and 1996,[275] and the FIFA Confederations Cup in 2017.[276]
Germany is one of the leading motor sports countries in the world. Constructors like BMW and Mercedes are prominent manufacturers in motor sport. Porsche has won the 24 Hours of Le Mans race 19 times, and Audi 13 times (as of 2017[update]). The driver Michael Schumacher has set many motor sport records during his career, having won seven Formula One World Drivers' Championships.[277] Sebastian Vettel is also among the top five most successful Formula One drivers of all time.[278]
Historically, German athletes have been successful contenders in the Olympic Games, ranking third in an all-time Olympic Games medal count (when combining East and West German medals). Germany was the last country to host both the summer and winter games in the same year, in 1936: the Berlin Summer Games and the Winter Games in Garmisch-Partenkirchen.[279] Munich hosted the Summer Games of 1972.[280]
|
en/1170.html.txt
ADDED
@@ -0,0 +1,209 @@
Oscar-Claude Monet (UK: /ˈmɒneɪ/, US: /moʊˈneɪ/,[1][2] French: [klod mɔnɛ]; 14 November 1840 – 5 December 1926) was a French painter, a founder of French Impressionist painting and the most consistent and prolific practitioner of the movement's philosophy of expressing one's perceptions before nature, especially as applied to plein air landscape painting.[3][4] The term "Impressionism" is derived from the title of his painting Impression, soleil levant (Impression, Sunrise), which was exhibited in 1874 in the first of the independent exhibitions mounted by Monet and his associates as an alternative to the Salon de Paris.[5]
Monet's ambition of documenting the French countryside led him to adopt a method of painting the same scene many times in order to capture the changing of light and the passing of the seasons.[6] From 1883, Monet lived in Giverny, where he purchased a house and property and began a vast landscaping project which included lily ponds that would become the subjects of his best-known works. He began painting the water lilies in 1899, first in vertical views with a Japanese bridge as a central feature and later in the series of large-scale paintings that was to occupy him continuously for the next 20 years of his life.
Claude Monet was born on 14 November 1840 on the fifth floor of 45 rue Laffitte, in the 9th arrondissement of Paris.[7] He was the second son of Claude Adolphe Monet and Louise Justine Aubrée Monet, both of them second-generation Parisians. On 20 May 1841, he was baptized in the local parish church, Notre-Dame-de-Lorette, as Oscar-Claude, but his parents called him simply Oscar.[7][8] (He signed his juvenilia "O. Monet".) Despite being baptized Catholic, Monet later became an atheist.[9][10]
In 1845, his family moved to Le Havre in Normandy. His father wanted him to go into the family's ship-chandling and grocery business,[11] but Monet wanted to become an artist. His mother was a singer, and supported Monet's desire for a career in art.[12]
On 1 April 1851, Monet entered Le Havre secondary school of the arts. Locals knew him well for his charcoal caricatures, which he would sell for ten to twenty francs. Monet also undertook his first drawing lessons from Jacques-François Ochard, a former student of Jacques-Louis David. On the beaches of Normandy around 1856 he met fellow artist Eugène Boudin, who became his mentor and taught him to use oil paints. Boudin taught Monet "en plein air" (outdoor) techniques for painting.[13] Both were influenced by Johan Barthold Jongkind.
On 28 January 1857, his mother died. At the age of sixteen, he left school and went to live with his widowed, childless aunt, Marie-Jeanne Lecadre.
When Monet traveled to Paris to visit the Louvre, he witnessed painters copying from the old masters. Having brought his paints and other tools with him, he would instead go and sit by a window and paint what he saw.[14] Monet was in Paris for several years and met other young painters, including Édouard Manet and others who would become friends and fellow Impressionists.
After drawing a low ballot number in March 1861, Monet was drafted into the First Regiment of African Light Cavalry (Chasseurs d'Afrique) in Algeria for a seven-year period of military service. His prosperous father could have purchased Monet's exemption from conscription but declined to do so when his son refused to give up painting. While in Algeria, Monet did only a few sketches of casbah scenes, a single landscape, and several portraits of officers, all of which have been lost. In a Le Temps interview of 1900, however, he commented that the light and vivid colours of North Africa "contained the germ of my future researches".[15] After about a year of garrison duty in Algiers, Monet contracted typhoid fever and briefly went absent without leave. Following convalescence, Monet's aunt intervened to remove him from the army if he agreed to complete a course at an art school. The Dutch painter Johan Barthold Jongkind, whom Monet knew, may have prompted his aunt on this matter.
Disillusioned with the traditional art taught at art schools, in 1862 Monet became a student of Charles Gleyre in Paris, where he met Pierre-Auguste Renoir, Frédéric Bazille and Alfred Sisley. Together they shared new approaches to art, painting the effects of light en plein air with broken colour and rapid brushstrokes, in what later came to be known as Impressionism.
In January 1865 Monet was working on a version of Le déjeuner sur l'herbe, aiming to present it for hanging at the Salon, which had rejected Manet's Le déjeuner sur l'herbe two years earlier.[17] Monet's painting was very large and could not be completed in time. (It was later cut up, with parts now in different galleries.) Monet submitted instead a painting of Camille or The Woman in the Green Dress (La femme à la robe verte), one of many works using his future wife, Camille Doncieux, as his model. Both this painting and a small landscape were hung.[17] The following year Monet used Camille for his model in Women in the Garden, and On the Bank of the Seine, Bennecourt in 1868. Camille became pregnant and gave birth to their first child, Jean, in 1867.[18] Monet and Camille married on 28 June 1870, just before the outbreak of the Franco-Prussian War,[19] and, after their excursion to London and Zaandam, they moved to Argenteuil, in December 1871. During this time Monet painted various works of modern life. He and Camille lived in poverty for most of this period. Following the successful exhibition of some maritime paintings, and the winning of a silver medal at Le Havre, Monet's paintings were seized by creditors, from whom they were bought back by a shipping merchant, Gaudibert, who was also a patron of Boudin.[17]
From the late 1860s, Monet and other like-minded artists met with rejection from the conservative Académie des Beaux-Arts, which held its annual exhibition at the Salon de Paris. During the latter part of 1873, Monet, Pierre-Auguste Renoir, Camille Pissarro, and Alfred Sisley organized the Société anonyme des artistes peintres, sculpteurs et graveurs (Anonymous Society of Painters, Sculptors, and Engravers) to exhibit their artworks independently. At their first exhibition, held in April 1874, Monet exhibited the work that was to give the group its lasting name. He was inspired by the style and subject matter of previous modern painters Camille Pissarro and Edouard Manet.[20]
Impression, Sunrise was painted in 1872, depicting a Le Havre port landscape. From the painting's title the art critic Louis Leroy, in his review, "L'Exposition des Impressionnistes," which appeared in Le Charivari, coined the term "Impressionism".[21] It was intended as disparagement but the Impressionists appropriated the term for themselves.[22][23]
After the outbreak of the Franco-Prussian War (19 July 1870), Monet and his family took refuge in England in September 1870,[24] where he studied the works of John Constable and Joseph Mallord William Turner, both of whose landscapes would serve to inspire Monet's innovations in the study of colour. In the spring of 1871, Monet's works were refused authorisation for inclusion in the Royal Academy exhibition.[19]
In May 1871, he left London to live in Zaandam, in the Netherlands,[19] where he made twenty-five paintings (and the police suspected him of revolutionary activities).[25] He also paid a first visit to nearby Amsterdam. In October or November 1871, he returned to France. From December 1871 to 1878 he lived at Argenteuil, a village on the right bank of the Seine river near Paris, and a popular Sunday-outing destination for Parisians, where he painted some of his best-known works. In 1873, Monet purchased a small boat equipped to be used as a floating studio.[26] From the boat studio Monet painted landscapes and also portraits of Édouard Manet and his wife; Manet in turn depicted Monet painting aboard the boat, accompanied by Camille, in 1874.[26] In 1874, he briefly returned to Holland.[27]
The first Impressionist exhibition was held in 1874 at 35 boulevard des Capucines, Paris, from 15 April to 15 May. The primary purpose of the participants was not so much to promote a new style, but to free themselves from the constraints of the Salon de Paris. The exhibition, open to anyone prepared to pay 60 francs, gave artists the opportunity to show their work without the interference of a jury.[28][29][30]
Renoir chaired the hanging committee and did most of the work himself, as other members failed to present themselves.[28][29]
In addition to Impression: Sunrise (pictured above), Monet presented four oil paintings and seven pastels. Among the paintings he displayed was The Luncheon (1868), which features Camille Doncieux and Jean Monet, and which had been rejected by the Paris Salon of 1870.[31] Also in this exhibition was a painting titled Boulevard des Capucines, a painting of the boulevard done from the photographer Nadar's apartment at no. 35. Monet painted the subject twice, and it is uncertain which of the two pictures, that now in the Pushkin Museum in Moscow, or that in the Nelson-Atkins Museum of Art in Kansas City, was the painting that appeared in the groundbreaking 1874 exhibition, though more recently the Moscow picture has been favoured.[32][33] Altogether, 165 works were exhibited in the exhibition, including 4 oils, 2 pastels and 3 watercolours by Morisot; 6 oils and 1 pastel by Renoir; 10 works by Degas; 5 by Pissarro; 3 by Cézanne; and 3 by Guillaumin. Several works were on loan, including Cézanne's Modern Olympia, Morisot's Hide and Seek (owned by Manet) and 2 landscapes by Sisley that had been purchased by Durand-Ruel.[28][29][30]
The total attendance is estimated at 3500, and some works did sell, though some exhibitors had placed their prices too high. Pissarro was asking 1000 francs for The Orchard and Monet the same for Impression: Sunrise, neither of which sold. Renoir failed to obtain the 500 francs he was asking for La Loge, but later sold it for 450 francs to Père Martin, dealer and supporter of the group.[28][29][30]
View at Rouelles, Le Havre 1858, Private collection; an early work showing the influence of Corot and Courbet
Mouth of the Seine at Honfleur, 1865, Norton Simon Foundation, Pasadena, CA; indicates the influence of Dutch maritime painting.[34]
Women in the Garden, 1866–1867, Musée d'Orsay, Paris.[35]
Woman in the Garden, 1867, Hermitage, St. Petersburg; a study in the effect of sunlight and shadow on colour
Garden at Sainte-Adresse ("Jardin à Sainte-Adresse"), 1867, Metropolitan Museum of Art, New York.[36]
The Luncheon, 1868, Städel, which features Camille Doncieux and Jean Monet, was rejected by the Paris Salon of 1870 but included in the first Impressionists' exhibition in 1874.[37]
La Grenouillère, 1869, Metropolitan Museum of Art, New York; a small plein-air painting created with broad strokes of intense colour.[38]
The Magpie, 1868–1869. Musée d'Orsay, Paris; one of Monet's early attempts at capturing the effect of snow on the landscape. See also Snow at Argenteuil.
Le port de Trouville (Breakwater at Trouville, Low Tide), 1870, Museum of Fine Arts, Budapest.[39]
La plage de Trouville, 1870, National Gallery, London. The left figure may be Camille, on the right possibly the wife of Eugène Boudin, whose beach scenes influenced Monet.[40]
Houses on the Achterzaan, 1871, Metropolitan Museum of Art, New York
Jean Monet on his hobby horse, 1872, Metropolitan Museum of Art, New York
Springtime 1872, Walters Art Museum
In 1876, Camille Monet became ill with tuberculosis. Their second son, Michel, was born on 17 March 1878. This second child weakened her already fading health. In the summer of that year, the family moved to the village of Vétheuil where they shared a house with the family of Ernest Hoschedé, a wealthy department store owner and patron of the arts. In 1878, Camille Monet was diagnosed with uterine cancer.[41][42][43] She died on 5 September 1879 at the age of thirty-two.[44][45]
Monet made a study in oils of his dead wife. Many years later, Monet confessed to his friend Georges Clemenceau that his need to analyse colours was both the joy and torment of his life. He explained,
I one day found myself looking at my beloved wife's dead face and just systematically noting the colours according to an automatic reflex!
John Berger describes the work as "a blizzard of white, grey, purplish paint ... a terrible blizzard of loss which will forever efface her features. In fact there can be very few death-bed paintings which have been so intensely felt or subjectively expressive."[46]
After several difficult months following the death of Camille, Monet began to create some of his best paintings of the 19th century. During the early 1880s, Monet painted several groups of landscapes and seascapes in what he considered to be campaigns to document the French countryside. These began to evolve into series of pictures in which he documented the same scene many times in order to capture the changing of light and the passing of the seasons.
Monet's friend Ernest Hoschedé became bankrupt, and left in 1878 for Belgium. After the death of Camille Monet in September 1879, and while Monet continued to live in the house in Vétheuil, Alice Hoschedé helped Monet to raise his two sons, Jean and Michel. She took them to Paris to live alongside her own six children,[47] Blanche (who married Jean Monet), Germaine, Suzanne, Marthe, Jean-Pierre, and Jacques. In the spring of 1880, Alice Hoschedé and all the children left Paris and rejoined Monet at Vétheuil.[48] In 1881, all of them moved to Poissy, which Monet hated. In April 1883, looking out the window of the little train between Vernon and Gasny, he discovered Giverny in Normandy.[47][49][50] Monet, Alice Hoschedé and the children moved to Vernon, then to the house in Giverny, where he planted a large garden and where he painted for much of the rest of his life. Following the death of her estranged husband, Monet married Alice Hoschedé in 1892.[13]
Camille Monet on a Garden Bench, 1873, Metropolitan Museum of Art, New York
The Artist's house at Argenteuil, 1873, The Art Institute of Chicago
Coquelicots, La promenade (Poppies), 1873, Musée d'Orsay, Paris
Argenteuil, 1874, National Gallery of Art, Washington D.C.
The Studio Boat, 1874, Kröller-Müller Museum, Otterlo, Netherlands
Woman with a Parasol - Madame Monet and Her Son, 1875
Flowers on the riverbank at Argenteuil, 1877, Pola Museum of Art, Japan
Arrival of the Normandy Train, Gare Saint-Lazare, 1877, The Art Institute of Chicago
Vétheuil in the Fog, 1879, Musée Marmottan Monet, Paris
Monet rented and eventually purchased a house and gardens in Giverny. At the beginning of May 1883, Monet and his large family rented the home and 8,000 square metres (2.0 acres) from a local landowner. The house was situated near the main road between the towns of Vernon and Gasny at Giverny. There was a barn that doubled as a painting studio, orchards and a small garden. The house was close enough to the local schools for the children to attend, and the surrounding landscape offered many suitable motifs for Monet's work.
The family worked and built up the gardens, and Monet's fortunes began to change for the better as his dealer, Paul Durand-Ruel, had increasing success in selling his paintings.[51] By November 1890, Monet was prosperous enough to buy the house, the surrounding buildings and the land for his gardens. During the 1890s, Monet built a greenhouse and a second studio, a spacious building well lit with skylights.
Monet wrote daily instructions to his gardener, precise designs and layouts for plantings, and invoices for his floral purchases and his collection of botany books. As Monet's wealth grew, his garden evolved. He remained its architect, even after he hired seven gardeners.[52]
Monet purchased additional land with a water meadow. In 1893 he began a vast landscaping project which included lily ponds that would become the subjects of his best-known works. White water lilies local to France were planted along with imported cultivars from South America and Egypt, resulting in a range of colours including yellow, blue and white lilies that turned pink with age.[53] In 1899 he began painting the water lilies, first in vertical views with a Japanese bridge as a central feature, and later in the series of large-scale paintings that was to occupy him continuously for the next 20 years of his life.[54] This scenery, with its alternating light and mirror-like reflections, became an integral part of his work. By the mid-1910s Monet had achieved:
"a completely new, fluid, and somewhat audacious style of painting in which the water-lily pond became the point of departure for an almost abstract art".
In the Garden, 1895, Collection E. G. Buehrle, Zürich
Agapanthus, between 1914 and 1926, Museum of Modern Art, New York
Flowering Arches, Giverny, 1913, Phoenix Art Museum
Water Lilies and the Japanese bridge, 1897–1899, Princeton University Art Museum
Water Lilies, 1906, Art Institute of Chicago
Water Lilies, Musée Marmottan Monet
Water Lilies, c. 1915, Neue Pinakothek, Munich
Water Lilies, c. 1915, Musée Marmottan Monet
Monet's second wife, Alice, died in 1911, and his oldest son Jean, who had married Alice's daughter Blanche, Monet's particular favourite, died in 1914.[13] After Alice died, Blanche looked after and cared for Monet. It was during this time that Monet began to develop the first signs of cataracts.[57]
During World War I, in which his younger son Michel served and his friend and admirer Georges Clemenceau led the French nation, Monet painted a series of weeping willow trees as homage to the French fallen soldiers. In 1923, he underwent two operations to remove his cataracts. The paintings done while the cataracts affected his vision have a general reddish tone, which is characteristic of the vision of cataract victims. It may also be that after surgery he was able to see certain ultraviolet wavelengths of light that are normally excluded by the lens of the eye; this may have had an effect on the colours he perceived. After his operations he even repainted some of these paintings, with bluer water lilies than before.[58]
Monet died of lung cancer on 5 December 1926 at the age of 86 and is buried in the Giverny church cemetery.[49] Monet had insisted that the occasion be simple; thus only about fifty people attended the ceremony.[59] At his funeral, his long-time friend Georges Clemenceau removed the black cloth draped over the coffin, stating, "No black for Monet!" and replaced it with a flower-patterned cloth.[60] Monet did not leave a will and so his son Michel inherited his entire estate.
Monet's home, garden, and waterlily pond were bequeathed by Michel to the French Academy of Fine Arts (part of the Institut de France) in 1966. Through the Fondation Claude Monet, the house and gardens were opened for visits in 1980, following restoration.[61] In addition to souvenirs of Monet and other objects of his life, the house contains his collection of Japanese woodcut prints. The house and garden, along with the Museum of Impressionism, are major attractions in Giverny, which hosts tourists from all over the world.
Water Lilies and Reflections of a Willow (1916–1919), Musée Marmottan Monet
Water-Lily Pond and Weeping Willow, 1916–1919, Sale Christie's New York, 1998
Weeping Willow, 1918–19, Columbus Museum of Art
Weeping Willow, 1918–19, Kimbell Art Museum, Fort Worth. Monet's Weeping Willow paintings were an homage to the fallen French soldiers of World War I.
House Among the Roses, between 1917 and 1919, Albertina, Vienna
The Rose Walk, Giverny, 1920–1922, Musée Marmottan Monet
The Japanese Footbridge, 1920–1922, Museum of Modern Art
The Garden at Giverny
Monet has been described as "the driving force behind Impressionism".[62] Crucial to the art of the Impressionist painters was the understanding of the effects of light on the local colour of objects, and the effects of the juxtaposition of colours with each other.[63] Monet's long career as a painter was spent in the pursuit of this aim.
In 1856, his chance meeting with Eugene Boudin, a painter of small beach scenes, opened his eyes to the possibility of plein-air painting. From that time, with a short interruption for military service, he dedicated himself to searching for new and improved methods of painterly expression. To this end, as a young man, he visited the Paris Salon and familiarised himself with the works of older painters, and made friends with other young artists.[62] The five years that he spent at Argenteuil, spending much time on the River Seine in a little floating studio, were formative in his study of the effects of light and reflections. He began to think in terms of colours and shapes rather than scenes and objects. He used bright colours in dabs and dashes and squiggles of paint. Having rejected the academic teachings of Gleyre's studio, he freed himself from theory, saying "I like to paint as a bird sings."[64]
In 1877 a series of paintings at St-Lazare Station had Monet looking at smoke and steam and the way that they affected colour and visibility, being sometimes opaque and sometimes translucent. He was to further use this study in the painting of the effects of mist and rain on the landscape.[65] The study of the effects of atmosphere was to evolve into a number of series of paintings in which Monet repeatedly painted the same subject (such as his water lilies series)[66] in different lights, at different hours of the day, and through the changes of weather and season. This process began in the 1880s and continued until the end of his life in 1926.
His first series exhibited as such was of Haystacks, painted from different points of view and at different times of the day. Fifteen of the paintings were exhibited at the Galerie Durand-Ruel in 1891. In 1892 he produced what is probably his best-known series, twenty-six views of Rouen Cathedral.[63] In these paintings Monet broke with painterly traditions by cropping the subject so that only a portion of the façade is seen on the canvas. The paintings do not focus on the grand Medieval building, but on the play of light and shade across its surface, transforming the solid masonry.[67]
Other series include Poplars, Mornings on the Seine, and the Water Lilies that were painted on his property at Giverny. Between 1883 and 1908, Monet traveled to the Mediterranean, where he painted landmarks, landscapes, and seascapes, including a series of paintings in Venice. In London he painted four series: the Houses of Parliament, London, Charing Cross Bridge, Waterloo Bridge, and Views of Westminster Bridge. Helen Gardner writes:
Monet, with a scientific precision, has given us an unparalleled and unexcelled record of the passing of time as seen in the movement of light over identical forms.[68]
La Gare Saint-Lazare, 1877, Musée d'Orsay
Arrival of the Normandy Train, Gare Saint-Lazare, 1877, The Art Institute of Chicago[69]
The Cliffs at Etretat, 1885, Clark Institute, Williamstown
Sailboats behind the needle at Etretat, 1885
Two paintings from a series of grainstacks, 1890–91: Grainstacks in the Sunlight, Morning Effect
Grainstacks, end of day, Autumn, 1890–1891, Art Institute of Chicago
Poplars (Autumn), 1891, Philadelphia Museum of Art
Poplars at the River Epte, 1891, Tate
The Seine Near Giverny, 1897, Museum of Fine Arts, Boston
Morning on the Seine, 1898, National Museum of Western Art
Charing Cross Bridge, 1899, Thyssen-Bornemisza Museum Madrid
Charing Cross Bridge, London, 1899–1901, Saint Louis Art Museum
Two paintings from a series of The Houses of Parliament, London, 1900–01, Art Institute of Chicago
London, Houses of Parliament. The Sun Shining through the Fog, 1904, Musée d'Orsay
Grand Canal, Venice, 1908, Museum of Fine Arts, Boston
Grand Canal, Venice, 1908, Fine Arts Museums of San Francisco
In 2004, London, the Parliament, Effects of Sun in the Fog (Londres, le Parlement, trouée de soleil dans le brouillard; 1904), sold for US$20.1 million.[70] In 2006, the journal Proceedings of the Royal Society published a paper providing evidence that these were painted in situ at St Thomas' Hospital over the river Thames.[71]
Falaises près de Dieppe (Cliffs Near Dieppe) has been stolen on two separate occasions: first in 1998, when the museum's curator was convicted of the theft and jailed for five years and two months along with two accomplices, and again in August 2007.[72] It was recovered in June 2008.[73]
Monet's Le Pont du chemin de fer à Argenteuil, an 1873 painting of a railway bridge spanning the Seine near Paris, was bought by an anonymous telephone bidder for a record $41.4 million at Christie's auction in New York on 6 May 2008. The previous record for one of his paintings stood at $36.5 million.[74] A few weeks later, Le bassin aux nymphéas (from the water lilies series) sold at Christie's auction in London on 24 June 2008[75] for £40,921,250 ($80,451,178), nearly doubling the record for the artist.[76]
This purchase represented one of the 20 highest prices paid for a painting at the time.
In October 2013, two of Monet's paintings, L'Église de Vétheuil and Le Bassin aux Nymphéas, became the subjects of a legal case in New York against New York-based Vilma Bautista, a one-time aide to Imelda Marcos, wife of dictator Ferdinand Marcos,[77] after she sold Le Bassin aux Nymphéas for $32 million to a Swiss buyer. These Monet paintings, along with two others, were acquired by Imelda Marcos during her husband's presidency and allegedly bought with the nation's funds. Bautista's lawyer claimed that the aide sold the painting for Imelda but did not have a chance to give her the money. The Philippine government seeks the return of the painting.[77] Le Bassin aux Nymphéas, also known as Japanese Footbridge over the Water-Lily Pond at Giverny, is part of Monet's famed Water Lilies series.
Le Bassin aux Nymphéas, 1919. Monet's late series of water lily paintings is among his best-known works.
Water Lilies, 1919, Metropolitan Museum of Art, New York
Water Lilies, 1917–1919, Honolulu Museum of Art
Water lilies (Yellow Nirwana), 1920, The National Gallery, London
Water Lilies, c. 1915–1926, Nelson-Atkins Museum of Art
The Water Lily Pond, c. 1917–1919, Albertina, Vienna
en/1171.html.txt
ADDED
@@ -0,0 +1,209 @@
en/1172.html.txt
ADDED
@@ -0,0 +1,209 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
Oscar-Claude Monet (UK: /ˈmɒneɪ/, US: /moʊˈneɪ/,[1][2] French: [klod mɔnɛ]; 14 November 1840 – 5 December 1926) was a French painter, a founder of French Impressionist painting and the most consistent and prolific practitioner of the movement's philosophy of expressing one's perceptions before nature, especially as applied to plein air landscape painting.[3][4] The term "Impressionism" is derived from the title of his painting Impression, soleil levant (Impression, Sunrise), which was exhibited in 1874 in the first of the independent exhibitions mounted by Monet and his associates as an alternative to the Salon de Paris.[5]
|
4 |
+
|
5 |
+
Monet's ambition of documenting the French countryside led him to adopt a method of painting the same scene many times in order to capture the changing of light and the passing of the seasons.[6] From 1883, Monet lived in Giverny, where he purchased a house and property and began a vast landscaping project which included lily ponds that would become the subjects of his best-known works. He began painting the water lilies in 1899, first in vertical views with a Japanese bridge as a central feature and later in the series of large-scale paintings that was to occupy him continuously for the next 20 years of his life.
|
6 |
+
|
7 |
+
Claude Monet was born on 14 November 1840 on the fifth floor of 45 rue Laffitte, in the 9th arrondissement of Paris.[7] He was the second son of Claude Adolphe Monet and Louise Justine Aubrée Monet, both of them second-generation Parisians. On 20 May 1841, he was baptized in the local parish church, Notre-Dame-de-Lorette, as Oscar-Claude, but his parents called him simply Oscar.[7][8] (He signed his juvenilia "O. Monet".) Despite being baptized Catholic, Monet later became an atheist.[9][10]
|
8 |
+
|
9 |
+
In 1845, his family moved to Le Havre in Normandy. His father wanted him to go into the family's ship-chandling and grocery business,[11] but Monet wanted to become an artist. His mother was a singer, and supported Monet's desire for a career in art.[12]
|
10 |
+
|
11 |
+
On 1 April 1851, Monet entered Le Havre secondary school of the arts. Locals knew him well for his charcoal caricatures, which he would sell for ten to twenty francs. Monet also undertook his first drawing lessons from Jacques-François Ochard, a former student of Jacques-Louis David. On the beaches of Normandy around 1856 he met fellow artist Eugène Boudin, who became his mentor and taught him to use oil paints. Boudin taught Monet "en plein air" (outdoor) techniques for painting.[13] Both were influenced by Johan Barthold Jongkind.
|
12 |
+
|
13 |
+
On 28 January 1857, his mother died. At the age of sixteen, he left school and went to live with his widowed, childless aunt, Marie-Jeanne Lecadre.
|
14 |
+
|
15 |
+
When Monet traveled to Paris to visit the Louvre, he witnessed painters copying from the old masters. Having brought his paints and other tools with him, he would instead go and sit by a window and paint what he saw.[14] Monet was in Paris for several years and met other young painters, including Édouard Manet and others who would become friends and fellow Impressionists.
|
16 |
+
|
17 |
+
After drawing a low ballot number in March 1861, Monet was drafted into the First Regiment of African Light Cavalry (Chasseurs d'Afrique) in Algeria for a seven-year period of military service. His prosperous father could have purchased Monet's exemption from conscription but declined to do so when his son refused to give up painting. While in Algeria, Monet did only a few sketches of casbah scenes, a single landscape, and several portraits of officers, all of which have been lost. In a Le Temps interview of 1900 however he commented that the light and vivid colours of North Africa "contained the germ of my future researches".[15] After about a year of garrison duty in Algiers, Monet contracted typhoid fever and briefly went absent without leave. Following convalescence, Monet's aunt intervened to remove him from the army if he agreed to complete a course at an art school. It is possible that the Dutch painter Johan Barthold Jongkind, whom Monet knew, may have prompted his aunt on this matter.
|
18 |
+
|
19 |
+
Disillusioned with the traditional art taught at art schools, in 1862 Monet became a student of Charles Gleyre in Paris, where he met Pierre-Auguste Renoir, Frédéric Bazille and Alfred Sisley. Together they shared new approaches to art, painting the effects of light en plein air with broken colour and rapid brushstrokes, in what later came to be known as Impressionism.
In January 1865 Monet was working on a version of Le déjeuner sur l'herbe, aiming to present it for hanging at the Salon, which had rejected Manet's Le déjeuner sur l'herbe two years earlier.[17] Monet's painting was very large and could not be completed in time. (It was later cut up, with parts now in different galleries.) Monet submitted instead a painting of Camille or The Woman in the Green Dress (La femme à la robe verte), one of many works using his future wife, Camille Doncieux, as his model. Both this painting and a small landscape were hung.[17] The following year Monet used Camille for his model in Women in the Garden, and On the Bank of the Seine, Bennecourt in 1868. Camille became pregnant and gave birth to their first child, Jean, in 1867.[18] Monet and Camille married on 28 June 1870, just before the outbreak of the Franco-Prussian War,[19] and, after their excursion to London and Zaandam, they moved to Argenteuil, in December 1871. During this time Monet painted various works of modern life. He and Camille lived in poverty for most of this period. Following the successful exhibition of some maritime paintings, and the winning of a silver medal at Le Havre, Monet's paintings were seized by creditors, from whom they were bought back by a shipping merchant, Gaudibert, who was also a patron of Boudin.[17]
From the late 1860s, Monet and other like-minded artists met with rejection from the conservative Académie des Beaux-Arts, which held its annual exhibition at the Salon de Paris. During the latter part of 1873, Monet, Pierre-Auguste Renoir, Camille Pissarro, and Alfred Sisley organized the Société anonyme des artistes peintres, sculpteurs et graveurs (Anonymous Society of Painters, Sculptors, and Engravers) to exhibit their artworks independently. At their first exhibition, held in April 1874, Monet exhibited the work that was to give the group its lasting name. He was inspired by the style and subject matter of previous modern painters Camille Pissarro and Edouard Manet.[20]
Impression, Sunrise was painted in 1872, depicting a Le Havre port landscape. From the painting's title the art critic Louis Leroy, in his review, "L'Exposition des Impressionnistes," which appeared in Le Charivari, coined the term "Impressionism".[21] It was intended as disparagement but the Impressionists appropriated the term for themselves.[22][23]
After the outbreak of the Franco-Prussian War (19 July 1870), Monet and his family took refuge in England in September 1870,[24] where he studied the works of John Constable and Joseph Mallord William Turner, both of whose landscapes would serve to inspire Monet's innovations in the study of colour. In the spring of 1871, Monet's works were refused authorisation for inclusion in the Royal Academy exhibition.[19]
In May 1871, he left London to live in Zaandam, in the Netherlands,[19] where he made twenty-five paintings (and the police suspected him of revolutionary activities).[25] He also paid a first visit to nearby Amsterdam. In October or November 1871, he returned to France. From December 1871 to 1878 he lived at Argenteuil, a village on the right bank of the Seine river near Paris, and a popular Sunday-outing destination for Parisians, where he painted some of his best-known works. In 1873, Monet purchased a small boat equipped to be used as a floating studio.[26] From the boat studio Monet painted landscapes and also portraits of Édouard Manet and his wife; Manet in turn depicted Monet painting aboard the boat, accompanied by Camille, in 1874.[26] In 1874, he briefly returned to Holland.[27]
The first Impressionist exhibition was held in 1874 at 35 boulevard des Capucines, Paris, from 15 April to 15 May. The primary purpose of the participants was not so much to promote a new style, but to free themselves from the constraints of the Salon de Paris. The exhibition, open to anyone prepared to pay 60 francs, gave artists the opportunity to show their work without the interference of a jury.[28][29][30]
Renoir chaired the hanging committee and did most of the work himself, as other members failed to present themselves.[28][29]
In addition to Impression: Sunrise (pictured above), Monet presented four oil paintings and seven pastels. Among the paintings he displayed was The Luncheon (1868), which features Camille Doncieux and Jean Monet, and which had been rejected by the Paris Salon of 1870.[31] Also in this exhibition was a painting titled Boulevard des Capucines, a painting of the boulevard done from the photographer Nadar's apartment at no. 35. Monet painted the subject twice, and it is uncertain which of the two pictures, that now in the Pushkin Museum in Moscow, or that in the Nelson-Atkins Museum of Art in Kansas City, was the painting that appeared in the groundbreaking 1874 exhibition, though more recently the Moscow picture has been favoured.[32][33] Altogether, 165 works were exhibited in the exhibition, including 4 oils, 2 pastels and 3 watercolours by Morisot; 6 oils and 1 pastel by Renoir; 10 works by Degas; 5 by Pissarro; 3 by Cézanne; and 3 by Guillaumin. Several works were on loan, including Cézanne's Modern Olympia, Morisot's Hide and Seek (owned by Manet) and 2 landscapes by Sisley that had been purchased by Durand-Ruel.[28][29][30]
The total attendance is estimated at 3500, and some works did sell, though some exhibitors had placed their prices too high. Pissarro was asking 1000 francs for The Orchard and Monet the same for Impression: Sunrise, neither of which sold. Renoir failed to obtain the 500 francs he was asking for La Loge, but later sold it for 450 francs to Père Martin, dealer and supporter of the group.[28][29][30]
View at Rouelles, Le Havre 1858, Private collection; an early work showing the influence of Corot and Courbet
Mouth of the Seine at Honfleur, 1865, Norton Simon Foundation, Pasadena, CA; indicates the influence of Dutch maritime painting.[34]
Women in the Garden, 1866–1867, Musée d'Orsay, Paris.[35]
Woman in the Garden, 1867, Hermitage, St. Petersburg; a study in the effect of sunlight and shadow on colour
Garden at Sainte-Adresse ("Jardin à Sainte-Adresse"), 1867, Metropolitan Museum of Art, New York.[36]
The Luncheon, 1868, Städel, which features Camille Doncieux and Jean Monet, was rejected by the Paris Salon of 1870 but included in the first Impressionists' exhibition in 1874.[37]
La Grenouillère, 1869, Metropolitan Museum of Art, New York; a small plein-air painting created with broad strokes of intense colour.[38]
The Magpie, 1868–1869. Musée d'Orsay, Paris; one of Monet's early attempts at capturing the effect of snow on the landscape. See also Snow at Argenteuil.
Le port de Trouville (Breakwater at Trouville, Low Tide), 1870, Museum of Fine Arts, Budapest.[39]
La plage de Trouville, 1870, National Gallery, London. The left figure may be Camille, on the right possibly the wife of Eugène Boudin, whose beach scenes influenced Monet.[40]
Houses on the Achterzaan, 1871, Metropolitan Museum of Art, New York
Jean Monet on his hobby horse, 1872, Metropolitan Museum of Art, New York
Springtime 1872, Walters Art Museum
In 1876, Camille Monet became ill with tuberculosis. Their second son, Michel, was born on 17 March 1878. This second child weakened her already fading health. In the summer of that year, the family moved to the village of Vétheuil where they shared a house with the family of Ernest Hoschedé, a wealthy department store owner and patron of the arts. In 1878, Camille Monet was diagnosed with uterine cancer.[41][42][43] She died on 5 September 1879 at the age of thirty-two.[44][45]
Monet made a study in oils of his dead wife. Many years later, Monet confessed to his friend Georges Clemenceau that his need to analyse colours was both the joy and torment of his life. He explained,
I one day found myself looking at my beloved wife's dead face and just systematically noting the colours according to an automatic reflex!
John Berger describes the work as "a blizzard of white, grey, purplish paint ... a terrible blizzard of loss which will forever efface her features. In fact there can be very few death-bed paintings which have been so intensely felt or subjectively expressive."[46]
After several difficult months following the death of Camille, Monet began to create some of his best paintings of the 19th century. During the early 1880s, Monet painted several groups of landscapes and seascapes in what he considered to be campaigns to document the French countryside. These began to evolve into series of pictures in which he documented the same scene many times in order to capture the changing of light and the passing of the seasons.
Monet's friend Ernest Hoschedé became bankrupt, and left in 1878 for Belgium. After the death of Camille Monet in September 1879, and while Monet continued to live in the house in Vétheuil, Alice Hoschedé helped Monet to raise his two sons, Jean and Michel. She took them to Paris to live alongside her own six children,[47] Blanche (who married Jean Monet), Germaine, Suzanne, Marthe, Jean-Pierre, and Jacques. In the spring of 1880, Alice Hoschedé and all the children left Paris and rejoined Monet at Vétheuil.[48] In 1881, all of them moved to Poissy, which Monet hated. In April 1883, looking out the window of the little train between Vernon and Gasny, he discovered Giverny in Normandy.[47][49][50] Monet, Alice Hoschedé and the children moved to Vernon, then to the house in Giverny, where he planted a large garden and where he painted for much of the rest of his life. Following the death of her estranged husband, Monet married Alice Hoschedé in 1892.[13]
Camille Monet on a Garden Bench, 1873, Metropolitan Museum of Art, New York
The Artist's house at Argenteuil, 1873, The Art Institute of Chicago
Coquelicots, La promenade (Poppies), 1873, Musée d'Orsay, Paris
Argenteuil, 1874, National Gallery of Art, Washington D.C.
The Studio Boat, 1874, Kröller-Müller Museum, Otterlo, Netherlands
Woman with a Parasol - Madame Monet and Her Son, 1875
Flowers on the riverbank at Argenteuil, 1877, Pola Museum of Art, Japan
Arrival of the Normandy Train, Gare Saint-Lazare, 1877, The Art Institute of Chicago
Vétheuil in the Fog, 1879, Musée Marmottan Monet, Paris
Monet rented and eventually purchased a house and gardens in Giverny. At the beginning of May 1883, Monet and his large family rented the home and 8,000 square metres (2.0 acres) from a local landowner. The house was situated near the main road between the towns of Vernon and Gasny at Giverny. There was a barn that doubled as a painting studio, orchards and a small garden. The house was close enough to the local schools for the children to attend, and the surrounding landscape offered many suitable motifs for Monet's work.
The family worked and built up the gardens, and Monet's fortunes began to change for the better as his dealer, Paul Durand-Ruel, had increasing success in selling his paintings.[51] By November 1890, Monet was prosperous enough to buy the house, the surrounding buildings and the land for his gardens. During the 1890s, Monet built a greenhouse and a second studio, a spacious building well lit with skylights.
Monet wrote daily instructions to his gardener, precise designs and layouts for plantings, and invoices for his floral purchases and his collection of botany books. As Monet's wealth grew, his garden evolved. He remained its architect, even after he hired seven gardeners.[52]
Monet purchased additional land with a water meadow. In 1893 he began a vast landscaping project which included lily ponds that would become the subjects of his best-known works. White water lilies local to France were planted along with imported cultivars from South America and Egypt, resulting in a range of colours including yellow, blue and white lilies that turned pink with age.[53] In 1899 he began painting the water lilies, first in vertical views with a Japanese bridge as a central feature, and later in the series of large-scale paintings that was to occupy him continuously for the next 20 years of his life.[54] This scenery, with its alternating light and mirror-like reflections, became an integral part of his work. By the mid-1910s Monet had achieved:
"a completely new, fluid, and somewhat audacious style of painting in which the water-lily pond became the point of departure for an almost abstract art".
In the Garden, 1895, Collection E. G. Buehrle, Zürich
Agapanthus, between 1914 and 1926, Museum of Modern Art, New York
Flowering Arches, Giverny, 1913, Phoenix Art Museum
Water Lilies and the Japanese bridge, 1897–1899, Princeton University Art Museum
Water Lilies, 1906, Art Institute of Chicago
Water Lilies, Musée Marmottan Monet
Water Lilies, c. 1915, Neue Pinakothek, Munich
Water Lilies, c. 1915, Musée Marmottan Monet
Monet's second wife, Alice, died in 1911, and his oldest son Jean, who had married Alice's daughter Blanche, Monet's particular favourite, died in 1914.[13] After Alice died, Blanche looked after and cared for Monet. It was during this time that Monet began to develop the first signs of cataracts.[57]
During World War I, in which his younger son Michel served and his friend and admirer Georges Clemenceau led the French nation, Monet painted a series of weeping willow trees as homage to the French fallen soldiers. In 1923, he underwent two operations to remove his cataracts. The paintings done while the cataracts affected his vision have a general reddish tone, which is characteristic of the vision of cataract victims. It may also be that after surgery he was able to see certain ultraviolet wavelengths of light that are normally excluded by the lens of the eye; this may have had an effect on the colours he perceived. After his operations he even repainted some of these paintings, with bluer water lilies than before.[58]
Monet died of lung cancer on 5 December 1926 at the age of 86 and is buried in the Giverny church cemetery.[49] Monet had insisted that the occasion be simple; thus only about fifty people attended the ceremony.[59] At his funeral, his long-time friend Georges Clemenceau removed the black cloth draped over the coffin, stating, "No black for Monet!" and replaced it with a flower-patterned cloth.[60] Monet did not leave a will and so his son Michel inherited his entire estate.
Monet's home, garden, and waterlily pond were bequeathed by Michel to the French Academy of Fine Arts (part of the Institut de France) in 1966. Through the Fondation Claude Monet, the house and gardens were opened for visits in 1980, following restoration.[61] In addition to souvenirs of Monet and other objects of his life, the house contains his collection of Japanese woodcut prints. The house and garden, along with the Museum of Impressionism, are major attractions in Giverny, which hosts tourists from all over the world.
Water Lilies and Reflections of a Willow (1916–1919), Musée Marmottan Monet
Water-Lily Pond and Weeping Willow, 1916–1919, Sale Christie's New York, 1998
Weeping Willow, 1918–19, Columbus Museum of Art
Weeping Willow, 1918–19, Kimbell Art Museum, Fort Worth; Monet's Weeping Willow paintings were an homage to the fallen French soldiers of World War I
House Among the Roses, between 1917 and 1919, Albertina, Vienna
The Rose Walk, Giverny, 1920–1922, Musée Marmottan Monet
The Japanese Footbridge, 1920–1922, Museum of Modern Art
The Garden at Giverny
Monet has been described as "the driving force behind Impressionism".[62] Crucial to the art of the Impressionist painters was the understanding of the effects of light on the local colour of objects, and the effects of the juxtaposition of colours with each other.[63] Monet's long career as a painter was spent in the pursuit of this aim.
In 1856, his chance meeting with Eugene Boudin, a painter of small beach scenes, opened his eyes to the possibility of plein-air painting. From that time, with a short interruption for military service, he dedicated himself to searching for new and improved methods of painterly expression. To this end, as a young man, he visited the Paris Salon and familiarised himself with the works of older painters, and made friends with other young artists.[62] The five years that he spent at Argenteuil, spending much time on the River Seine in a little floating studio, were formative in his study of the effects of light and reflections. He began to think in terms of colours and shapes rather than scenes and objects. He used bright colours in dabs and dashes and squiggles of paint. Having rejected the academic teachings of Gleyre's studio, he freed himself from theory, saying "I like to paint as a bird sings."[64]
In 1877, in a series of paintings of the Gare Saint-Lazare, Monet studied smoke and steam and the way they affected colour and visibility, being sometimes opaque and sometimes translucent. He later applied this study to painting the effects of mist and rain on the landscape.[65] The study of the effects of atmosphere evolved into a number of series of paintings in which Monet repeatedly painted the same subject (such as his water lilies series)[66] in different lights, at different hours of the day, and through the changes of weather and season. This process began in the 1880s and continued until the end of his life in 1926.
His first series exhibited as such was of Haystacks, painted from different points of view and at different times of the day. Fifteen of the paintings were exhibited at the Galerie Durand-Ruel in 1891. In 1892 he produced what is probably his best-known series, twenty-six views of Rouen Cathedral.[63] In these paintings Monet broke with painterly traditions by cropping the subject so that only a portion of the façade is seen on the canvas. The paintings do not focus on the grand Medieval building, but on the play of light and shade across its surface, transforming the solid masonry.[67]
Other series include Poplars, Mornings on the Seine, and the Water Lilies that were painted on his property at Giverny. Between 1883 and 1908, Monet traveled to the Mediterranean, where he painted landmarks, landscapes, and seascapes, including a series of paintings in Venice. In London he painted four series: the Houses of Parliament, London, Charing Cross Bridge, Waterloo Bridge, and Views of Westminster Bridge. Helen Gardner writes:
Monet, with a scientific precision, has given us an unparalleled and unexcelled record of the passing of time as seen in the movement of light over identical forms.[68]
La Gare Saint-Lazare, 1877, Musée d'Orsay
Arrival of the Normandy Train, Gare Saint-Lazare, 1877, The Art Institute of Chicago[69]
The Cliffs at Etretat, 1885, Clark Institute, Williamstown
Sailboats behind the needle at Etretat, 1885
Two paintings from a series of grainstacks, 1890–91: Grainstacks in the Sunlight, Morning Effect,
Grainstacks, end of day, Autumn, 1890–1891, Art Institute of Chicago
Poplars (Autumn), 1891, Philadelphia Museum of Art
Poplars at the River Epte, 1891 Tate
The Seine Near Giverny, 1897, Museum of Fine Arts, Boston
Morning on the Seine, 1898, National Museum of Western Art
Charing Cross Bridge, 1899, Thyssen-Bornemisza Museum Madrid
Charing Cross Bridge, London, 1899–1901, Saint Louis Art Museum
Two paintings from a series of The Houses of Parliament, London, 1900–01, Art Institute of Chicago
London, Houses of Parliament. The Sun Shining through the Fog, 1904, Musée d'Orsay
Grand Canal, Venice, 1908, Museum of Fine Arts, Boston
Grand Canal, Venice, 1908, Fine Arts Museums of San Francisco
In 2004, London, the Parliament, Effects of Sun in the Fog (Londres, le Parlement, trouée de soleil dans le brouillard; 1904) sold for US$20.1 million.[70] In 2006, the journal Proceedings of the Royal Society published a paper providing evidence that the London series was painted in situ at St Thomas' Hospital, overlooking the river Thames.[71]
Falaises près de Dieppe (Cliffs Near Dieppe) has been stolen on two separate occasions: once in 1998 (in which the museum's curator was convicted of the theft and jailed for five years and two months along with two accomplices) and most recently in August 2007.[72] It was recovered in June 2008.[73]
Monet's Le Pont du chemin de fer à Argenteuil, an 1873 painting of a railway bridge spanning the Seine near Paris, was bought by an anonymous telephone bidder for a record $41.4 million at Christie's auction in New York on 6 May 2008. The previous record for his painting stood at $36.5 million.[74] A few weeks later, Le bassin aux nymphéas (from the water lilies series) sold at Christie's 24 June 2008 auction in London[75] for £40,921,250 ($80,451,178), nearly doubling the record for the artist.[76]
This purchase represented one of the top 20 highest prices paid for a painting at the time.
In October 2013, Monet's paintings L'Eglise de Vetheuil and Le Bassin aux Nympheas became the subjects of a legal case in New York against New York-based Vilma Bautista, a one-time aide to Imelda Marcos, wife of dictator Ferdinand Marcos,[77] after she sold Le Bassin aux Nympheas for $32 million to a Swiss buyer. These Monet paintings, along with two others, were acquired by Imelda Marcos during her husband's presidency, allegedly using the nation's funds. Bautista's lawyer claimed that the aide sold the painting for Imelda but did not have a chance to give her the money. The Philippine government has sought the return of the painting.[77] Le Bassin aux Nympheas, also known as Japanese Footbridge over the Water-Lily Pond at Giverny, is part of Monet's famed Water Lilies series.
Le Bassin aux Nymphéas, 1919. Monet's late series of water lily paintings is among his best-known works.
Water Lilies, 1919, Metropolitan Museum of Art, New York
Water Lilies, 1917–1919, Honolulu Museum of Art
Water Lilies (Yellow Nirwana), 1920, The National Gallery, London
Water Lilies, c. 1915–1926, Nelson-Atkins Museum of Art
The Water Lily Pond, c. 1917–1919, Albertina, Vienna
en/1173.html.txt
ADDED
@@ -0,0 +1,58 @@
Claudius Ptolemy (/ˈtɒləmi/; Koinē Greek: Κλαύδιος Πτολεμαῖος, Klaúdios Ptolemaîos [kláwdios ptolɛmɛ́os]; Latin: Claudius Ptolemaeus; c. 100 – c. 170)[2] was a mathematician, astronomer, geographer and astrologer who wrote several scientific treatises, three of which were of importance to later Byzantine, Islamic and Western European science. The first is the astronomical treatise now known as the Almagest, although it was originally entitled the Mathematical Treatise (Μαθηματικὴ Σύνταξις) and then known as The Great Treatise (Ἡ Μεγάλη Σύνταξις). The second is the Geography, which is a thorough discussion of the geographic knowledge of the Greco-Roman world. The third is the astrological treatise in which he attempted to adapt horoscopic astrology to the Aristotelian natural philosophy of his day. This is sometimes known as the Apotelesmatiká (Ἀποτελεσματικά) but more commonly known as the Tetrábiblos from the Koine Greek (Τετράβιβλος) meaning "Four Books" or by the Latin Quadripartitum.
Ptolemy lived in the city of Alexandria in the Roman province of Egypt under the rule of the Roman Empire,[3] had a Latin name (which several historians have taken to imply he was also a Roman citizen),[4] cited Greek philosophers, and used Babylonian observations and Babylonian lunar theory. The 14th century astronomer Theodore Meliteniotes gave his birthplace as the prominent Greek city Ptolemais Hermiou (Πτολεμαΐς ‘Ερμείου) in the Thebaid (Θηβᾱΐς). This attestation is quite late, however, and there is no other evidence to confirm or contradict it.[5] He died in Alexandria around 168.[6]
Ptolemaeus (Πτολεμαῖος Ptolemaîos) is an ancient Greek personal name. It occurs once in Greek mythology and is of Homeric form.[7] It was common among the Macedonian upper class at the time of Alexander the Great and there were several of this name among Alexander's army, one of whom made himself pharaoh in 323 BCE: Ptolemy I Soter, the first pharaoh of the Ptolemaic Kingdom. All subsequent pharaohs of Egypt until Egypt became a Roman province in 30 BCE, ending the Macedonian family's rule, were also Ptolemies.[8][citation needed]
The name Claudius is a Roman name, belonging to the gens Claudia; the peculiar multipart form of the whole name Claudius Ptolemaeus is a Roman custom, characteristic of Roman citizens. Several historians have made the deduction that this indicates that Ptolemy would have been a Roman citizen.[10] Gerald Toomer, the translator of Ptolemy's Almagest into English, suggests that citizenship was probably granted to one of Ptolemy's ancestors by either the emperor Claudius or the emperor Nero.[11]
The 9th century Persian astronomer Abu Maʻshar presents Ptolemy as a member of Egypt's royal lineage, stating that the descendants of the Alexandrine general and Pharaoh Ptolemy I Soter, were wise "and included Ptolemy the Wise, who composed the book of the Almagest". Abu Maʻshar recorded a belief that a different member of this royal line "composed the book on astrology and attributed it to Ptolemy". We can evidence historical confusion on this point from Abu Maʿshar's subsequent remark: "It is sometimes said that the very learned man who wrote the book of astrology also wrote the book of the Almagest. The correct answer is not known."[12] Not much positive evidence is known on the subject of Ptolemy's ancestry, apart from what can be drawn from the details of his name (see above), although modern scholars have concluded that Abu Maʻshar's account is erroneous.[13] It is no longer doubted that the astronomer who wrote the Almagest also wrote the Tetrabiblos as its astrological counterpart.[14]
Ptolemy wrote in ancient Greek and can be shown to have utilized Babylonian astronomical data.[15][16] He might have been a Roman citizen, but was ethnically either a Greek[2][17][18] or a Hellenized Egyptian.[17][19][20] He was often known in later Arabic sources as "the Upper Egyptian",[21] suggesting he may have had origins in southern Egypt.[22] Later Arabic astronomers, geographers and physicists referred to him as Baṭlumyus (Arabic: بَطْلُمْيوس).[23]
Ptolemy's Almagest is the only surviving comprehensive ancient treatise on astronomy. Babylonian astronomers had developed arithmetical techniques for calculating astronomical phenomena; Greek astronomers such as Hipparchus had produced geometric models for calculating celestial motions. Ptolemy, however, claimed to have derived his geometrical models from selected astronomical observations by his predecessors spanning more than 800 years, though astronomers have for centuries suspected that his models' parameters were adopted independently of observations.[24] Ptolemy presented his astronomical models in convenient tables, which could be used to compute the future or past position of the planets.[25] The Almagest also contains a star catalogue, which is a version of a catalogue created by Hipparchus. Its list of forty-eight constellations is ancestral to the modern system of constellations, but unlike the modern system they did not cover the whole sky (only the sky Hipparchus could see). Across Europe, the Middle East and North Africa in the Medieval period, it was the authoritative text on astronomy, with its author becoming an almost mythical figure, called Ptolemy, King of Alexandria.[26] The Almagest was preserved, like most of extant Classical Greek science, in Arabic manuscripts (hence its familiar name). Because of its reputation, it was widely sought and was translated twice into Latin in the 12th century, once in Sicily and again in Spain.[27] Ptolemy's model, like those of his predecessors, was geocentric and was almost universally accepted until the appearance of simpler heliocentric models during the scientific revolution.
His Planetary Hypotheses went beyond the mathematical model of the Almagest to present a physical realization of the universe as a set of nested spheres,[28] in which he used the epicycles of his planetary model to compute the dimensions of the universe. He estimated the Sun was at an average distance of 1,210 Earth radii, while the radius of the sphere of the fixed stars was 20,000 times the radius of the Earth.[29]
Ptolemy presented a useful tool for astronomical calculations in his Handy Tables, which tabulated all the data needed to compute the positions of the Sun, Moon and planets, the rising and setting of the stars, and eclipses of the Sun and Moon. Ptolemy's Handy Tables provided the model for later astronomical tables or zījes. In the Phaseis (Risings of the Fixed Stars), Ptolemy gave a parapegma, a star calendar or almanac, based on the appearances and disappearances of stars over the course of the solar year.[30]
Ptolemy's second main work is his Geography (also called the Geographia), a compilation of geographical coordinates of the part of the world known to the Roman Empire during his time. He relied somewhat on the work of an earlier geographer, Marinos of Tyre, and on gazetteers of the Roman and ancient Persian Empire.[citation needed] He also acknowledged ancient astronomer Hipparchus for having provided the elevation of the north celestial pole[31] for a few cities.[32]
The first part of the Geography is a discussion of the data and of the methods he used. As with the model of the Solar System in the Almagest, Ptolemy put all this information into a grand scheme. Following Marinos, he assigned coordinates to all the places and geographic features he knew, in a grid that spanned the globe. Latitude was measured from the equator, as it is today, but Ptolemy preferred[33] to express it as climata, the length of the longest day rather than degrees of arc: the length of the midsummer day increases from 12h to 24h as one goes from the equator to the polar circle. In books 2 through 7, he used degrees and put the meridian of 0 longitude at the most western land he knew, the "Blessed Islands", often identified as the Canary Islands, as suggested by the location of the six dots labelled the "FORTUNATA" islands near the left extreme of the blue sea of Ptolemy's map here reproduced.
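As a rough modern illustration of the climata idea (this uses the present-day hour-angle formula, not Ptolemy's own tables), the length of the longest day at latitude φ follows from the hour angle H₀ of sunset at the summer solstice, when the Sun's declination equals the obliquity ε ≈ 23.4°:
\cos H_0 = -\tan\varphi \, \tan\varepsilon, \qquad \text{longest day} \approx \frac{2 H_0}{15^\circ/\text{hour}}.
At the equator (φ = 0°) this gives H₀ = 90° and a 12-hour day; at the polar circle (φ = 90° − ε ≈ 66.6°) it gives H₀ = 180° and a 24-hour day, matching the 12h-to-24h range described above.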
Ptolemy also devised and provided instructions on how to create maps both of the whole inhabited world (oikoumenè) and of the Roman provinces. In the second part of the Geography, he provided the necessary topographic lists, and captions for the maps. His oikoumenè spanned 180 degrees of longitude from the Blessed Islands in the Atlantic Ocean to the middle of China, and about 80 degrees of latitude from Shetland to anti-Meroe (east coast of Africa); Ptolemy was well aware that he knew about only a quarter of the globe, and an erroneous extension of China southward suggests his sources did not reach all the way to the Pacific Ocean.
The maps in surviving manuscripts of Ptolemy's Geography, however, only date from about 1300, after the text was rediscovered by Maximus Planudes. It seems likely that the topographical tables in books 2–7 are cumulative texts – texts which were altered and added to as new knowledge became available in the centuries after Ptolemy.[34] This means that information contained in different parts of the Geography is likely to be of different dates.
Maps based on scientific principles had been made since the time of Eratosthenes, in the 3rd century BC, but Ptolemy improved map projections. It is known from a speech by Eumenius that a world map, an orbis pictus, doubtless based on the Geography, was on display in a school in Augustodunum, Gaul in the 3rd century.[35] In the 15th century, Ptolemy's Geography began to be printed with engraved maps; the earliest printed edition with engraved maps was produced in Bologna in 1477, followed quickly by a Roman edition in 1478 (Campbell, 1987). An edition printed at Ulm in 1482, including woodcut maps, was the first one printed north of the Alps. The maps look distorted when compared to modern maps, because Ptolemy's data were inaccurate. One reason is that Ptolemy estimated the size of the Earth as too small: while Eratosthenes found 700 stadia for a great circle degree on the globe, Ptolemy uses 500 stadia in the Geography. It is highly probable that these were the same stadion, since Ptolemy switched from the former scale to the latter between the Syntaxis and the Geography, and severely readjusted longitude degrees accordingly. See also Ancient Greek units of measurement and History of geodesy.
Because Ptolemy derived many of his key latitudes from crude longest day values, his latitudes are erroneous on average by roughly a degree (2 degrees for Byzantium, 4 degrees for Carthage), though capable ancient astronomers knew their latitudes to more like a minute. (Ptolemy's own latitude was in error by 14'.) He agreed (Geography 1.4) that longitude was best determined by simultaneous observation of lunar eclipses, yet he was so out of touch with the scientists of his day that he knew of no such data more recent than 500 years before (Arbela eclipse). When switching from 700 stadia per degree to 500, he (or Marinos) expanded longitude differences between cities accordingly (a point first realized by P. Gosselin in 1790), resulting in serious over-stretching of the Earth's east-west scale in degrees, though not distance. Achieving highly precise longitude remained a problem in geography until the application of Galileo's Jovian moon method in the 18th century. It must be added that his original topographic list cannot be reconstructed: the long tables with numbers were transmitted to posterity through copies containing many scribal errors, and people have always been adding or improving the topographic data: this is a testimony to the persistent popularity of this influential work in the history of cartography.
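To make the over-stretching concrete (a simple illustration, assuming the same stadion in both figures, as argued above): if the measured distance between two cities corresponds to a true difference in longitude Δλ, converting that distance to degrees at 500 rather than 700 stadia per degree inflates it by
\frac{700}{500} = 1.4, \qquad \Delta\lambda_{\text{Geography}} \approx 1.4 \, \Delta\lambda_{\text{true}},
so two places separated by a true 10° of longitude would appear roughly 14° apart in the Geography, even though the distance between them in stadia is unchanged.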
Ptolemy has been referred to as "a pro-astrological authority of the highest magnitude".[36] His astrological treatise, a work in four parts, is known by the Greek term Tetrabiblos, or the Latin equivalent Quadripartitum: "Four Books". Ptolemy's own title is unknown, but may have been the term found in some Greek manuscripts: Apotelesmatika, roughly meaning "Astrological Outcomes", "Effects" or "Prognostics".[37][38]
As a source of reference, the Tetrabiblos is said to have "enjoyed almost the authority of a Bible among the astrological writers of a thousand years or more".[39] It was first translated from Arabic into Latin by Plato of Tivoli (Tiburtinus) in 1138, while he was in Spain.[40] The Tetrabiblos is an extensive and continually reprinted treatise on the ancient principles of horoscopic astrology. That it did not quite attain the unrivaled status of the Almagest was, perhaps, because it did not cover some popular areas of the subject, particularly electional astrology (interpreting astrological charts for a particular moment to determine the outcome of a course of action to be initiated at that time), and medical astrology, which were later adoptions.
The great popularity that the Tetrabiblos did possess might be attributed to its nature as an exposition of the art of astrology, and as a compendium of astrological lore, rather than as a manual. It speaks in general terms, avoiding illustrations and details of practice. Ptolemy was concerned to defend astrology by defining its limits, compiling astronomical data that he believed was reliable and dismissing practices (such as considering the numerological significance of names) that he believed to be without sound basis.
Much of the content of the Tetrabiblos was collected from earlier sources; Ptolemy's achievement was to order his material in a systematic way, showing how the subject could, in his view, be rationalized. It is, indeed, presented as the second part of the study of astronomy of which the Almagest was the first, concerned with the influences of the celestial bodies in the sublunary sphere. Thus explanations of a sort are provided for the astrological effects of the planets, based upon their combined effects of heating, cooling, moistening, and drying.
Ptolemy's astrological outlook was quite practical: he thought that astrology, like medicine, was conjectural because of the many variable factors to be taken into account. The race, country, and upbringing of a person affect an individual's personality as much as, if not more than, the positions of the Sun, Moon, and planets at the precise moment of their birth, so Ptolemy saw astrology as something to be used in life but in no way to be relied on entirely.
A collection of one hundred aphorisms about astrology called the Centiloquium, ascribed to Ptolemy, was widely reproduced and commented on by Arabic, Latin and Hebrew scholars, and often bound together in medieval manuscripts after the Tetrabiblos as a kind of summation. It is now believed to be a much later pseudepigraphical composition. The identity and date of the actual author of the work, referred to now as Pseudo-Ptolemy, remains the subject of conjecture.[dubious – discuss]
Despite Ptolemy's prominence as a philosopher, the Dutch historian of science Eduard Jan Dijksterhuis criticizes the Tetrabiblos, stating that "it only remains puzzling that the very writer of the Almagest, who had taught how to develop astronomy from accurate observations and mathematical constructions, could put together such a system of superficial analogies and unfounded assertions."[41]
Ptolemy also wrote an influential work, Harmonics, on music theory and the mathematics of music.[42] After criticizing the approaches of his predecessors, Ptolemy argued for basing musical intervals on mathematical ratios (in contrast to the followers of Aristoxenus and in agreement with the followers of Pythagoras), backed up by empirical observation (in contrast to the overly theoretical approach of the Pythagoreans). In Harmonics, Ptolemy described how musical notes could be translated into mathematical ratios and vice versa, the approach underlying Pythagorean tuning, which is attributed to Pythagoras. Pythagoras, however, held that the mathematics of music should be based on the specific ratio of 3:2, whereas Ptolemy believed more generally that it should involve tetrachords and octaves. He presented his own divisions of the tetrachord and the octave, which he derived with the help of a monochord. His Harmonics never had the influence of his Almagest or Planetary Hypotheses, but a part of it (Book III) did encourage Kepler in his own musings on the harmony of the world (Kepler, Harmonice Mundi, Appendix to Book V).[43] Ptolemy's astronomical interests also appeared in a discussion of the "music of the spheres".
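As a brief illustration of the ratio-based approach discussed above (a standard example, not one of Ptolemy's own tetrachord divisions): the octave corresponds to 2:1, the perfect fifth to 3:2 and the perfect fourth to 4:3, and stacking a fifth on a fourth recovers the octave, since
\frac{3}{2} \times \frac{4}{3} = \frac{2}{1},
while a tetrachord spans the interval of a fourth, which each theorist then subdivided in his own way.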
His Optics is a work that survives only in a poor Arabic translation and in about twenty manuscripts of a Latin version of the Arabic, which was translated by Eugenius of Palermo (c. 1154). In it, Ptolemy writes about properties of light, including reflection, refraction, and colour. The work is a significant part of the early history of optics[44] and influenced the more famous 11th-century Book of Optics by Ibn al-Haytham. It contains the earliest surviving table of refraction from air to water, for which the values (with the exception of the 60° angle of incidence), although historically praised as experimentally derived, appear to have been obtained from an arithmetic progression.[45]
The work is also important for the early history of perception. Ptolemy combined the mathematical, philosophical and physiological traditions. He held an extramission-intromission theory of vision: the rays (or flux) from the eye formed a cone, the vertex being within the eye, and the base defining the visual field. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. Size and shape were determined by the visual angle subtended at the eye combined with perceived distance and orientation. This was one of the early statements of size-distance invariance as a cause of perceptual size and shape constancy, a view supported by the Stoics.[46] Ptolemy offered explanations for many phenomena concerning illumination and colour, size, shape, movement and binocular vision. He also divided illusions into those caused by physical or optical factors and those caused by judgmental factors. He offered an obscure explanation of the sun or moon illusion (the enlarged apparent size on the horizon) based on the difficulty of looking upwards.[47][48]
There are several characters or items named after Ptolemy, including:
[T]he only place mentioned in any of Ptolemy's observations is Alexandria, and there is no reason to suppose that he ever lived anywhere else. The statement by Theodore Meliteniotes that he was born in Ptolemais Hermiou (in Upper Egypt) could be correct, but it is late (ca. 1360) and unsupported.
But what we really want to know is to what extent the Alexandrian mathematicians of the period from the 1st to the 5th centuries AD were Greek. Certainly, all of them wrote in Greek and were part of the Greek intellectual community of Alexandria. Most modern studies conclude that the Greek community coexisted ... So should we assume that Ptolemy and Diophantus, Pappus and Hypatia were ethnically Greek, that their ancestors had come from Greece at some point in the past but had remained effectively isolated from the Egyptians? It is, of course, impossible to answer this question definitively. But research in papyri dating from the early centuries of the common era demonstrates that a significant amount of intermarriage took place between the Greek and Egyptian communities ... And it is known that Greek marriage contracts increasingly came to resemble Egyptian ones. In addition, even from the founding of Alexandria, small numbers of Egyptians were admitted to the privileged classes in the city to fulfill numerous civic roles. Of course, it was essential in such cases for the Egyptians to become "Hellenized", to adopt Greek habits and the Greek language. Given that the Alexandrian mathematicians mentioned here were active several hundred years after the founding of the city, it would seem at least equally possible that they were ethnically Egyptian as that they remained ethnically Greek. In any case, it is unreasonable to portray them with purely European features when no physical descriptions exist.
|
en/1174.html.txt
ADDED
@@ -0,0 +1,95 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
Claus Philipp Maria Schenk Graf von Stauffenberg (German: [klaʊ̯s ʃɛŋk ɡʁaːf fɔn ˈʃtaʊ̯fn̩.bɛʁk] (listen); 15 November 1907 – 21 July 1944) was a German army officer. He held the hereditary titles of "Graf" (count) and "Schenk" (cupbearer). He took part in the attack on Poland, the German invasion of the Soviet Union and the Tunisian Campaign during the Second World War.
Stauffenberg was one of the leading members of the failed 20 July plot of 1944 to assassinate Adolf Hitler and remove the Nazi Party from power. Along with Henning von Tresckow and Hans Oster, he was one of the central figures of the German Resistance movement within the Wehrmacht. For his involvement in the movement, he was executed by firing squad shortly after the failed attempt known as Operation Valkyrie.
Stauffenberg's full name was Claus Philipp Maria Justinian, followed by the noble title of "Count of Stauffenberg". He was born in the Stauffenberg Castle Jettingen between Ulm and Augsburg, in the eastern part of Swabia, at that time in the Kingdom of Bavaria, part of the German Empire.[1][2] Stauffenberg was the third of four sons, including the twins Berthold and Alexander and his own twin brother Konrad Maria, who died in Jettingen one day after birth on 16 November 1907. His father was Alfred Klemens Philipp Friedrich Justinian, the last Oberhofmarschall of the Kingdom of Württemberg. Stauffenberg's mother was Caroline Schenk Gräfin von Stauffenberg, née Gräfin von Üxküll-Gyllenband, the daughter of Alfred Richard August Graf von Üxküll-Gyllenband and Valerie Gräfin von Hohenthal.[3]
The titles "Graf" and "Gräfin" denote count and countess, respectively. Schenk (i.e., cupbearer/butler) was an additional hereditary noble title. The name of the family's ancestral castle formed the last part of the title, giving the full form Schenk Graf von Stauffenberg, which was used as part of the name. The Stauffenberg family is one of the oldest and most distinguished aristocratic Catholic families of southern Germany. Among his maternal Protestant ancestors were several famous Prussians, including Field Marshal August von Gneisenau.[4]
On 11 November 1919, a new constitutional law, as part of the Weimar Republic, abolished the privileges of nobility. Article 109 also stated, "Legal privileges or disadvantages based on birth or social standing are to be abolished. Noble titles form part of the name only; noble titles may not be granted any more."[5]
In his youth, he and his brothers were members of the Neupfadfinder, a German Scout association and part of the German Youth movement.[6][7][8][9]
Like his brothers, he was carefully educated and inclined toward literature, but eventually took up a military career. In 1926, he joined the family's traditional regiment, the Bamberger Reiter- und Kavallerieregiment 17 (17th Cavalry Regiment) in Bamberg.[10] It was around this time that the three brothers were introduced by Albrecht von Blumenthal to the poet Stefan George's influential circle, Georgekreis, from which many notable members of the German resistance later emerged. George dedicated Das neue Reich ("the new Empire") in 1928, including the Geheimes Deutschland ("secret Germany") written in 1922, to Berthold.[11]
Stauffenberg was commissioned as a leutnant (second lieutenant) in 1930. He studied modern weapons at the Kriegsakademie in Berlin-Moabit, but remained focused on the use of horses—which continued to carry out a large part of transportation duties throughout World War II—in modern warfare. His regiment became part of the German 1st Light Division under General Erich Hoepner, who had taken part in the plans for the September 1938 German Resistance coup, cut short by Hitler's unexpected diplomatic success in the Munich Agreement. The unit was among the Wehrmacht troops that moved into Sudetenland following its annexation to the Reich as per the Munich Agreement.[12]
Although Stauffenberg agreed with the Nazi Party's racist and nationalistic aspects and had supported the German colonization of Poland and made extremist remarks regarding Polish Jews,[13][14][15] he never became a member. During the German presidential election, 1932, he voiced support for Hitler:
The idea of the Führer principle [...] bound together with a Volksgemeinschaft, the principle "The community good before the individual good," and the fight against corruption, the fight against the spirit of the large urban cities, the racial thought (Rassengedanke), and the will towards a new German-formed legal order appears to us healthy and auspicious.[16]
Moreover, Stauffenberg remained a practicing Catholic. He vacillated between a strong dislike of Hitler's policies and a respect for what he perceived to be Hitler's military acumen. Stauffenberg became even further dissociated from the party after the Night of the Long Knives and Kristallnacht proved that Hitler had no intention of pursuing justice.[17] On top of this, the growing systematic ill-treatment of Jews and suppression of religion had offended Stauffenberg's strong sense of Catholic morality and justice.[18][19]
Following the outbreak of war in 1939, Stauffenberg and his regiment took part in the attack on Poland. He supported the occupation of Poland and its handling by the Nazi regime and the use of Poles as slave workers to achieve German prosperity[13] as well as German colonization and exploitation of Poland. The deeply rooted belief common in the German aristocracy was that the Eastern territories, populated predominantly by Poles and partly absorbed by Prussia in partitions of Poland, but taken from the German Empire after World War I, should be colonized as the Teutonic Knights had done in the Middle Ages. Stauffenberg said, "It is essential that we begin a systemic colonization in Poland. But I have no fear that this will not occur".[14]
While his uncle, Nikolaus Graf von Üxküll-Gyllenband, together with Fritz-Dietlof von der Schulenburg, had approached him before to join the resistance movement against the Hitler regime, it was only after the Polish campaign that Stauffenberg began to consider it. Peter Yorck von Wartenburg and Ulrich Schwerin von Schwanenfeld urged him to become the adjutant of Walther von Brauchitsch, then Supreme Commander of the Army, in order to participate in a coup against Hitler. Stauffenberg declined at the time, reasoning that all German soldiers had pledged allegiance not to the institution of the presidency of the German Reich, but to the person of Adolf Hitler, due to the Führereid introduced in 1934.[20]
Stauffenberg's unit was reorganized into the 6th Panzer Division, and he served as an officer on its General Staff in the Battle of France, for which he was awarded the Iron Cross First Class.[21]
Operation Barbarossa, the German invasion of the Soviet Union, began on 22 June 1941. Oberkommando des Heeres ("Army High Command"; OKH) directed operations on the Eastern Front. Stauffenberg had been transferred to the organizational department of OKH during the idle months of the so-called Phoney War (1939–1940, before the Battle of France). Stauffenberg did not engage in any coup plotting at this time. However, the Stauffenberg brothers (Berthold and Claus) maintained contact with anti-regime figures such as the Kreisau Circle and former commanders like Hoepner. They also included civilians, even social democrats like Julius Leber, in their scenarios for an administration after Hitler.[22]
According to Hoffman (p. 131, 1988), citing Brigadier (ret.) Oskar Alfred-Berger's letters, Stauffenberg had commented openly on the ill-treatment of the Jews when he "expressed outrage and shock on this subject to fellow officers in the General Staff Headquarters in Vinnitsa (Ukraine) during the summer of 1942."[23] Stauffenberg's friend, Major Joachim Kuhn, was captured by the Red Army. During interrogation on 2 September 1944, Kuhn claimed that Stauffenberg had told him in August 1942 that "They are shooting Jews in masses. These crimes must not be allowed to continue."[24] After his arrest in July 1944, Stauffenberg's older brother Berthold told the Gestapo that: "He and his brother had basically approved of the racial principle of National Socialism, but considered it to be exaggerated and excessive."[25]
In November 1942, the Allies landed in French North Africa, and the 10th Panzer Division occupied Vichy France (Case Anton) before being transferred to fight in the Tunisia Campaign, as part of the Afrika Korps. In 1943, Stauffenberg was promoted to Oberstleutnant i.G.[26] (lieutenant-colonel of the general staff), and was sent to Africa to join the 10th Panzer Division as its Operations Officer in the General Staff (Ia). On 19 February, Rommel launched his counter-offensive against British, American and French forces in Tunisia. The Axis commanders hoped to break rapidly through either the Sbiba or Kasserine Pass into the rear of the British 1st Army. The assault at Sbiba was halted, so Rommel concentrated on Kasserine Pass where primarily the Italian 7th Bersaglieri Regiment and 131st Armoured Division Centauro had defeated the American defenders.[27] During the fighting, Stauffenberg drove up to be with the leading tanks and troops of the 10th Panzer Division.[28] The division, together with the 21st Panzer Division, took up defensive positions near Mezzouna on 8 April.[29]
On 7 April 1943, Stauffenberg was driving from one unit to another, directing their movements.[30] Near Mezzouna, his vehicle was part of a column strafed by Kittyhawk (P-40) fighter-bombers of the Desert Air Force – most likely from No. 3 Squadron, Royal Australian Air Force[31] – and he received multiple severe wounds. Stauffenberg spent three months in a hospital in Munich, where he was treated by Ferdinand Sauerbruch. He lost his left eye, his right hand, and two fingers on his left hand.[32] He jokingly remarked to friends that he had never really known what to do with so many fingers when he still had all of them. For his injuries, Stauffenberg was awarded the Wound Badge in Gold on 14 April and, for his courage, the German Cross in Gold on 8 May.[33]
For rehabilitation, Stauffenberg was sent to his home, Schloss Lautlingen (today a museum), then still one of the Stauffenberg castles in southern Germany. He visited the nearby Torfels near Meßstetten-Bueloch many times.[34] Initially, he felt frustrated not to be in a position to stage a coup himself. But by the beginning of September 1943, after a somewhat slow recovery from his wounds, he was approached by the conspirators and was introduced to Henning von Tresckow as a staff officer at the headquarters of the Ersatzheer ("Replacement Army" – charged with training soldiers to reinforce first-line divisions at the front), located on the Bendlerstrasse (later Stauffenbergstrasse) in Berlin.[35]
There, one of Stauffenberg's superiors was General Friedrich Olbricht, a committed member of the resistance movement. The Ersatzheer had a unique opportunity to launch a coup, as one of its functions was to have Operation Valkyrie in place. This was a contingency measure to let it assume control of the Reich in the event that internal disturbances blocked communications to the military high command. The Valkyrie plan had been agreed to by Hitler but was secretly changed to sweep the rest of his regime from power in the event of his death. In 1943, Henning von Tresckow was deployed on the Eastern Front, giving Stauffenberg control of the resistance. Tresckow did not return to Germany, as he committed suicide at Królowy Most, Poland in 1944, after learning of the plot's failure.[36]
A detailed military plan was developed not only to occupy Berlin, but also to take the various headquarters of the German army, and Hitler's headquarters in East Prussia, by military force following the suicide bombing attack planned by Axel von dem Bussche for late November 1943. Stauffenberg had von dem Bussche transmit these written orders personally to Major Kuhn once he had arrived at the Wolfsschanze (Wolf's Lair) near Rastenburg, East Prussia. However, after the meeting with Hitler was cancelled, von dem Bussche returned to the eastern front, and the attempt could not be made.[37]
Kuhn became a prisoner of war of the Soviets after the 20 July plot. He led the Soviets to the hiding place of the documents in February 1945. In 1989, Soviet leader Mikhail Gorbachev presented these documents to then-German chancellor Dr. Helmut Kohl. The conspirators' motivations have been a matter of discussion for years in Germany after the war. Many thought the plotters wanted to kill Hitler in order to end the war and to avoid the loss of their privileges as professional officers and members of the nobility.[38]
On D-Day, 6 June 1944, the Allies landed in France. Stauffenberg, like most other German professional military officers, had absolutely no doubt that the war was lost. Only an immediate armistice could avoid more unnecessary bloodshed and further damage to Germany, its people, and other European nations. However, in late 1943 he had written out demands with which he felt the Allies would have to comply for Germany to agree to an immediate peace. These demands included Germany retaining its 1914 eastern borders, including the Polish territories of Wielkopolska and Poznań.[39] Other demands included keeping such territorial gains as Austria and the Sudetenland within the Reich, giving autonomy to Alsace-Lorraine, and even expanding the wartime borders of Germany in the south by annexing Tyrol as far as Bozen and Meran. Non-territorial demands included the refusal of any occupation of Germany by the Allies, as well as a refusal to hand over war criminals, claiming instead the right of "nations to deal with its own criminals". These proposals were directed only at the Western Allies – Stauffenberg wanted Germany to retreat only from its western, southern and northern positions, while demanding the right to continue the military occupation of German territorial gains in the east.[40]
As early as September 1942, Stauffenberg was considering Hans Georg Schmidt von Altenstadt, author of Unser Weg zum Meer, as a replacement for Hitler.
From the beginning of September 1943 until 20 July 1944, Stauffenberg was the driving force behind the plot to assassinate Hitler and take control of Germany. His resolve, organisational abilities, and radical approach put an end to inactivity caused by doubts and long discussions on whether military virtues had been made obsolete by Hitler's behaviour. With the help of his friend Henning von Tresckow, he united the conspirators and drove them into action.[41]
Stauffenberg was aware that, under German law, he was committing high treason. He openly told young conspirator Axel von dem Bussche in late 1943, "ich betreibe mit allen mir zur Verfügung stehenden Mitteln den Hochverrat..." ("I am committing high treason with all means at my disposal....").[42] He justified himself to Bussche by referring to the right under natural law (Naturrecht) to defend millions of people's lives from the criminal aggressions of Hitler.[43]
Only after the conspirator General Helmuth Stieff declared on 7 July 1944 that he was unable to assassinate Hitler at a display of new uniforms at Klessheim Castle near Salzburg did Stauffenberg decide to kill Hitler personally and to run the plot in Berlin as well. By then, Stauffenberg had great doubts about the possibility of success. Tresckow convinced him to go on with it even if it had no chance of success at all: "The assassination must be attempted. Even if it fails, we must take action in Berlin", as this was the only way to prove to the world that the Hitler regime and Germany were not one and the same, and that not all Germans supported the regime.[43]
Stauffenberg's part in the original plan required him to stay at the Bendlerstraße offices in Berlin, so he could phone regular army units all over Europe in an attempt to convince them to arrest leaders of Nazi political organisations such as the Sicherheitsdienst (SD) and the Gestapo. When General Helmuth Stieff, Chief of Operations at Army High Command, who had regular access to Hitler, backtracked from his earlier commitment to assassinate Hitler, Stauffenberg was forced to take on two critical roles: killing Hitler far from Berlin and triggering the military machine in Berlin during office hours of the very same day. Besides Stieff, he was the only conspirator with regular access to Hitler (during his briefings) by mid-1944, as well as the only officer among the conspirators thought to have the resolve and persuasiveness to convince German military leaders to throw in with the coup once Hitler was dead. This dual requirement greatly reduced the chance of a successful coup.[35]
After several unsuccessful attempts by Stauffenberg to catch Hitler, Göring and Himmler together, he went ahead with the attempt at the Wolfsschanze on 20 July 1944. Stauffenberg entered the briefing room carrying a briefcase containing two small bombs. The location had unexpectedly been changed from the subterranean Führerbunker to Albert Speer's wooden hut because of the summer heat. He left the room to arm the first bomb with specially adapted pliers, a difficult task for him as he had lost his right hand and had only three fingers on his left hand. A guard knocked and opened the door, urging him to hurry as the meeting was about to begin. As a result, Stauffenberg was able to arm only one of the bombs. He left the second bomb with his aide-de-camp, Werner von Haeften, and returned to the briefing room, where he placed the briefcase under the conference table, as close as he could to Hitler. Some minutes later, he excused himself and left the room. After his exit, the briefcase was moved by Colonel Heinz Brandt.[44]
When the explosion tore through the hut, Stauffenberg was convinced that no one in the room could have survived. Although four people were killed and almost all survivors were injured, Hitler himself was shielded from the blast by the heavy, solid oak leg of the conference table, behind which Colonel Brandt had placed the briefcase, and was only slightly wounded.[44]
Stauffenberg and Haeften quickly left and drove to the nearby airfield. After his return to Berlin, Stauffenberg immediately began to urge his fellow conspirators to initiate the second phase: the military coup against the Nazi leaders. When Joseph Goebbels announced by radio that Hitler had survived, and later, after Hitler himself spoke on state radio, the conspirators realised that the coup had failed. They were tracked to their Bendlerstrasse offices and overpowered after a brief shoot-out, during which Stauffenberg was wounded in the shoulder.[45]
In an attempt to save his own life, co-conspirator General Friedrich Fromm, Commander-in-Chief of the Replacement Army, who was present in the Bendlerblock (headquarters of the army), convened an impromptu court martial and condemned the ringleaders of the conspiracy to death. Stauffenberg, his aide 1st Lieutenant Werner von Haeften, General Friedrich Olbricht, and Colonel Albrecht Mertz von Quirnheim were executed before 1:00 in the morning (21 July 1944) by a makeshift firing squad in the courtyard of the Bendlerblock, which was lit by the headlights of a truck.[45]
Stauffenberg was third in line to be executed, with Lieutenant von Haeften to follow. However, when Stauffenberg's turn came, von Haeften placed himself between the firing squad and Stauffenberg, and received the bullets meant for him. When his own turn came, Stauffenberg spoke his last words, "Es lebe das heilige Deutschland!" ("Long live our sacred Germany!"),[46][47] or, possibly, "Es lebe das geheime Deutschland!" ("Long live the secret Germany!"), in reference to Stefan George and the anti-Nazi circle.[47][48]
Fromm ordered that the executed officers (his former co-conspirators) receive an immediate burial with military honours in the Alter St.-Matthäus-Kirchhof in Berlin's Schöneberg district. The next day, however, Stauffenberg's body was exhumed by the SS, stripped of his medals and insignia, and cremated.[49]
Another central figure in the plot was Stauffenberg's eldest brother, Berthold Schenk Graf von Stauffenberg. On 10 August 1944, Berthold was tried before Judge-President Roland Freisler in the special "People's Court" (Volksgerichtshof). This court was established by Hitler for political offences. Berthold was one of eight conspirators executed by slow strangulation in Plötzensee Prison, Berlin, later that day. Before he was killed, Berthold was strangled and then revived multiple times.[50] The entire execution and multiple resuscitations were filmed for Hitler to view at his leisure.[50] More than 200 were condemned in show trials and executed. Hitler used the 20 July Plot as an excuse to destroy anyone he feared would oppose him. The traditional military salute was replaced with the Nazi salute. Eventually, over 20,000 Germans were killed or sent to concentration camps in the purge.[51]
One of the few surviving members of the German resistance, Hans Bernd Gisevius portrays Colonel Stauffenberg, whom he met in July 1944, as a man driven by motives which had little to do with Christian ideals or revulsion at Nazi ideology. In his autobiographical Bis zum bitteren Ende ("To the Bitter End"), Gisevius writes:
Stauffenberg wanted to retain all the totalitarian, militaristic and socialistic elements of National Socialism (p. 504). What he had in mind was the salvation of Germany by military men who could break with corruption and maladministration, provide an orderly military government and inspire the people to make one last great effort. Reduced to a formula, he wanted the nation to remain soldierly and become socialistic (p. 503).
Stauffenberg was motivated by the impulsive passions of the disillusioned military man whose eyes had been opened by the defeat of German arms (p. 510). Stauffenberg had shifted to the rebel side only after Stalingrad (p. 512).
The difference between Stauffenberg, Helldorf and Schulenburg – all of them counts – was that Helldorf had come to the Nazi Movement as a primitive, I might almost say an unpolitical revolutionary. The other two had been attracted primarily by a political ideology. Therefore, it was possible for Helldorf to throw everything overboard at once: Hitler, the Party, the entire system. Stauffenberg, Schulenberg and their clique wanted to drop no more ballast than was absolutely necessary; then they would paint the ship of state a military gray and set it afloat again (p. 513–514).[52]
Historian Peter Hoffman questions Gisevius's evaluations based on the latter's brief acquaintance with Stauffenberg, misreporting of Stauffenberg's actions, and apparent rivalry with him:
Gisevius met Stauffenberg for the first time in Berlin on July 12, 1944, eight days before the colonel's last assassination attempt against Hitler. ... In view of Gisevius's own record as a transmitter of historical information for which he had displayed strong personal feelings, and in light of what is known about both Gisevius's alleged sources and Stauffenberg himself, Gisevius's account is at best questionable hearsay.
Gisevius disliked Stauffenberg. He sensed that this dynamic leader would be an obstacle to his own far-reaching ambitions and intrigues. In his book he mocked Stauffenberg as a presumptuous and ignorant amateur. ... Stauffenberg must have been informed of Gisevius's background and it cannot have inspired his confidence. Gisevius was understandably upset by Stauffenberg's attitude toward him. ... Stauffenberg seemed to regard him merely as an incidental source of background information.[53]
British historian Richard J. Evans, in his books on the Third Reich,[54] covered various aspects of Stauffenberg's beliefs and philosophy. In an article originally published in the Süddeutsche Zeitung on 23 January 2009,[55] entitled "Why did Stauffenberg plant the bomb?", he asks: "Was it because Hitler was losing the war? Was it to put an end to the mass murder of the Jews? Or was it to save Germany's honour?" The "overwhelming support, toleration, or silent acquiescence" of Hitler among the people of his country – a population that was also heavily censored and constantly fed propaganda[56][57] – meant that any action had to be swift and successful. Evans writes, "Had Stauffenberg's bomb succeeded in killing Hitler, it is unlikely that the military coup planned to follow it would have moved the leading conspirators smoothly into power".[54]
However, Karl Heinz Bohrer, a cultural critic, literary scholar and publisher,[58] criticized Evans' views in an article originally published in the Süddeutsche Zeitung on 30 January 2010.[59] Although agreeing that Evans is historically correct in much of his writing, Bohrer feels that Evans twists time lines and misrepresents certain aspects. He wrote of Evans, "In the course of his problematic argument he walks into two traps: 1. by contesting Stauffenberg's "moral motivation"; 2. by contesting Stauffenberg's suitability as role model." He further writes, "If then, as Evans notes with initial objectivity, Stauffenberg had a strong moral imperative – whether this stemmed from an aristocratic code of honour, Catholic doctrine or Romantic poetry – then this also underpinned his initial affinity for National Socialism which Stauffenberg misinterpreted as 'spiritual renewal'".[59]
In 1980, the German government established a memorial for the failed anti-Nazi resistance movement in a part of the Bendlerblock, the remainder of which currently houses the Berlin offices of the German Ministry of Defense (whose main offices remain in Bonn). The Bendlerstrasse was renamed the Stauffenbergstrasse, and the Bendlerblock now houses the Memorial to the German Resistance, a permanent exhibition with more than 5,000 photographs and documents showing the various resistance organizations at work during the Hitler era. The courtyard where the officers were shot on 21 July 1944 is now a memorial site, with a plaque commemorating the events and a bronze figure of a young man with his hands symbolically bound which resembles Count von Stauffenberg.[60]
Stauffenberg married Nina Freiin von Lerchenfeld on 26 September 1933 in Bamberg.[61] They had five children: Berthold; Heimeran; Franz-Ludwig; Valerie; and Konstanze, who was born in Frankfurt on the Oder after Stauffenberg's execution. Berthold, Heimeran, Franz-Ludwig and Valerie, who were not told of their father's deed,[62] were placed in a foster home for the remainder of the war and were forced to use new surnames, as the name Stauffenberg had come to be considered taboo.[63]
Nina died at the age of 92 on 2 April 2006 at Kirchlauter near Bamberg, and was buried there on 8 April. Berthold went on to become a general in West Germany's post-war Bundeswehr. Franz-Ludwig became a member of both the German and European parliaments, representing the Christian Social Union in Bavaria. In 2008, Konstanze von Schulthess-Rechberg wrote a best-selling book about her mother, Nina Schenk Gräfin von Stauffenberg.
He let things come to him, and then he made up his mind ... one of his characteristics was that he really enjoyed playing the devil's advocate. Conservatives were convinced that he was a ferocious Nazi, and ferocious Nazis were convinced he was an unreconstructed conservative. He was neither.[64]
en/1175.html.txt
ADDED
@@ -0,0 +1,85 @@
A harpsichord (Italian: clavicembalo, French: clavecin, German: Cembalo, Spanish: clavecín, Portuguese: cravo) is a musical instrument played by means of a keyboard. Pressing a key activates a row of levers that turn a trigger mechanism that plucks one or more strings with a small plectrum made from quill or plastic. The strings are under tension on a soundboard, which is mounted in a wooden case; the soundboard amplifies the vibrations from the strings so that the listeners can hear them. Like a pipe organ, a harpsichord may have more than one keyboard (manual), and it may have stop buttons which add or remove additional octaves. Some harpsichords may also have a lute stop, which simulates the sound of a plucked lute.
The term denotes the whole family of similar plucked-keyboard instruments, including the smaller virginals, muselar, and spinet. The harpsichord was widely used in Renaissance and Baroque music, both as an accompaniment instrument and as a solo instrument. During the Baroque era, the harpsichord was a standard part of the continuo group, the musicians who performed the basso continuo part that acted as the foundation for many musical pieces in this era. During the late 18th century, with the development of the fortepiano (and then the increasing use of the piano in the 19th century), the harpsichord gradually disappeared from the musical scene (except in opera, where it continued to be used to accompany recitative). In the 20th century, it made a resurgence, being used in historically informed performances of older music, in new compositions, and, in rare cases, in certain styles of popular music (e.g., Baroque pop).
The harpsichord was most likely invented in the late Middle Ages. By the 16th century, harpsichord makers in Italy were making lightweight instruments with low string tension. A different approach was taken in the Southern Netherlands starting in the late 16th century, notably by the Ruckers family. Their harpsichords used a heavier construction and produced a more powerful and distinctive tone. They included the first harpsichords with two keyboards, used for transposition.
The Flemish instruments served as the model for 18th-century harpsichord construction in other nations. In France, the double keyboards were adapted to control different choirs of strings, making a more musically flexible instrument. Instruments from the peak of the French tradition, by makers such as the Blanchet family and Pascal Taskin, are among the most widely admired of all harpsichords, and are frequently used as models for the construction of modern instruments. In England, the Kirkman and Shudi firms produced sophisticated harpsichords of great power and sonority. German builders extended the sound repertoire of the instrument by adding sixteen foot and two foot choirs; these instruments have recently served as models for modern builders.
In the late 18th century the harpsichord was supplanted by the piano and almost disappeared from view for most of the 19th century: an exception was its continued use in opera for accompanying recitative, but the piano sometimes displaced it even there. Twentieth-century efforts to revive the harpsichord began with instruments that used piano technology, with heavy strings and metal frames. Starting in the middle of the 20th century, ideas about harpsichord making underwent a major change, when builders such as Frank Hubbard, William Dowd, and Martin Skowroneck sought to re-establish the building traditions of the Baroque period. Harpsichords of this type of historically informed building practice dominate the current scene.
Harpsichords vary in size and shape, but all have the same basic mechanism. The player depresses a key that rocks over a pivot in the middle of its length. The other end of the key lifts a jack (a long strip of wood) that holds a small plectrum (a wedge-shaped piece of quill, often made of plastic in the 21st century), which plucks the string. When the player releases the key, the far end returns to its rest position, and the jack falls back; the plectrum, mounted on a tongue mechanism that can swivel backwards away from the string, passes the string without plucking it again. As the key reaches its rest position, a felt damper atop the jack stops the string's vibrations. These basic principles are explained in detail below.
Each string is wound around a tuning pin, normally at the end of the string closer to the player. When rotated with a wrench or tuning hammer, the tuning pin adjusts the tension so that the string sounds the correct pitch. Tuning pins are held tightly in holes drilled in the pinblock or wrestplank, an oblong hardwood plank. Proceeding from the tuning pin, a string next passes over the nut, a sharp edge that is made of hardwood and is normally attached to the wrestplank. The section of the string beyond the nut forms its vibrating length, which is plucked and creates sound.
At the other end of its vibrating length, the string passes over the bridge, another sharp edge made of hardwood. As with the nut, the horizontal position of the string along the bridge is determined by a vertical metal pin inserted into the bridge, against which the string rests. The bridge itself rests on a soundboard, a thin panel of wood usually made of spruce, fir or—in some Italian harpsichords—cypress. The soundboard efficiently transmits the vibrations of the strings into vibrations in the air; without a soundboard, the strings would produce only a very feeble sound. A string is attached at its far end by a loop to a hitchpin that secures it to the case.
While many harpsichords have one string per note, more elaborate harpsichords can have two or more strings for each note. When there are multiple strings for each note, these additional strings are called "choirs" of strings. This provides two advantages: the ability to vary volume and ability to vary tonal quality. Volume is increased when the mechanism of the instrument is set up by the player (see below) so that the press of a single key plucks more than one string. Tonal quality can be varied in two ways. First, different choirs of strings can be designed to have distinct tonal qualities, usually by having one set of strings plucked closer to the nut, which emphasizes the higher harmonics, and produces a "nasal" sound quality. The mechanism of the instrument, called "stops" (following the use of the term in pipe organs) permits the player to select one choir or the other. Second, having one key pluck two strings at once changes not just volume but also tonal quality; for instance, when two strings tuned to the same pitch are plucked simultaneously, the note is not just louder but also richer and more complex.
A particularly vivid effect is obtained when the strings plucked simultaneously are an octave apart. This is normally heard by the ear not as two pitches but as one: the sound of the higher string is blended with that of the lower one, and the ear hears the lower pitch, enriched in tonal quality by the additional strength in the upper harmonics of the note sounded by the higher string.
When describing a harpsichord it is customary to specify its choirs of strings, often called its disposition. To describe the pitch of the choirs of strings, pipe organ terminology is used. Strings at eight foot pitch (8') sound at the normal expected pitch, strings at four foot pitch (4') sound an octave higher. Harpsichords occasionally include a sixteen-foot (16') choir (one octave lower than eight-foot) or a two-foot (2') choir (two octaves higher; quite rare). When there are multiple choirs of strings, the player is often able to control which choirs sound. This is usually done by having a set of jacks for each choir, and a mechanism for "turning off" each set, often by moving the upper register (through which the jacks slide) sideways a short distance, so that their plectra miss the strings. In simpler instruments this is done by manually moving the registers, but as the harpsichord evolved, builders invented levers, knee levers and pedal mechanisms to make it easier to change registration.
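As an illustration of this foot-pitch terminology, the short sketch below (an added example, not part of the original article; the 16'/8'/4' disposition shown is hypothetical) maps foot designations to the octave offset they sound relative to normal 8' pitch:

    # Minimal sketch: organ-style foot designations -> octave offset from 8' pitch.
    FOOT_TO_OCTAVE_OFFSET = {
        16: -1,  # sixteen-foot choir sounds one octave below written pitch
        8: 0,    # eight-foot choir sounds at written pitch
        4: +1,   # four-foot choir sounds one octave above written pitch
        2: +2,   # two-foot choir sounds two octaves above written pitch
    }

    def sounding_frequency(written_hz: float, foot: int) -> float:
        """Frequency produced by a choir of the given foot pitch for a written note."""
        return written_hz * 2 ** FOOT_TO_OCTAVE_OFFSET[foot]

    if __name__ == "__main__":
        # A written a' (440 Hz) sounded on each choir of a hypothetical 16'/8'/4' disposition
        for foot in (16, 8, 4):
            print(f"{foot}' choir: {sounding_frequency(440.0, foot):.1f} Hz")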
Harpsichords with more than one keyboard (this usually means two keyboards, stacked one on top of the other in a step-wise fashion, as with pipe organs)[2] provide flexibility in selecting which strings play, since each manual can be set to control the plucking of a different set of strings. This means that a player can have, say, an 8' manual and a 4' manual ready for use, enabling him to switch between them to obtain higher (or lower) pitches or different tone. In addition, such harpsichords often have a mechanism (the "coupler") that couples manuals together, so that a single manual plays both sets of strings.
The most flexible system is the French "shove coupler", in which the lower manual slides forward and backward. In the backward position, "dogs" attached to the upper surface of the lower manual engage the lower surface of the upper manual's keys. Depending on choice of keyboard and coupler position, the player can select any of the sets of jacks labeled in "figure 4" as A, or B and C, or all three.
The English "dogleg" jack system (also used in Baroque Flanders) does not require a coupler. The jacks labeled A in Figure 5 have a "dogleg" shape that permits either keyboard to play A. If the player wishes to play the upper 8' from the upper manual only and not from the lower manual, a stop handle disengages the jacks labeled A and engages instead an alternative row of jacks called "lute stop" (not shown in the Figure). A lute stop is used to imitate the gentle sound of a plucked lute.[3]
The use of multiple manuals in a harpsichord was not originally provided for the flexibility in choosing which strings would sound, but rather for transposition of the instrument to play in different keys (see History of the harpsichord).
Some early harpsichords used a short octave for the lowest register. The rationale behind this system was that the low notes F♯ and G♯ are seldom needed in early music. Deep bass notes typically form the root of the chord, and F♯ and G♯ chords were seldom used at this time. In contrast, low C and D, both roots of very common chords, are sorely missed if a harpsichord with lowest key E is tuned to match the keyboard layout. When scholars specify the pitch range of instruments with this kind of short octave, they write "C/E", meaning that the lowest note is a C, played on a key that normally would sound E.
The wooden case holds in position all of the important structural members: pinblock, soundboard, hitchpins, keyboard, and the jack action. It usually includes a solid bottom, and also internal bracing to maintain its form without warping under the tension of the strings. Cases vary greatly in weight and sturdiness: Italian harpsichords are often of light construction; heavier construction is found in the later Flemish instruments and those derived from them.
The case also gives the harpsichord its external appearance and protects the instrument. A large harpsichord is, in a sense, a piece of furniture, as it stands alone on legs and may be styled in the manner of other furniture of its place and period. Early Italian instruments, on the other hand, were so light in construction that they were treated rather like a violin: kept for storage in a protective outer case, and played after taking it out of its case and placing it on a table.[4] Such tables were often quite high – until the late 18th century people usually played standing up.[4] Eventually, harpsichords came to be built with just a single case, though an intermediate stage also existed: the false inner–outer, which for purely aesthetic reasons was built to look as if the outer case contained an inner one, in the old style.[5] Even after harpsichords became self-encased objects, they often were supported by separate stands, and some modern harpsichords have separate legs for improved portability.
Many harpsichords have a lid that can be raised, a cover for the keyboard, and a music stand for holding sheet music and scores.
Harpsichords have been decorated in a great many different ways: with plain buff paint (e.g. some Flemish instruments), with paper printed with patterns, with leather or velvet coverings, with chinoiserie, or occasionally with highly elaborate painted artwork.[6]
The virginal is a smaller and simpler rectangular form of the harpsichord having only one string per note; the strings run parallel to the keyboard, which is on the long side of the case.
A spinet is a harpsichord with the strings set at an angle (usually about 30 degrees) to the keyboard. The strings are too close together for the jacks to fit between them. Instead, the strings are arranged in pairs, and the jacks are in the larger gaps between the pairs. The two jacks in each gap face in opposite directions, and each plucks a string adjacent to the gap.
The English diarist Samuel Pepys mentions his "tryangle" several times. This was not the percussion instrument that we call triangle today; rather, it was a name for octave-pitched spinets, which were triangular in shape.
A clavicytherium is a harpsichord with the soundboard and strings mounted vertically facing the player, the same space-saving principle as an upright piano.[7] In a clavicytherium, the jacks move horizontally without the assistance of gravity, so that clavicytherium actions are more complex than those of other harpsichords.
Ottavini are small spinets or virginals at four-foot pitch. Harpsichords at octave pitch were more common in the early Renaissance, but lessened in popularity later on. However, the ottavino remained very popular as a domestic instrument in Italy until the 19th century. In the Low Countries, an ottavino was commonly paired with an 8' virginals, encased in a small cubby under the soundboard of the larger instrument. The ottavino could be removed and placed on top of the virginal, making, in effect, a double manual instrument. These are sometimes called 'mother-and-child'[8] or 'double' virginals.[9][10]
Occasionally, harpsichords were built which included another set or sets of strings underneath, played by a foot-operated pedal keyboard that triggers the plucking of the lowest-pitched keys of the harpsichord. Although there are no known extant pedal harpsichords from the 18th century or before, Adlung (1758) writes that the lower set of usually 8' strings "...is built like an ordinary harpsichord, but with an extent of two octaves only. The jacks are similar, but they will benefit from being arranged back to back, since the two [bass] octaves take as much space as four in an ordinary harpsichord."[11] Prior to 1980, when Keith Hill introduced his design for a pedal harpsichord, most pedal harpsichords were built based on the designs of extant pedal pianos from the 19th century, in which the instrument is as wide as the pedalboard.[12] While these were mostly intended as practice instruments for organists, a few pieces are believed to have been written specifically for the pedal harpsichord. However, the set of pedals can augment the sound from any piece performed on the instrument, as demonstrated on several albums by E. Power Biggs.[13]
The archicembalo, built in the 16th century, had an unusual keyboard layout, designed to accommodate variant tuning systems demanded by compositional practice and theoretical experimentation. More common were instruments with split sharps, also designed to accommodate the tuning systems of the time.
The folding harpsichord was an instrument that could be folded up to make it more compact, thus facilitating travelling with it.
On the whole, earlier harpsichords have smaller ranges than later ones, although there are many exceptions. The largest harpsichords have a range of just over five octaves, and the smallest have under four. Usually, the shortest keyboards were given extended range in the bass with a "short octave". The traditional pitch range for a 5-octave instrument is F1–F6 (FF–f‴).
Tuning pitch is often taken to be A4 = 415 Hz, roughly a semitone lower than the modern standard concert pitch of A4 = 440 Hz. An accepted exception is for French baroque repertoire, which is often performed with a = 392 Hz, approximately a semitone lower again. See Jean-Philippe Rameau's Treatise on Harmony (1722) [Dover Publications], Book One, chapter five, for insight into French baroque tuning; "Since most of these semitones are absolutely necessary in the tuning of organs and other similar instruments, the following chromatic system has been drawn up." Tuning an instrument nowadays usually starts with setting an A; historically it would commence from a C or an F.
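To make the relationship between these pitch standards concrete, the following sketch (an added illustration, not from the article) computes the distance between A = 440 Hz, 415 Hz and 392 Hz in equal-tempered semitones, using the ratio 2^(1/12) per semitone:

    import math

    def semitone_distance(f1_hz: float, f2_hz: float) -> float:
        """Distance from f1 to f2 in equal-tempered semitones (12 per octave)."""
        return 12 * math.log2(f2_hz / f1_hz)

    if __name__ == "__main__":
        modern = 440.0   # modern concert pitch for a'
        baroque = 415.0  # common "baroque" pitch, roughly a semitone lower
        french = 392.0   # pitch often used for French baroque repertoire
        print(f"440 -> 415 Hz: {semitone_distance(modern, baroque):+.2f} semitones")
        print(f"440 -> 392 Hz: {semitone_distance(modern, french):+.2f} semitones")
        # One exact equal-tempered semitone below 440 Hz:
        print(f"440 / 2**(1/12) = {modern / 2 ** (1 / 12):.1f} Hz")

Running it shows that 415 Hz sits almost exactly one semitone, and 392 Hz almost exactly two semitones, below modern concert pitch.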
Some modern instruments are built with keyboards that can shift sideways, allowing the player to align the mechanism with strings at either A = 415 Hz or A = 440 Hz. If a tuning other than equal temperament is used, the instrument requires retuning once the keyboard is shifted.[14]
The great bulk of the standard repertoire for the harpsichord was written during its first historical flowering, the Renaissance and Baroque eras.
The first music written specifically for solo harpsichord was published around the early 16th century. Composers who wrote solo harpsichord music were numerous during the whole Baroque era in European countries including Italy, Germany, England and France. Solo harpsichord compositions included dance suites, fantasias, and fugues. Among the most famous composers who wrote for the harpsichord were the members of English virginal school of the late Renaissance, notably William Byrd (ca. 1540–1623). In France, a great number of highly characteristic solo works were created and compiled into four books of ordres by François Couperin (1668–1733). Domenico Scarlatti (1685–1757) began his career in Italy but wrote most of his solo harpsichord works in Spain; his most famous work is his series of 555 harpsichord sonatas. Perhaps the most celebrated composers who wrote for the harpsichord were Georg Friedrich Händel (1685–1759), who composed numerous suites for harpsichord, and especially J. S. Bach (1685–1750), whose solo works (for instance, the Well-Tempered Clavier and the Goldberg Variations), continue to be performed very widely, often on the piano. Bach was also a pioneer of the harpsichord concerto, both in works designated as such, and in the harpsichord part of his Fifth Brandenburg Concerto.
Two of the most prominent composers of the Classical era, Joseph Haydn (1732–1809) and Wolfgang Amadeus Mozart (1756–1791), wrote harpsichord music. For both, the instrument featured in the earlier period of their careers, and although they had come into contact with the piano later on, they nonetheless continued to play the harpsichord and clavichord for the rest of their lives. Mozart was noted to have played his second last keyboard concerto (the "Coronation") on the harpsichord.[citation needed]
Through the 19th century, the harpsichord was almost completely supplanted by the piano. In the 20th century, composers returned to the instrument, as they sought out variation in the sounds available to them. Under the influence of Arnold Dolmetsch, the harpsichordists Violet Gordon-Woodhouse (1872–1951) and in France, Wanda Landowska (1879–1959), were at the forefront of the instrument's renaissance. Concertos for the instrument were written by Francis Poulenc (the Concert champêtre, 1927–28), and Manuel de Falla. Elliott Carter's Double Concerto is scored for harpsichord, piano and two chamber orchestras. For a detailed account of music composed for the revived harpsichord, see Contemporary harpsichord.
en/1176.html.txt
ADDED
@@ -0,0 +1,199 @@
A USB flash drive[note 1] is a data storage device that includes flash memory with an integrated USB interface. It is typically removable, rewritable and much smaller than an optical disc. Most weigh less than 30 g (1 oz). Since first appearing on the market in late 2000, as with virtually all other computer memory devices, storage capacities have risen while prices have dropped. As of March 2016[update], flash drives with anywhere from 8 to 256 gigabytes (GB[2]) were frequently sold, while 512 GB and 1 terabyte (TB[3]) units were less frequent.[4][5] As of 2018, 2 TB flash drives were the largest available in terms of storage capacity.[6] Some allow up to 100,000 write/erase cycles, depending on the exact type of memory chip used, and are thought to last between 10 and 100 years under normal circumstances (shelf storage time[7]).
USB flash drives are often used for storage, data back-up, and the transfer of computer files. Compared with floppy disks or CDs, they are smaller, faster, have significantly more capacity, and are more durable due to a lack of moving parts. Additionally, they are immune to electromagnetic interference (unlike floppy disks), and are unharmed by surface scratches (unlike CDs). Until about 2005, most desktop and laptop computers were supplied with floppy disk drives in addition to USB ports, but floppy disk drives became obsolete after widespread adoption of USB ports and the larger USB drive capacity compared to the "1.44 megabyte" (1440 kibibyte) 3.5-inch floppy disk.
USB flash drives use the USB mass storage device class standard, supported natively by modern operating systems such as Windows, Linux, macOS and other Unix-like systems, as well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer faster than much larger optical disc drives like CD-RW or DVD-RW drives and can be read by many other systems such as the Xbox One, PlayStation 4, DVD players, automobile entertainment systems, and in a number of handheld devices such as smartphones and tablet computers, though the electronically similar SD card is better suited for those devices.
A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case, which can be carried in a pocket or on a key chain, for example. The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if unprotected. Most flash drives use a standard type-A USB connection allowing connection with a port on a personal computer, but drives for other interfaces also exist. USB flash drives draw power from the computer via the USB connection. Some devices combine the functionality of a portable media player with USB flash storage; they require a battery only when used to play music on the go.
The basis for USB flash drives is flash memory, a type of floating-gate semiconductor memory invented by Fujio Masuoka in the early 1980s. Flash memory uses floating-gate MOSFET transistors as memory cells.[8][9]
M-Systems, an Israeli company, were granted a US patent on November 14, 2000, titled "Architecture for a [USB]-based Flash Disk", and crediting the invention to Amir Ban, Dov Moran and Oron Ogdan, all M-Systems employees at the time. The patent application was filed by M-Systems in April 1999.[10][1][11] Later in 1999, IBM filed an invention disclosure by one of its employees.[1] Flash drives were sold initially by Trek 2000 International, a company in Singapore, which began selling in early 2000. IBM became the first to sell USB flash drives in the United States in 2000.[1] The initial storage capacity of a flash drive was 8 MB.[12][11] Another version of the flash drive, described as a pen drive, was also developed. Pua Khein-Seng from Malaysia has been credited with this invention.[13] Patent disputes have arisen over the years, with competing companies including Singaporean company Trek Technology and Chinese company Netac Technology, attempting to enforce their patents.[14] Trek won a suit in Singapore,[15][16] but has lost battles in other countries.[17] Netac Technology has brought lawsuits against PNY Technologies,[18] Lenovo,[19] aigo,[20] Sony,[21][22][23] and Taiwan's Acer and Tai Guen Enterprise Co.[23]
Flash drives are often measured by the rate at which they transfer data. Transfer rates may be given in megabytes per second (MB/s), megabits per second (Mbit/s), or in optical drive multipliers such as "180X" (180 times 150 KiB/s).[24] File transfer rates vary considerably among devices. Second generation flash drives have claimed to read at up to 30 MB/s and write at about half that rate, which was about 20 times faster than the theoretical transfer rate achievable by the previous model, USB 1.1, which is limited to 12 Mbit/s (1.5 MB/s) with accounted overhead.[25] The effective transfer rate of a device is significantly affected by the data access pattern.[26]
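For readers comparing the units quoted above, the following sketch (an added illustration; the "1X = 150 KiB/s" base is the conventional CD rate mentioned in the text) converts between Mbit/s, MB/s and optical-drive "X" multipliers:

    # Rough unit conversions for flash-drive transfer-rate figures (illustrative sketch).

    CD_BASE_BYTES_PER_S = 150 * 1024  # "1X" optical-drive rate in bytes per second

    def mbit_s_to_mb_s(mbit_per_s: float) -> float:
        """Convert megabits per second to megabytes per second (8 bits per byte)."""
        return mbit_per_s / 8

    def x_rating_to_mb_s(x_rating: float) -> float:
        """Convert an optical-drive style multiplier such as '180X' to MB/s."""
        return x_rating * CD_BASE_BYTES_PER_S / 1_000_000

    if __name__ == "__main__":
        print(f"USB 1.1 (12 Mbit/s)  ~ {mbit_s_to_mb_s(12):.1f} MB/s before protocol overhead")
        print(f"USB 2.0 (480 Mbit/s) ~ {mbit_s_to_mb_s(480):.0f} MB/s before protocol overhead")
        print(f"'180X' rating        ~ {x_rating_to_mb_s(180):.1f} MB/s")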
By 2002, USB flash drives had USB 2.0 connectivity, which has 480 Mbit/s as the transfer rate upper bound; after accounting for the protocol overhead that translates to a 35 MB/s effective throughput.[27] That same year, Intel sparked widespread use of second generation USB by including them within its laptops.[28]
Third generation USB flash drives were announced in late 2008 and became available for purchase in 2010.[citation needed] Like USB 2.0 before it, USB 3.0 dramatically improved data transfer rates compared to its predecessor. The USB 3.0 interface specified transfer rates up to 5 Gbit/s (625 MB/s), compared to USB 2.0's 480 Mbit/s (60 MB/s).[citation needed] By 2010 the maximum available storage capacity for the devices had reached upwards of 128 GB.[11] USB 3.0 ports were slow to appear in laptops; as of 2010, the majority of laptop models still offered only USB 2.0.[28]
In January 2013, tech company Kingston released a flash drive with 1 TB of storage.[29] The first USB 3.1 Type-C flash drives, with read/write speeds of around 530 MB/s, were announced in March 2015.[30] As of July 2016, flash drives with 8 to 256 GB capacity were sold more frequently than those with capacities between 512 GB and 1 TB.[4][5] In 2017, Kingston Technology announced the release of a 2 TB flash drive.[31] In 2018, SanDisk announced a 1 TB USB-C flash drive, the smallest of its kind.[32]
Internals of a typical USB flash drive
On a USB flash drive, one end of the device is fitted with a single Standard-A USB plug; some flash drives additionally offer a micro USB plug, facilitating data transfers between different devices.[33]
Inside the plastic casing is a small printed circuit board, which has some power circuitry and a small number of surface-mounted integrated circuits (ICs).[citation needed] Typically, one of these ICs provides an interface between the USB connector and the onboard memory, while the other is the flash memory. Drives typically use the USB mass storage device class to communicate with the host.[34]
Flash memory combines a number of older technologies, with lower cost, lower power consumption and small size made possible by advances in semiconductor device fabrication technology. The memory storage was based on earlier EPROM and EEPROM technologies. These had limited capacity, were slow for both reading and writing, required complex high-voltage drive circuitry, and could be re-written only after erasing the entire contents of the chip.
Hardware designers later developed EEPROMs with the erasure region broken up into smaller "fields" that could be erased individually without affecting the others. Altering the contents of a particular memory location involved copying the entire field into an off-chip buffer memory, erasing the field, modifying the data as required in the buffer, and re-writing it into the same field. This required considerable computer support, and PC-based EEPROM flash memory systems often carried their own dedicated microprocessor system. Flash drives are more or less a miniaturized version of this.
The development of high-speed serial data interfaces such as USB made semiconductor memory systems with serially accessed storage viable, and the simultaneous development of small, high-speed, low-power microprocessor systems allowed this to be incorporated into extremely compact systems. Serial access requires far fewer electrical connections for the memory chips than does parallel access, which has simplified the manufacture of multi-gigabyte drives.
Computers access modern[update] flash memory systems very much like hard disk drives, where the controller system has full control over where information is actually stored. The actual EEPROM writing and erasure processes are, however, still very similar to the earlier systems described above.
Many low-cost MP3 players simply add extra software and a battery to a standard flash memory control microprocessor so it can also serve as a music playback decoder. Most of these players can also be used as a conventional flash drive, for storing files of any type.
There are typically five parts to a flash drive:
The typical device may also include:
Most USB flash drives weigh less than 30 g (1 oz).[37] While some manufacturers are competing for the smallest size,[38] with the biggest memory, offering drives only a few millimeters larger than the USB plug itself,[39] some manufacturers differentiate their products by using elaborate housings, which are often bulky and make the drive difficult to connect to the USB port. Because the USB port connectors on a computer housing are often closely spaced, plugging a flash drive into a USB port may block an adjacent port. Such devices may carry the USB logo only if sold with a separate extension cable. Such cables are USB-compatible but do not conform to the USB standard.[40][41]
USB flash drives have been integrated into other commonly carried items, such as watches, pens, laser pointers, and even the Swiss Army Knife; others have been fitted with novelty cases such as toy cars or Lego bricks. USB flash drives with images of dragons, cats or aliens are very popular in Asia.[42] The small size, robustness and cheapness of USB flash drives make them an increasingly popular peripheral for case modding.
Most flash drives ship preformatted with the FAT32, or exFAT file systems. The ubiquity of the FAT32 file system allows the drive to be accessed on virtually any host device with USB support. Also, standard FAT maintenance utilities (e.g., ScanDisk) can be used to repair or retrieve corrupted data. However, because a flash drive appears as a USB-connected hard drive to the host system, the drive can be reformatted to any file system supported by the host operating system.
The memory in flash drives is commonly engineered with multi-level cell (MLC) based memory that is good for around 3,000-5,000 program-erase cycles,[46] but some flash drives have single-level cell (SLC) based memory that is good for around 100,000 writes. There is virtually no limit to the number of reads from such flash memory, so a well-worn USB drive may be write-protected to help ensure the life of individual cells.
Estimation of flash memory endurance is a challenging subject that depends on the SLC/MLC/TLC memory type, size of the flash memory chips, and actual usage pattern. As a result, a USB flash drive can last from a few days to several hundred years.[47]
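As a rough illustration of why such estimates span so wide a range, the sketch below (an assumption-laden back-of-the-envelope model added here, not a cited method; it assumes perfect wear levelling and a fixed write-amplification factor) estimates lifetime from capacity, rated program-erase cycles and daily write volume:

    def estimated_lifetime_years(capacity_gb: float,
                                 pe_cycles: int,
                                 writes_gb_per_day: float,
                                 write_amplification: float = 2.0) -> float:
        """Years until the rated program/erase cycles are exhausted (simplified model)."""
        total_writable_gb = capacity_gb * pe_cycles / write_amplification
        return total_writable_gb / writes_gb_per_day / 365

    if __name__ == "__main__":
        # Hypothetical 32 GB MLC drive (~3,000 cycles) with 4 GB written per day
        print(f"MLC: ~{estimated_lifetime_years(32, 3_000, 4):.0f} years")
        # Same usage pattern on SLC memory rated for ~100,000 cycles
        print(f"SLC: ~{estimated_lifetime_years(32, 100_000, 4):.0f} years")

Changing any of these inputs by an order of magnitude changes the estimate by the same factor, which is consistent with the very wide range quoted above.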
Regardless of the endurance of the memory itself, the USB connector hardware is specified to withstand only around 1,500 insert-removal cycles.[48]
Counterfeit USB flash drives are sometimes sold with claims of having higher capacities than they actually have. These are typically low capacity USB drives whose flash memory controller firmware is modified so that they emulate larger capacity drives (for example, a 2 GB drive being marketed as a 64 GB drive). When plugged into a computer, they report themselves as being the larger capacity they were sold as, but when data is written to them, either the write fails, the drive freezes up, or it overwrites existing data. Software tools exist to check and detect fake USB drives,[49][50] and in some cases it is possible to repair these devices to remove the false capacity information and use its real storage limit.[51]
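Detection tools of this kind generally work by writing known data across the drive's claimed capacity and verifying that it can be read back intact. The sketch below illustrates the idea only; it is not one of the cited tools, the mount point in the example is hypothetical, and it requires Python 3.9+ for random.randbytes:

    import os
    import random

    BLOCK_BYTES = 16 * 1024 * 1024  # 16 MB test blocks

    def block_data(i: int) -> bytes:
        """Reproducible pseudo-random contents for test block i."""
        return random.Random(i).randbytes(BLOCK_BYTES)

    def check_capacity(mount_point: str, blocks: int) -> bool:
        """Write test files to the mounted drive, read them back, and verify."""
        paths = [os.path.join(mount_point, f"testblock_{i:05d}.bin") for i in range(blocks)]
        try:
            for i, p in enumerate(paths):      # write phase
                with open(p, "wb") as f:
                    f.write(block_data(i))
            for i, p in enumerate(paths):      # read-back phase
                with open(p, "rb") as f:
                    if f.read() != block_data(i):
                        return False           # data was dropped or overwritten
            return True
        finally:
            for p in paths:                    # remove test files afterwards
                if os.path.exists(p):
                    os.remove(p)

    # Example with a hypothetical mount point, testing 1 GB (64 x 16 MB):
    # check_capacity("/media/usbdrive", blocks=64)

A drive that silently wraps writes around or drops them once its real capacity is exceeded fails the read-back comparison.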
Transfer speeds are technically determined by the slowest of three factors: the USB version used, the speed in which the USB controller device can read and write data onto the flash memory, and the speed of the hardware bus, especially in the case of add-on USB ports.
USB flash drives usually specify their read and write speeds in megabytes per second (MB/s); read speed is usually faster. These speeds are for optimal conditions; real-world speeds are usually slower. In particular, circumstances that often lead to speeds much lower than advertised are transfer (particularly writing) of many small files rather than a few very large ones, and mixed reading and writing to the same device.
In a typical well-conducted review of a number of high-performance USB 3.0 drives, a drive that could read large files at 68 MB/s and write them at 46 MB/s could manage only 14 MB/s and 0.3 MB/s with many small files. When combining streaming reads and writes, the speed of another drive, which could read at 92 MB/s and write at 70 MB/s, was 8 MB/s. These ratios vary radically from one drive to another; some drives could write small files at over 10% of the speed for large ones. The examples given are chosen to illustrate extremes.[52]
The most common use of flash drives is to transport and store personal files, such as documents, pictures and videos. Individuals also store medical information on flash drives for emergencies and disaster preparation.
With wide deployment(s) of flash drives being used in various environments (secured or otherwise), the issue of data and information security remains important. The use of biometrics and encryption is becoming the norm with the need for increased security for data; on-the-fly encryption systems are particularly useful in this regard, as they can transparently encrypt large amounts of data. In some cases a secure USB drive may use a hardware-based encryption mechanism that uses a hardware module instead of software for strongly encrypting data. IEEE 1667 is an attempt to create a generic authentication platform for USB drives. It is supported in Windows 7 and Windows Vista (Service Pack 2 with a hotfix).[53]
A recent development for the use of a USB Flash Drive as an application carrier is to carry the Computer Online Forensic Evidence Extractor (COFEE) application developed by Microsoft. COFEE is a set of applications designed to search for and extract digital evidence on computers confiscated from suspects.[54] Forensic software is required not to alter, in any way, the information stored on the computer being examined. Other forensic suites run from CD-ROM or DVD-ROM, but cannot store data on the media they are run from (although they can write to other attached devices, such as external drives or memory sticks).
Motherboard firmware (including BIOS and UEFI) can be updated using USB flash drives. Usually, the new firmware image is downloaded and placed onto a FAT16- or FAT32-formatted USB flash drive connected to the system that is to be updated, and the path to the new firmware image is selected within the update component of the system's firmware.[55] Some motherboard manufacturers also allow such updates to be performed without entering the system's firmware update component, making it possible to easily recover systems with corrupted firmware.[56]
Also, HP has introduced a USB floppy drive key, which is an ordinary USB flash drive with additional possibility for performing floppy drive emulation, allowing its usage for updating system firmware where direct usage of USB flash drives is not supported. Desired mode of operation (either regular USB mass storage device or of floppy drive emulation) is made selectable by a sliding switch on the device's housing.[57][58]
Most current PC firmware permits booting from a USB drive, allowing the launch of an operating system from a bootable flash drive. Such a configuration is known as a Live USB.[59]
Original flash memory designs had very limited estimated lifetimes. The failure mechanism for flash memory cells is analogous to a metal fatigue mode; the device fails by refusing to write new data to specific cells that have been subject to many read-write cycles over the device's lifetime. Premature failure of a "live USB" could be circumvented by using a flash drive with a write-lock switch as a WORM device, identical to a live CD. Originally, this potential failure mode limited the use of "live USB" system to special-purpose applications or temporary tasks, such as:
As of 2011[update], newer flash memory designs have much higher estimated lifetimes. Several manufacturers are now offering warranties of 5 years or more. Such warranties should make the device more attractive for more applications. By reducing the probability of the device's premature failure, flash memory devices can now be considered for use where a magnetic disk would normally have been required. Flash drives have also experienced an exponential growth in their storage capacity over time (following the Moore's Law growth curve). As of 2013, single-packaged devices with capacities of 1 TB are readily available,[60] and devices with 16 GB capacity are very economical. Storage capacities in this range have traditionally been considered to offer adequate space, because they allow enough space for both the operating system software and some free space for the user's data.
Installers of some operating systems can be stored to a flash drive instead of a CD or DVD, including various Linux distributions, Windows 7 and newer versions, and macOS. In particular, Mac OS X 10.7 is distributed only online, through the Mac App Store, or on flash drives; for a MacBook Air with Boot Camp and no external optical drive, a flash drive can be used to run installation of Windows or Linux.
However, for installation of Windows 7 and later versions, a USB flash drive that presents itself to the PC's firmware as a hard disk drive (hard disk drive emulation) is recommended in order to boot from it. Transcend is the only manufacturer of USB flash drives with such a feature.
Furthermore, for installation of Windows XP, a USB flash drive with a capacity of at most 2 GB is recommended in order to boot from it.
In Windows Vista and later versions, the ReadyBoost feature allows flash drives (from 4 GB in the case of Windows Vista) to augment operating system memory.[61]
Flash drives are used to carry applications that run on the host computer without requiring installation. While any standalone application can in principle be used this way, many programs store data, configuration information, etc. on the hard drive and registry of the host computer.
The U3 company works with drive makers (parent company SanDisk as well as others) to deliver custom versions of applications designed for Microsoft Windows from a special flash drive; U3-compatible devices are designed to autoload a menu when plugged into a computer running Windows. Applications must be modified for the U3 platform not to leave any data on the host machine. U3 also provides a software framework for independent software vendors interested in their platform.
Ceedo is an alternative product, with the key difference that it does not require Windows applications to be modified in order for them to be carried and run on the drive.
Similarly, other application virtualization solutions and portable application creators, such as VMware ThinApp (for Windows) or RUNZ (for Linux) can be used to run software from a flash drive without installation.
In October 2010, Apple Inc. released its newest iteration of the MacBook Air, which had the system's restore files contained on a USB flash drive rather than the traditional install CDs, since the Air does not come with an optical drive.[62]
A wide range of portable applications, all free of charge and able to run on a Windows computer without storing anything on the host computer's drives or registry, can be found in the list of portable software.
Some value-added resellers are now using a flash drive as part of small-business turnkey solutions (e.g., point-of-sale systems). The drive is used as a backup medium: at the close of business each night, the drive is inserted, and a database backup is saved to the drive. Alternatively, the drive can be left inserted through the business day, and data regularly updated. In either case, the drive is removed at night and taken offsite.
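A minimal sketch of such a nightly backup job, assuming the database has already been dumped to a local file and the flash drive is mounted at a known path (both paths below are hypothetical):

import shutil
from datetime import datetime
from pathlib import Path

DUMP = Path("/var/backups/pos_db.sqlite")   # hypothetical nightly database dump
USB_MOUNT = Path("/media/backup_stick")     # hypothetical mount point of the flash drive

def backup_to_usb() -> Path:
    # Copy the latest dump onto the flash drive under a timestamped name.
    if not USB_MOUNT.is_dir():
        raise RuntimeError("backup drive is not mounted")
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = USB_MOUNT / f"pos_db-{stamp}.sqlite"
    shutil.copy2(DUMP, target)
    return target

if __name__ == "__main__":
    print("Backed up to", backup_to_usb())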
Flash drives also have disadvantages. They are easy to lose and facilitate the making of unauthorized copies of data. A lesser drawback is that they offer only around one tenth the capacity of hard drives manufactured in the same period.
Many companies make small solid-state digital audio players, essentially producing flash drives with sound output and a simple user interface. Examples include the Creative MuVo, Philips GoGear and the first generation iPod shuffle. Some of these players are true USB flash drives as well as music players; others do not support general-purpose data storage. Other applications requiring storage, such as digital voice or sound recording, can also be combined with flash drive functionality.[63]
Many of the smallest players are powered by a permanently fitted rechargeable battery, charged from the USB interface. Fancier devices that function as a digital audio player have a USB host port (typically a female type-A connector).
Digital audio files can be transported from one computer to another like any other file, and played on a compatible media player (with caveats for DRM-locked files). In addition, many home Hi-Fi and car stereo head units are now equipped with a USB port. This allows a USB flash drive containing media files in a variety of formats to be played directly on devices which support the format. Some LCD monitors for consumer HDTV viewing have a dedicated USB port through which music and video files can also be played without use of a personal computer.
Artists have sold or given away USB flash drives, with the first instance believed to be in 2004 when the German punk band Wizo released the Stick EP, only as a USB drive. In addition to five high-bitrate MP3s, it also included a video, pictures, lyrics, and guitar tablature.[64] Subsequently, artists including Nine Inch Nails and Kylie Minogue[65] have released music and promotional material on USB flash drives. The first USB album to be released in the UK was Kiss Does... Rave, a compilation album released by the Kiss Network in April 2007.[66]
The availability of inexpensive flash drives has enabled them to be used for promotional and marketing purposes, particularly within technical and computer-industry circles (e.g., technology trade shows). They may be given away for free, sold at less than wholesale price, or included as a bonus with another purchased product.
Usually, such drives will be custom-stamped with a company's logo, as a form of advertising. The drive may be blank, or preloaded with graphics, documentation, web links, Flash animation or other multimedia, and free or demonstration software. Some preloaded drives are read-only, while others are configured with both read-only and user-writable segments. Such dual-partition drives are more expensive.[67]
Flash drives can be set up to automatically launch stored presentations, websites, articles, and any other software immediately on insertion of the drive using the Microsoft Windows AutoRun feature.[68] Autorunning software this way does not work on all computers, and it is normally disabled by security-conscious users.
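The AutoRun mechanism is driven by a plain-text autorun.inf file in the root of the drive; the sketch below generates one under the assumptions that the drive appears under a hypothetical letter E: and that a presentation.exe has been preloaded (recent Windows versions ignore the open directive for USB media):

from pathlib import Path

DRIVE_ROOT = Path("E:/")   # hypothetical drive letter assigned to the flash drive

# Classic autorun.inf directives; modern Windows releases ignore 'open' for USB drives.
AUTORUN_INF = """[autorun]
open=presentation.exe
icon=company.ico
label=Product Demo
"""

def write_autorun() -> None:
    (DRIVE_ROOT / "autorun.inf").write_text(AUTORUN_INF, encoding="ascii")
    print("autorun.inf written; behaviour depends on the host's AutoRun policy.")

if __name__ == "__main__":
    write_autorun()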
In the arcade game In the Groove and more commonly In The Groove 2, flash drives are used to transfer high scores, screenshots, dance edits, and combos throughout sessions. As of software revision 21 (R21), players can also store custom songs and play them on any machine on which this feature is enabled. While use of flash drives is common, the drive must be Linux compatible.
In the arcade games Pump it Up NX2 and Pump it Up NXA, a specially produced flash drive is used as a "save file" for unlocked songs, as well as for progressing in the WorldMax and Brain Shower sections of the game.
In the arcade game Dance Dance Revolution X, an exclusive USB flash drive was made by Konami for the purpose of the link feature from its Sony PlayStation 2 counterpart. However, any USB flash drive can be used in this arcade game.
Flash drives use little power, have no fragile moving parts, and for most capacities are small and light. Data stored on flash drives is impervious to mechanical shock, magnetic fields, scratches and dust. These properties make them suitable for transporting data from place to place and keeping the data readily at hand.
Flash drives also store data densely compared to many removable media. In mid-2009, 256 GB drives became available, able to hold many times more data than a DVD (roughly 54 single-layer 4.7 GB DVDs) or even a Blu-ray disc (about 10 single-layer 25 GB discs).[69]
Flash drives implement the USB mass storage device class so that most modern operating systems can read and write to them without installing device drivers. The flash drives present a simple block-structured logical unit to the host operating system, hiding the individual complex implementation details of the various underlying flash memory devices. The operating system can use any file system or block addressing scheme. Some computers can boot up from flash drives.
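Because such a drive appears to the host as an ordinary removable block device, it can be enumerated with generic operating-system interfaces; a small sketch for Linux using the /sys/block attributes (no USB-specific code is needed, and the 512-byte sector size is the kernel's reporting convention):

from pathlib import Path

def removable_block_devices():
    # Yield (name, size in GB) for block devices the kernel flags as removable.
    for dev in Path("/sys/block").iterdir():
        if (dev / "removable").read_text().strip() == "1":
            sectors = int((dev / "size").read_text())   # reported in 512-byte sectors
            yield dev.name, sectors * 512 / 1e9

if __name__ == "__main__":
    for name, size_gb in removable_block_devices():
        print(f"/dev/{name}: {size_gb:.1f} GB removable block device")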
Specially manufactured flash drives are available that have a tough rubber or metal casing designed to be waterproof and virtually "unbreakable". These flash drives retain their memory after being submerged in water, and even through a machine wash. Leaving such a flash drive out to dry completely before allowing current to run through it has been known to result in a working drive with no future problems. Channel Five's Gadget Show cooked one of these flash drives with propane, froze it with dry ice, submerged it in various acidic liquids, ran over it with a jeep and fired it against a wall with a mortar. A company specializing in recovering lost data from computer drives managed to recover all the data on the drive.[70] All data on the other removable storage devices tested, using optical or magnetic technologies, were destroyed.
The applications of current data tape cartridges hardly overlap those of flash drives: on tape, cost per gigabyte is very low for large volumes, but the individual drives and media are expensive. Media have a very high capacity and very fast transfer speeds, but store data sequentially and are very slow for random access of data. While disk-based backup is now the primary medium of choice for most companies, tape backup is still popular for taking data off-site for worst-case scenarios and for very large volumes (more than a few hundreds of TB). See LTO tapes.
Floppy disk drives are rarely fitted to modern computers and are obsolete for normal purposes, although internal and external drives can be fitted if required. Floppy disks may be the method of choice for transferring data to and from very old computers that lack USB support or that boot from floppy disks, and so they are sometimes used to change the firmware on, for example, BIOS chips. Devices with removable storage, such as older Yamaha music keyboards, also depend on floppy disks and require a computer to prepare the disks for them. Newer devices are built with USB flash drive support.
Floppy disk hardware emulators exist which use the internal connections and physical form factor of a floppy disk drive, but store the data on a USB flash drive in solid-state form; the drive's storage can be divided into a number of individual virtual floppy disk images accessed through individual data channels.
The various writable and re-writable forms of CD and DVD are portable storage media supported by the vast majority of computers as of 2008. CD-R, DVD-R, and DVD+R can be written to only once, RW varieties up to about 1,000 erase/write cycles, while modern NAND-based flash drives often last for 500,000 or more erase/write cycles. DVD-RAM discs are the most suitable optical discs for data storage involving much rewriting.
Optical storage devices are among the cheapest methods of mass data storage after the hard drive. They are slower than their flash-based counterparts. Standard 120 mm optical discs are larger than flash drives and more subject to damage. Smaller optical media do exist, such as business card CD-Rs which have the same dimensions as a credit card, and the slightly less convenient but higher capacity 80 mm recordable MiniCD and Mini DVD. The small discs are more expensive than the standard size, and do not work in all drives.
Universal Disk Format (UDF) version 1.50 and above has facilities to support rewritable discs like sparing tables and virtual allocation tables, spreading usage over the entire surface of a disc and maximising life, but many older operating systems do not support this format. Packet-writing utilities such as DirectCD and InCD are available but produce discs that are not universally readable (although based on the UDF standard). The Mount Rainier standard addresses this shortcoming in CD-RW media by running the older file systems on top of it and performing defect management for those standards, but it requires support from both the CD/DVD burner and the operating system. Many drives made today do not support Mount Rainier, and many older operating systems such as Windows XP and below, and Linux kernels older than 2.6.2, do not support it (later versions do). Essentially CDs/DVDs are a good way to record a great deal of information cheaply and have the advantage of being readable by most standalone players, but they are poor at making ongoing small changes to a large collection of information. Flash drives' ability to do this is their major advantage over optical media.
Flash memory cards, e.g., Secure Digital cards, are available in various formats and capacities, and are used by many consumer devices. However, while virtually all PCs have USB ports, allowing the use of USB flash drives, memory card readers are not commonly supplied as standard equipment (particularly with desktop computers). Although inexpensive card readers are available that read many common formats, this results in two pieces of portable equipment (card plus reader) rather than one.
Some manufacturers, aiming at a "best of both worlds" solution, have produced card readers that approach the size and form of USB flash drives (e.g., Kingston MobileLite,[71] SanDisk MobileMate[72]). These readers are limited to a specific subset of memory card formats (such as SD, microSD, or Memory Stick), and often completely enclose the card, offering durability and portability approaching, if not quite equal to, that of a flash drive. Although the combined cost of a mini-reader and a memory card is usually slightly higher than a USB flash drive of comparable capacity, the reader + card solution offers additional flexibility of use, and virtually "unlimited" capacity. The ubiquity of SD cards is such that, circa 2011, due to economies of scale, their price is now less than an equivalent-capacity USB flash drive, even with the added cost of a USB SD card reader.
An additional advantage of memory cards is that many consumer devices (e.g., digital cameras, portable music players) cannot make use of USB flash drives (even if the device has a USB port), whereas the memory cards used by the devices can be read by PCs with a card reader.
Particularly with the advent of USB, external hard disks have become widely available and inexpensive. External hard disk drives currently cost less per gigabyte than flash drives and are available in larger capacities. Some hard drives support alternative and faster interfaces than USB 2.0 (e.g., Thunderbolt, FireWire and eSATA). For consecutive sector writes and reads (for example, from an unfragmented file), most hard drives can provide a much higher sustained data rate than current NAND flash memory, though mechanical latencies seriously impact hard drive performance.
Unlike solid-state memory, hard drives are susceptible to damage by shock (e.g., a short fall) and vibration, have limitations on use at high altitude, and although they are shielded by their casings, they are vulnerable when exposed to strong magnetic fields. In terms of overall mass, hard drives are usually larger and heavier than flash drives; however, hard disks sometimes weigh less per unit of storage. Like flash drives, hard disks also suffer from file fragmentation, which can reduce access speed.
Audio tape cassettes and high-capacity floppy disks (e.g., Imation SuperDisk), and other forms of drives with removable magnetic media, such as the Iomega Zip and Jaz drives, are now largely obsolete and rarely used. There are products on today's market that emulate these legacy drives, for both tape and disk (for example SCSI1/SCSI2, SASI, magneto-optic, Ricoh ZIP, Jaz, IBM3590/Fujitsu 3490E and Bernoulli), using state-of-the-art CompactFlash storage devices such as CF2SCSI.
As highly portable media, USB flash drives are easily lost or stolen. All USB flash drives can have their contents encrypted using third-party disk encryption software, which can often be run directly from the USB drive without installation (for example, FreeOTFE), although some, such as BitLocker, require the user to have administrative rights on every computer it is run on.
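As a simple software-only illustration (not the mechanism used by FreeOTFE or BitLocker), a file can be encrypted before it is copied to the stick; this sketch assumes the third-party cryptography package and hypothetical file paths:

from pathlib import Path
from cryptography.fernet import Fernet   # third-party package: cryptography

SOURCE = Path("report.docx")                        # hypothetical sensitive file
TARGET = Path("/media/usbstick/report.docx.enc")    # hypothetical flash drive mount point

def encrypt_to_usb() -> bytes:
    # Encrypt SOURCE with a fresh symmetric key and write only the ciphertext to the drive.
    key = Fernet.generate_key()          # keep this key somewhere other than the drive itself
    token = Fernet(key).encrypt(SOURCE.read_bytes())
    TARGET.write_bytes(token)
    return key

if __name__ == "__main__":
    key = encrypt_to_usb()
    print("Encrypted copy written; decryption key:", key.decode())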
Archiving software can achieve a similar result by creating encrypted ZIP or RAR files.[73][74]
Some manufacturers have produced USB flash drives which use hardware-based encryption as part of the design,[75] removing the need for third-party encryption software. In limited circumstances these drives have been shown to have security problems, and are typically more expensive than software-based systems, which are available for free.
A minority of flash drives support biometric fingerprinting to confirm the user's identity. As of mid-2005[update],[needs update] this was an expensive alternative to standard password protection offered on many new USB flash storage devices. Most fingerprint scanning drives rely upon the host operating system to validate the fingerprint via a software driver, often restricting the drive to Microsoft Windows computers. However, there are USB drives with fingerprint scanners which use controllers that allow access to protected data without any authentication.[76]
Some manufacturers deploy physical authentication tokens in the form of a flash drive. These are used to control access to a sensitive system by containing encryption keys or, more commonly, communicating with security software on the target machine. The system is designed so the target machine will not operate except when the flash drive device is plugged into it. Some of these "PC lock" devices also function as normal flash drives when plugged into other machines.
Like all flash memory devices, flash drives can sustain only a limited number of write and erase cycles before the drive fails.[77][unreliable source?][78] This should be a consideration when using a flash drive to run application software or an operating system. To address this, as well as space limitations, some developers have produced special versions of operating systems (such as Linux in Live USB)[79] or commonplace applications (such as Mozilla Firefox) designed to run from flash drives. These are typically optimized for size and configured to place temporary or intermediate files in the computer's main RAM rather than store them temporarily on the flash drive.
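A minimal sketch of the "keep temporary files in RAM" idea, assuming a Linux live system where /dev/shm is a RAM-backed tmpfs (this is common but not universal):

import tempfile

RAM_DIR = "/dev/shm"   # assumed tmpfs mount; data written here never touches the flash drive

with tempfile.NamedTemporaryFile(dir=RAM_DIR, suffix=".tmp") as scratch:
    scratch.write(b"intermediate data kept in RAM\n")
    scratch.flush()
    print("scratch file lives at", scratch.name)
# The file is removed when the context exits, and no write ever reached the USB drive.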
When used in the same manner as external rotating drives (hard drives, optical drives, or floppy drives), that is, without regard for their underlying technology, USB drives are more likely to fail suddenly: while rotating drives can fail instantaneously, they more frequently give some indication (noises, slowness) that they are about to fail, often with enough advance warning that data can be removed before total failure. USB drives give little or no advance warning of failure. Furthermore, when internal wear-leveling is applied to prolong the life of the flash drive, once even part of the memory fails it can be difficult or impossible to use the remainder of the drive, which differs from magnetic media, where bad sectors can be marked permanently so that they are not used.[80]
Most USB flash drives do not include a write protection mechanism. This feature, which gradually became less common, consists of a switch on the housing of the drive itself, that prevents the host computer from writing or modifying data on the drive. For example, write protection makes a device suitable for repairing virus-contaminated host computers without the risk of infecting a USB flash drive itself. In contrast to SD cards, write protection on USB flash drives (when available) is connected to the drive circuitry, and is handled by the drive itself instead of the host (on SD cards handling of the write-protection notch is optional).
A drawback to the small physical size of flash drives is that they are easily misplaced or otherwise lost. This is a particular problem if they contain sensitive data (see data security). As a consequence, some manufacturers have added encryption hardware to their drives, although software encryption systems which can be used in conjunction with any mass storage medium will achieve the same result. Most drives can be attached to keychains or lanyards. The USB plug is usually retractable or fitted with a removable protective cap.
The storage capacity of USB flash drives reached 2 TB in 2019, while hard disks can be as large as 16 TB. As of 2011, USB flash drives were more expensive per unit of storage than large hard drives, but were less expensive in capacities of a few tens of gigabytes.[81]
Most USB-based flash technology integrates a printed circuit board with a metal tip, which is simply soldered on. As a result, the stress point is where the two pieces join. The quality control of some manufacturers does not ensure a proper solder temperature, further weakening the stress point.[82][83] Since many flash drives stick out from computers, they are likely to be bumped repeatedly and may break at the stress point. Most of the time, a break at the stress point tears the joint from the printed circuit board and results in permanent damage. However, some manufacturers produce discreet flash drives that do not stick out, and others use a solid metal or plastic uni-body that has no easily discernible stress point. SD cards serve as a good alternative to USB drives since they can be inserted flush.
Flash drives may present a significant security challenge for some organizations. Their small size and ease of use allows unsupervised visitors or employees to store and smuggle out confidential data with little chance of detection. Both corporate and public computers are vulnerable to attackers connecting a flash drive to a free USB port and using malicious software such as keyboard loggers or packet sniffers.
For computers set up to be bootable from a USB drive, it is possible to use a flash drive containing a bootable portable operating system to access the files of the computer, even if the computer is password protected. The password can then be changed, or it may be possible to crack the password with a password cracking program and gain full control over the computer. Encrypting files provides considerable protection against this type of attack.
USB flash drives may also be used deliberately or unwittingly to transfer malware and autorun worms onto a network.
Some organizations forbid the use of flash drives, and some computers are configured to disable the mounting of USB mass storage devices by users other than administrators; others use third-party software to control USB usage. The use of software allows the administrator to not only provide a USB lock but also control the use of CD-RW, SD cards and other memory devices. This enables companies with policies forbidding the use of USB flash drives in the workplace to enforce these policies. In a lower-tech security solution, some organizations disconnect USB ports inside the computer or fill the USB sockets with epoxy.
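One common administrative control on Windows is to disable the USB mass-storage class driver; a hedged sketch using the USBSTOR service's Start value follows (it must run with administrative rights, and centrally managed group policy or dedicated endpoint software is usually preferable):

import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
DISABLED, DEMAND_START = 4, 3   # service start types: disabled vs. start on demand

def set_usb_storage(enabled: bool) -> None:
    # Enable or disable the Windows USB mass-storage class driver (USBSTOR).
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
        winreg.SetValueEx(k, "Start", 0, winreg.REG_DWORD,
                          DEMAND_START if enabled else DISABLED)

if __name__ == "__main__":
    set_usb_storage(False)   # newly attached USB storage devices will no longer be mounted
    print("USBSTOR driver disabled.")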
Some of the security measures taken to prevent confidential data from being removed have had side effects, such as curtailing users' ability to recharge mobile devices from the USB ports on their systems.
In appearance similar to a USB flash drive, a USB killer is a circuit that charges up capacitors to a high voltage using the power supply pins of a USB port then discharges high voltage pulses onto the data pins. This completely standalone device can instantly and permanently damage or destroy any host hardware that it is connected to.[84]
The New York-based Human Rights Foundation collaborated with Forum 280 and USB Memory Direct to launch the "Flash Drives for Freedom" program.[85][86] The program was created in 2016 to smuggle flash drives with American and South Korean movies and television shows, as well as a copy of the Korean Wikipedia, into North Korea to spread pro-Western sentiment.[87][88]
In 2005, Microsoft was using the term "USB Flash Drive" as the common name for these devices when they introduced the Microsoft USB Flash Drive Manager.[89] Alternative names are commonly used, many of which are trademarks of various manufacturers.
Semiconductor corporations have worked to reduce the cost of the components in a flash drive by integrating various flash drive functions in a single chip, thereby reducing the part-count and overall package-cost.
Flash drive capacities on the market increase continually. High speed has become a standard for modern flash drives. Capacities exceeding 256 GB were available on the market as early as 2009.[69]
Lexar is attempting to introduce a USB FlashCard, which would be a compact USB flash drive intended to replace various kinds of flash memory cards. Pretec introduced a similar card, which also plugs into any USB port, but is just one quarter the thickness of the Lexar model.[90] Until 2008, SanDisk manufactured a product called SD Plus, which was a SecureDigital card with a USB connector.[91]
SanDisk has also introduced a new technology to allow controlled storage and usage of copyrighted materials on flash drives, primarily for use by students. This technology is termed FlashCP.
|
en/1177.html.txt
ADDED
@@ -0,0 +1,199 @@
|
66 |
+
|
67 |
+
A recent development for the use of a USB Flash Drive as an application carrier is to carry the Computer Online Forensic Evidence Extractor (COFEE) application developed by Microsoft. COFEE is a set of applications designed to search for and extract digital evidence on computers confiscated from suspects.[54] Forensic software is required not to alter, in any way, the information stored on the computer being examined. Other forensic suites run from CD-ROM or DVD-ROM, but cannot store data on the media they are run from (although they can write to other attached devices, such as external drives or memory sticks).
|
68 |
+
|
69 |
+
Motherboard firmware (including BIOS and UEFI) can be updated using USB flash drives. Usually, new firmware image is downloaded and placed onto a FAT16- or FAT32-formatted USB flash drive connected to a system which is to be updated, and path to the new firmware image is selected within the update component of system's firmware.[55] Some motherboard manufacturers are also allowing such updates to be performed without the need for entering system's firmware update component, making it possible to easily recover systems with corrupted firmware.[56]
|
70 |
+
|
71 |
+
Also, HP has introduced a USB floppy drive key, which is an ordinary USB flash drive with additional possibility for performing floppy drive emulation, allowing its usage for updating system firmware where direct usage of USB flash drives is not supported. Desired mode of operation (either regular USB mass storage device or of floppy drive emulation) is made selectable by a sliding switch on the device's housing.[57][58]
|
72 |
+
|
73 |
+
Most current PC firmware permits booting from a USB drive, allowing the launch of an operating system from a bootable flash drive. Such a configuration is known as a Live USB.[59]
|
74 |
+
|
75 |
+
Original flash memory designs had very limited estimated lifetimes. The failure mechanism for flash memory cells is analogous to a metal fatigue mode; the device fails by refusing to write new data to specific cells that have been subject to many read-write cycles over the device's lifetime. Premature failure of a "live USB" could be circumvented by using a flash drive with a write-lock switch as a WORM device, identical to a live CD. Originally, this potential failure mode limited the use of "live USB" system to special-purpose applications or temporary tasks, such as:
|
76 |
+
|
77 |
+
As of 2011[update], newer flash memory designs have much higher estimated lifetimes. Several manufacturers are now offering warranties of 5 years or more. Such warranties should make the device more attractive for more applications. By reducing the probability of the device's premature failure, flash memory devices can now be considered for use where a magnetic disk would normally have been required. Flash drives have also experienced an exponential growth in their storage capacity over time (following the Moore's Law growth curve). As of 2013, single-packaged devices with capacities of 1 TB are readily available,[60] and devices with 16 GB capacity are very economical. Storage capacities in this range have traditionally been considered to offer adequate space, because they allow enough space for both the operating system software and some free space for the user's data.
|
78 |
+
|
79 |
+
Installers of some operating systems can be stored to a flash drive instead of a CD or DVD, including various Linux distributions, Windows 7 and newer versions, and macOS. In particular, Mac OS X 10.7 is distributed only online, through the Mac App Store, or on flash drives; for a MacBook Air with Boot Camp and no external optical drive, a flash drive can be used to run installation of Windows or Linux.
|
80 |
+
|
81 |
+
However, for installation of Windows 7 and later versions, using USB flash drive with hard disk drive emulation as detected in PC's firmware is recommended in order to boot from it. Transcend is the only manufacturer of USB flash drives containing such feature.
|
82 |
+
|
83 |
+
Furthermore, for installation of Windows XP, using USB flash drive with storage limit of at most 2 GB is recommended in order to boot from it.
|
84 |
+
|
85 |
+
In Windows Vista and later versions, ReadyBoost feature allows flash drives (from 4 GB in case of Windows Vista) to augment operating system memory.[61]
|
86 |
+
|
87 |
+
Flash drives are used to carry applications that run on the host computer without requiring installation. While any standalone application can in principle be used this way, many programs store data, configuration information, etc. on the hard drive and registry of the host computer.
|
88 |
+
|
89 |
+
The U3 company works with drive makers (parent company SanDisk as well as others) to deliver custom versions of applications designed for Microsoft Windows from a special flash drive; U3-compatible devices are designed to autoload a menu when plugged into a computer running Windows. Applications must be modified for the U3 platform not to leave any data on the host machine. U3 also provides a software framework for independent software vendors interested in their platform.
|
90 |
+
|
91 |
+
Ceedo is an alternative product, with the key difference that it does not require Windows applications to be modified in order for them to be carried and run on the drive.
|
92 |
+
|
93 |
+
Similarly, other application virtualization solutions and portable application creators, such as VMware ThinApp (for Windows) or RUNZ (for Linux) can be used to run software from a flash drive without installation.
|
94 |
+
|
95 |
+
In October 2010, Apple Inc. released their newest iteration of the MacBook Air, which had the system's restore files contained on a USB hard drive rather than the traditional install CDs, due to the Air not coming with an optical drive.[62]
|
96 |
+
|
97 |
+
A wide range of portable applications which are all free of charge, and able to run off a computer running Windows without storing anything on the host computer's drives or registry, can be found in the list of portable software.
|
98 |
+
|
99 |
+
Some value-added resellers are now using a flash drive as part of small-business turnkey solutions (e.g., point-of-sale systems). The drive is used as a backup medium: at the close of business each night, the drive is inserted, and a database backup is saved to the drive. Alternatively, the drive can be left inserted through the business day, and data regularly updated. In either case, the drive is removed at night and taken offsite.
|
100 |
+
|
101 |
+
Flash drives also have disadvantages. They are easy to lose and facilitate unauthorized backups. A lesser setback for flash drives is that they have only one tenth the capacity of hard drives manufactured around their time of distribution.
|
102 |
+
|
103 |
+
Many companies make small solid-state digital audio players, essentially producing flash drives with sound output and a simple user interface. Examples include the Creative MuVo, Philips GoGear and the first generation iPod shuffle. Some of these players are true USB flash drives as well as music players; others do not support general-purpose data storage. Other applications requiring storage, such as digital voice or sound recording, can also be combined with flash drive functionality.[63]
|
104 |
+
|
105 |
+
Many of the smallest players are powered by a permanently fitted rechargeable battery, charged from the USB interface. Fancier devices that function as a digital audio player have a USB host port (type A female typically).
|
106 |
+
|
107 |
+
Digital audio files can be transported from one computer to another like any other file, and played on a compatible media player (with caveats for DRM-locked files). In addition, many home Hi-Fi and car stereo head units are now equipped with a USB port. This allows a USB flash drive containing media files in a variety of formats to be played directly on devices which support the format. Some LCD monitors for consumer HDTV viewing have a dedicated USB port through which music and video files can also be played without use of a personal computer.
|
108 |
+
|
109 |
+
Artists have sold or given away USB flash drives, with the first instance believed to be in 2004 when the German punk band Wizo released the Stick EP, only as a USB drive. In addition to five high-bitrate MP3s, it also included a video, pictures, lyrics, and guitar tablature.[64] Subsequently, artists including Nine Inch Nails and Kylie Minogue[65] have released music and promotional material on USB flash drives. The first USB album to be released in the UK was Kiss Does... Rave, a compilation album released by the Kiss Network in April 2007.[66]
|
110 |
+
|
111 |
+
The availability of inexpensive flash drives has enabled them to be used for promotional and marketing purposes, particularly within technical and computer-industry circles (e.g., technology trade shows). They may be given away for free, sold at less than wholesale price, or included as a bonus with another purchased product.
|
112 |
+
|
113 |
+
Usually, such drives will be custom-stamped with a company's logo, as a form of advertising. The drive may be blank, or preloaded with graphics, documentation, web links, Flash animation or other multimedia, and free or demonstration software. Some preloaded drives are read-only, while others are configured with both read-only and user-writable segments. Such dual-partition drives are more expensive.[67]
|
114 |
+
|
115 |
+
Flash drives can be set up to automatically launch stored presentations, websites, articles, and any other software immediately on insertion of the drive using the Microsoft Windows AutoRun feature.[68] Autorunning software this way does not work on all computers, and it is normally disabled by security-conscious users.
|
116 |
+
|
117 |
+
In the arcade game In the Groove and more commonly In The Groove 2, flash drives are used to transfer high scores, screenshots, dance edits, and combos throughout sessions. As of software revision 21 (R21), players can also store custom songs and play them on any machine on which this feature is enabled. While use of flash drives is common, the drive must be Linux compatible.
|
118 |
+
|
119 |
+
In the arcade games Pump it Up NX2 and Pump it Up NXA, a specially produced flash drive is used as a "save file" for unlocked songs, as well as for progressing in the WorldMax and Brain Shower sections of the game.
|
120 |
+
|
121 |
+
In the arcade game Dance Dance Revolution X, an exclusive USB flash drive was made by Konami for the purpose of the link feature from its Sony PlayStation 2 counterpart. However, any USB flash drive can be used in this arcade game.
|
122 |
+
|
123 |
+
Flash drives use little power, have no fragile moving parts, and for most capacities are small and light. Data stored on flash drives is impervious to mechanical shock, magnetic fields, scratches and dust. These properties make them suitable for transporting data from place to place and keeping the data readily at hand.
|
124 |
+
|
125 |
+
Flash drives also store data densely compared to many removable media. In mid-2009, 256 GB drives became available, with the ability to hold many times more data than a DVD (54 DVDs) or even a Blu-ray (10 BDs).[69]
|
126 |
+
|
127 |
+
Flash drives implement the USB mass storage device class so that most modern operating systems can read and write to them without installing device drivers. The flash drives present a simple block-structured logical unit to the host operating system, hiding the individual complex implementation details of the various underlying flash memory devices. The operating system can use any file system or block addressing scheme. Some computers can boot up from flash drives.
|
128 |
+
|
129 |
+
Specially manufactured flash drives are available that have a tough rubber or metal casing designed to be waterproof and virtually "unbreakable". These flash drives retain their memory after being submerged in water, and even through a machine wash. Leaving such a flash drive out to dry completely before allowing current to run through it has been known to result in a working drive with no future problems. Channel Five's Gadget Show cooked one of these flash drives with propane, froze it with dry ice, submerged it in various acidic liquids, ran over it with a jeep and fired it against a wall with a mortar. A company specializing in recovering lost data from computer drives managed to recover all the data on the drive.[70] All data on the other removable storage devices tested, using optical or magnetic technologies, were destroyed.
|
130 |
+
|
131 |
+
The applications of current data tape cartridges hardly overlap those of flash drives: on tape, cost per gigabyte is very low for large volumes, but the individual drives and media are expensive. Media have a very high capacity and very fast transfer speeds, but store data sequentially and are very slow for random access of data. While disk-based backup is now the primary medium of choice for most companies, tape backup is still popular for taking data off-site for worst-case scenarios and for very large volumes (more than a few hundreds of TB). See LTO tapes.
|
132 |
+
|
133 |
+
Floppy disk drives are rarely fitted to modern computers and are obsolete for normal purposes, although internal and external drives can be fitted if required. Floppy disks may be the method of choice for transferring data to and from very old computers without USB or booting from floppy disks, and so they are sometimes used to change the firmware on, for example, BIOS chips. Devices with removable storage like older Yamaha music keyboards are also dependent on floppy disks, which require computers to process them. Newer devices are built with USB flash drive support.
|
134 |
+
|
135 |
+
Floppy disk hardware emulators exist which effectively utilize the internal connections and physical attributes of a floppy disk drive to utilize a device where a USB flash drive emulates the storage space of a floppy disk in a solid state form, and can be divided into a number of individual virtual floppy disk images using individual data channels.
|
136 |
+
|
137 |
+
The various writable and re-writable forms of CD and DVD are portable storage media supported by the vast majority of computers as of 2008. CD-R, DVD-R, and DVD+R can be written to only once, RW varieties up to about 1,000 erase/write cycles, while modern NAND-based flash drives often last for 500,000 or more erase/write cycles. DVD-RAM discs are the most suitable optical discs for data storage involving much rewriting.
|
138 |
+
|
139 |
+
Optical storage devices are among the cheapest methods of mass data storage after the hard drive. They are slower than their flash-based counterparts. Standard 120 mm optical discs are larger than flash drives and more subject to damage. Smaller optical media do exist, such as business card CD-Rs which have the same dimensions as a credit card, and the slightly less convenient but higher capacity 80 mm recordable MiniCD and Mini DVD. The small discs are more expensive than the standard size, and do not work in all drives.
Universal Disk Format (UDF) version 1.50 and above has facilities to support rewritable discs like sparing tables and virtual allocation tables, spreading usage over the entire surface of a disc and maximising life, but many older operating systems do not support this format. Packet-writing utilities such as DirectCD and InCD are available but produce discs that are not universally readable (although based on the UDF standard). The Mount Rainier standard addresses this shortcoming in CD-RW media by running the older file systems on top of it and performing defect management for those standards, but it requires support from both the CD/DVD burner and the operating system. Many drives made today do not support Mount Rainier, and many older operating systems such as Windows XP and below, and Linux kernels older than 2.6.2, do not support it (later versions do). Essentially CDs/DVDs are a good way to record a great deal of information cheaply and have the advantage of being readable by most standalone players, but they are poor at making ongoing small changes to a large collection of information. Flash drives' ability to do this is their major advantage over optical media.
Flash memory cards, e.g., Secure Digital cards, are available in various formats and capacities, and are used by many consumer devices. However, while virtually all PCs have USB ports, allowing the use of USB flash drives, memory card readers are not commonly supplied as standard equipment (particularly with desktop computers). Although inexpensive card readers are available that read many common formats, this results in two pieces of portable equipment (card plus reader) rather than one.
Some manufacturers, aiming at a "best of both worlds" solution, have produced card readers that approach the size and form of USB flash drives (e.g., Kingston MobileLite,[71] SanDisk MobileMate[72]). These readers are limited to a specific subset of memory card formats (such as SD, microSD, or Memory Stick), and often completely enclose the card, offering durability and portability approaching, if not quite equal to, that of a flash drive. Although the combined cost of a mini-reader and a memory card is usually slightly higher than a USB flash drive of comparable capacity, the reader + card solution offers additional flexibility of use, and virtually "unlimited" capacity. The ubiquity of SD cards is such that, circa 2011, due to economies of scale, their price became lower than that of an equivalent-capacity USB flash drive, even with the added cost of a USB SD card reader.
An additional advantage of memory cards is that many consumer devices (e.g., digital cameras, portable music players) cannot make use of USB flash drives (even if the device has a USB port), whereas the memory cards used by the devices can be read by PCs with a card reader.
Particularly with the advent of USB, external hard disks have become widely available and inexpensive. External hard disk drives currently cost less per gigabyte than flash drives and are available in larger capacities. Some hard drives support alternative and faster interfaces than USB 2.0 (e.g., Thunderbolt, FireWire and eSATA). For consecutive sector writes and reads (for example, from an unfragmented file), most hard drives can provide a much higher sustained data rate than current NAND flash memory, though mechanical latencies seriously impact hard drive performance.
Unlike solid-state memory, hard drives are susceptible to damage by shock (e.g., a short fall) and vibration, have limitations on use at high altitude, and although they are shielded by their casings, they are vulnerable when exposed to strong magnetic fields. In terms of overall mass, hard drives are usually larger and heavier than flash drives; however, hard disks sometimes weigh less per unit of storage. Like flash drives, hard disks also suffer from file fragmentation, which can reduce access speed.
Audio tape cassettes and high-capacity floppy disks (e.g., Imation SuperDisk), and other forms of drives with removable magnetic media, such as the Iomega Zip and Jaz drives, are now largely obsolete and rarely used. Products on the market today can emulate these legacy drives for both tape and disk (for example SCSI1/SCSI2, SASI, magneto-optical, Ricoh ZIP, Jaz, IBM 3590/Fujitsu 3490E and Bernoulli) using state-of-the-art CompactFlash storage devices such as CF2SCSI.
As highly portable media, USB flash drives are easily lost or stolen. All USB flash drives can have their contents encrypted using third-party disk encryption software, which can often be run directly from the USB drive without installation (for example, FreeOTFE), although some, such as BitLocker, require the user to have administrative rights on every computer it is run on.
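As a minimal sketch of the software-based approach (assuming the third-party "cryptography" package; the file names, mount point and key handling are illustrative, not a description of FreeOTFE or BitLocker), files can be encrypted before they are written to the drive:

from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key somewhere safe, never on the drive itself
cipher = Fernet(key)

with open("report.docx", "rb") as src:
    ciphertext = cipher.encrypt(src.read())

# Only the encrypted copy goes onto the (hypothetical) flash drive mount point.
with open("/media/usbdrive/report.docx.enc", "wb") as dst:
    dst.write(ciphertext)

# Reading the file back on another machine requires the same key.
plaintext = cipher.decrypt(ciphertext)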
Archiving software can achieve a similar result by creating encrypted ZIP or RAR files.[73][74]
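A comparable result can be sketched in Python with the third-party pyzipper package (the standard-library zipfile module cannot create encrypted archives); the archive name, password and file names below are placeholders:

import pyzipper  # third-party; supports AES-encrypted ZIP archives

with pyzipper.AESZipFile("backup.zip", "w",
                         compression=pyzipper.ZIP_DEFLATED,
                         encryption=pyzipper.WZ_AES) as zf:
    zf.setpassword(b"correct horse battery staple")
    zf.write("notes.txt")  # add an existing file from the current directory
    zf.writestr("readme.txt", "Encrypted archive kept on the flash drive.")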
Some manufacturers have produced USB flash drives which use hardware-based encryption as part of the design,[75] removing the need for third-party encryption software. In limited circumstances these drives have been shown to have security problems, and are typically more expensive than software-based systems, which are available for free.
A minority of flash drives support biometric fingerprinting to confirm the user's identity. As of mid-2005, this was an expensive alternative to standard password protection offered on many new USB flash storage devices. Most fingerprint scanning drives rely upon the host operating system to validate the fingerprint via a software driver, often restricting the drive to Microsoft Windows computers. However, there are USB drives with fingerprint scanners which use controllers that allow access to protected data without any authentication.[76]
Some manufacturers deploy physical authentication tokens in the form of a flash drive. These are used to control access to a sensitive system by containing encryption keys or, more commonly, communicating with security software on the target machine. The system is designed so the target machine will not operate except when the flash drive device is plugged into it. Some of these "PC lock" devices also function as normal flash drives when plugged into other machines.
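A purely hypothetical sketch of this "PC lock" idea is shown below: security software refuses to start the protected system unless a token file on an attached flash drive carries a valid message authentication code. The paths, file layout and HMAC scheme are assumptions for illustration only, not any vendor's actual protocol.

import hashlib
import hmac
import os
import sys

SHARED_SECRET = b"provisioned-at-install-time"   # assumption: stored by the setup step
TOKEN_PATH = "/media/usbkey/token.bin"           # assumption: mount point of the key drive

def drive_unlocks_machine(path: str) -> bool:
    """Return True if the token file holds a 32-byte challenge plus a valid HMAC over it."""
    if not os.path.exists(path):
        return False
    with open(path, "rb") as f:
        challenge, mac = f.read(32), f.read(32)
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

if not drive_unlocks_machine(TOKEN_PATH):
    sys.exit("Security token not present - refusing to start.")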
Like all flash memory devices, flash drives can sustain only a limited number of write and erase cycles before the drive fails.[77][78] This should be a consideration when using a flash drive to run application software or an operating system. To address this, as well as space limitations, some developers have produced special versions of operating systems (such as Linux in Live USB)[79] or commonplace applications (such as Mozilla Firefox) designed to run from flash drives. These are typically optimized for size and configured to place temporary or intermediate files in the computer's main RAM rather than store them temporarily on the flash drive.
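The RAM-redirection technique can be sketched in a few lines of Python; the example assumes a Linux system where /dev/shm is a RAM-backed tmpfs, falling back to the normal temporary directory elsewhere.

import os
import tempfile

# Prefer a RAM-backed directory so scratch data never wears the flash drive.
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

with tempfile.NamedTemporaryFile(dir=ram_dir, suffix=".tmp") as scratch:
    scratch.write(b"intermediate data that never needs to touch the flash drive")
    scratch.flush()
    print("scratch file lives at", scratch.name)
# The file is removed automatically when the with-block exits.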
When used in the same manner as external rotating drives (hard drives, optical drives, or floppy drives), i.e. in ignorance of their technology, USB drives' failure is more likely to be sudden: while rotating drives can fail instantaneously, they more frequently give some indication (noises, slowness) that they are about to fail, often with enough advance warning that data can be removed before total failure. USB drives give little or no advance warning of failure. Furthermore, when internal wear-leveling is applied to prolong life of the flash drive, once failure of even part of the memory occurs it can be difficult or impossible to use the remainder of the drive, which differs from magnetic media, where bad sectors can be marked permanently not to be used.[80]
Most USB flash drives do not include a write protection mechanism. This feature, which gradually became less common, consists of a switch on the housing of the drive itself, that prevents the host computer from writing or modifying data on the drive. For example, write protection makes a device suitable for repairing virus-contaminated host computers without the risk of infecting a USB flash drive itself. In contrast to SD cards, write protection on USB flash drives (when available) is connected to the drive circuitry, and is handled by the drive itself instead of the host (on SD cards handling of the write-protection notch is optional).
A drawback to the small physical size of flash drives is that they are easily misplaced or otherwise lost. This is a particular problem if they contain sensitive data (see data security). As a consequence, some manufacturers have added encryption hardware to their drives, although software encryption systems which can be used in conjunction with any mass storage medium will achieve the same result. Most drives can be attached to keychains or lanyards. The USB plug is usually retractable or fitted with a removable protective cap.
As of 2019, the storage capacity of USB flash drives reached up to 2 TB, while hard disks can be as large as 16 TB. As of 2011, USB flash drives were more expensive per unit of storage than large hard drives, but less expensive in capacities of a few tens of gigabytes.[81]
Most USB-based flash technology integrates a printed circuit board with a metal tip, which is simply soldered on. As a result, the stress point is where the two pieces join. The quality control of some manufacturers does not ensure a proper solder temperature, further weakening the stress point.[82][83] Since many flash drives stick out from computers, they are likely to be bumped repeatedly and may break at the stress point. Most of the time, a break at the stress point tears the joint from the printed circuit board and results in permanent damage. However, some manufacturers produce discreet flash drives that do not stick out, and others use a solid metal or plastic uni-body that has no easily discernible stress point. SD cards serve as a good alternative to USB drives since they can be inserted flush.
Flash drives may present a significant security challenge for some organizations. Their small size and ease of use allows unsupervised visitors or employees to store and smuggle out confidential data with little chance of detection. Both corporate and public computers are vulnerable to attackers connecting a flash drive to a free USB port and using malicious software such as keyboard loggers or packet sniffers.
For computers set up to be bootable from a USB drive, it is possible to use a flash drive containing a bootable portable operating system to access the files of the computer, even if the computer is password protected. The password can then be changed, or it may be possible to crack the password with a password cracking program and gain full control over the computer. Encrypting files provides considerable protection against this type of attack.
USB flash drives may also be used deliberately or unwittingly to transfer malware and autorun worms onto a network.
Some organizations forbid the use of flash drives, and some computers are configured to disable the mounting of USB mass storage devices by users other than administrators; others use third-party software to control USB usage. The use of software allows the administrator to not only provide a USB lock but also control the use of CD-RW, SD cards and other memory devices. This enables companies with policies forbidding the use of USB flash drives in the workplace to enforce these policies. In a lower-tech security solution, some organizations disconnect USB ports inside the computer or fill the USB sockets with epoxy.
Some of the security measures taken to prevent confidential data from being removed have side effects, such as curtailing users' ability to recharge mobile devices from the USB ports on their systems.
In appearance similar to a USB flash drive, a USB killer is a circuit that charges up capacitors to a high voltage using the power supply pins of a USB port then discharges high voltage pulses onto the data pins. This completely standalone device can instantly and permanently damage or destroy any host hardware that it is connected to.[84]
The New York-based Human Rights Foundation collaborated with Forum 280 and USB Memory Direct to launch the "Flash Drives for Freedom" program.[85][86] The program was created in 2016 to smuggle flash drives with American and South Korean movies and television shows, as well as a copy of the Korean Wikipedia, into North Korea to spread pro-Western sentiment.[87][88]
In 2005, Microsoft was using the term "USB Flash Drive" as the common name for these devices when they introduced the Microsoft USB Flash Drive Manager.[89] Alternative names are commonly used, many of which are trademarks of various manufacturers.
Semiconductor corporations have worked to reduce the cost of the components in a flash drive by integrating various flash drive functions in a single chip, thereby reducing the part-count and overall package-cost.
Flash drive capacities on the market increase continually. High speed has become a standard for modern flash drives. Capacities exceeding 256 GB were available on the market as early as 2009.[69]
Lexar is attempting to introduce a USB FlashCard, which would be a compact USB flash drive intended to replace various kinds of flash memory cards. Pretec introduced a similar card, which also plugs into any USB port, but is just one quarter the thickness of the Lexar model.[90] Until 2008, SanDisk manufactured a product called SD Plus, which was a SecureDigital card with a USB connector.[91]
SanDisk has also introduced a new technology to allow controlled storage and usage of copyrighted materials on flash drives, primarily for use by students. This technology is termed FlashCP.
en/1178.html.txt
ADDED
@@ -0,0 +1,3 @@
A region is arid when it is characterized by a severe lack of available water, to the extent of hindering or preventing the growth and development of plant and animal life. Environments subject to arid climates tend to lack vegetation and are called xeric or desertic. Most "arid" climates straddle the Equator; these places include parts of Africa, Asia, South America, Central America, and Australia.
The distribution of aridity observed at any one point in time is largely the result of the general circulation of the atmosphere. The latter does change significantly over time through climate change. For example, temperature increase (by 1.5–2.1 percent) across the Nile Basin over the next 30–40 years could change the region from semi-arid to arid, resulting in a significant reduction in agricultural land. In addition, changes in land use can result in greater demands on soil water and induce a higher degree of aridity.