de-francophones committed on
Commit a81820d
1 Parent(s): 73663ed

01a64b48bbee5cd27fcc061b6bf904896b7760e356eda566df2d4be4993a50a6

en/4644.html.txt ADDED
@@ -0,0 +1,256 @@
+ Mercury is the smallest and innermost planet in the Solar System. Its orbit around the Sun takes 87.97 days, the shortest of all the planets in the Solar System. It is named after the Roman deity Mercury, the messenger of the gods.
+
+ Like Venus, Mercury orbits the Sun within Earth's orbit as an inferior planet, and its apparent distance from the Sun as viewed from Earth never exceeds 28°. This proximity to the Sun means the planet can only be seen near the western horizon after sunset or eastern horizon before sunrise, usually in twilight. At this time, it may appear as a bright star-like object, but is often far more difficult to observe than Venus. The planet telescopically displays the complete range of phases, similar to Venus and the Moon, as it moves in its inner orbit relative to Earth, which recurs over its synodic period of approximately 116 days.
+
+ Mercury rotates in a way that is unique in the Solar System. It is tidally locked with the Sun in a 3:2 spin–orbit resonance,[16] meaning that relative to the fixed stars, it rotates on its axis exactly three times for every two revolutions it makes around the Sun.[a][17] As seen from the Sun, in a frame of reference that rotates with the orbital motion, it appears to rotate only once every two Mercurian years. An observer on Mercury would therefore see only one day every two Mercurian years.
+
+ Mercury's axis has the smallest tilt of any of the Solar System's planets (about ​1⁄30 degree). Its orbital eccentricity is the largest of all known planets in the Solar System;[b] at perihelion, Mercury's distance from the Sun is only about two-thirds (or 66%) of its distance at aphelion. Mercury's surface appears heavily cratered and is similar in appearance to the Moon's, indicating that it has been geologically inactive for billions of years. Having almost no atmosphere to retain heat, it has surface temperatures that vary diurnally more than on any other planet in the Solar System, ranging from 100 K (−173 °C; −280 °F) at night to 700 K (427 °C; 800 °F) during the day across the equatorial regions.[18] The polar regions are constantly below 180 K (−93 °C; −136 °F). The planet has no known natural satellites.
+
+ Two spacecraft have visited Mercury: Mariner 10 flew by in 1974 and 1975; and MESSENGER, launched in 2004, orbited Mercury over 4,000 times in four years before exhausting its fuel and crashing into the planet's surface on April 30, 2015.[19][20][21] The BepiColombo spacecraft is planned to arrive at Mercury in 2025.
+
+ Mercury appears to have a solid silicate crust and mantle overlying a solid, iron sulfide outer core layer, a deeper liquid core layer, and a solid inner core.[22][23]
+
+ Mercury is one of four terrestrial planets in the Solar System, and is a rocky body like Earth. It is the smallest planet in the Solar System, with an equatorial radius of 2,439.7 kilometres (1,516.0 mi).[3] Mercury is also smaller—albeit more massive—than the largest natural satellites in the Solar System, Ganymede and Titan. Mercury consists of approximately 70% metallic and 30% silicate material.[24] Mercury's density is the second highest in the Solar System at 5.427 g/cm3, only slightly less than Earth's density of 5.515 g/cm3.[3] If the effect of gravitational compression were to be factored out from both planets, the materials of which Mercury is made would be denser than those of Earth, with an uncompressed density of 5.3 g/cm3 versus Earth's 4.4 g/cm3.[25]
+
+ Mercury's density can be used to infer details of its inner structure. Although Earth's high density results appreciably from gravitational compression, particularly at the core, Mercury is much smaller and its inner regions are not as compressed. Therefore, for it to have such a high density, its core must be large and rich in iron.[26]
+
+ Geologists estimate that Mercury's core occupies about 55% of its volume; for Earth this proportion is 17%. Research published in 2007 suggests that Mercury has a molten core.[27][28] Surrounding the core is a 500–700 km (310–430 mi) mantle consisting of silicates.[29][30] Based on data from the Mariner 10 mission and Earth-based observation, Mercury's crust is estimated to be 35 km (22 mi) thick.[31] One distinctive feature of Mercury's surface is the presence of numerous narrow ridges, extending up to several hundred kilometers in length. It is thought that these were formed as Mercury's core and mantle cooled and contracted at a time when the crust had already solidified.[32]
+
+ Mercury's core has a higher iron content than that of any other major planet in the Solar System, and several theories have been proposed to explain this. The most widely accepted theory is that Mercury originally had a metal–silicate ratio similar to common chondrite meteorites, thought to be typical of the Solar System's rocky matter, and a mass approximately 2.25 times its current mass.[33] Early in the Solar System's history, Mercury may have been struck by a planetesimal of approximately 1/6 that mass and several thousand kilometers across.[33] The impact would have stripped away much of the original crust and mantle, leaving the core behind as a relatively major component.[33] A similar process, known as the giant impact hypothesis, has been proposed to explain the formation of the Moon.[33]
+
+ Alternatively, Mercury may have formed from the solar nebula before the Sun's energy output had stabilized. It would initially have had twice its present mass, but as the protosun contracted, temperatures near Mercury could have been between 2,500 and 3,500 K and possibly even as high as 10,000 K.[34] Much of Mercury's surface rock could have been vaporized at such temperatures, forming an atmosphere of "rock vapor" that could have been carried away by the solar wind.[34]
+
+ A third hypothesis proposes that the solar nebula caused drag on the particles from which Mercury was accreting, which meant that lighter particles were lost from the accreting material and not gathered by Mercury.[35] Each hypothesis predicts a different surface composition, and two space missions were designed to make the relevant observations. MESSENGER, which ended in 2015, found higher-than-expected potassium and sulfur levels on the surface, suggesting that the giant impact hypothesis and vaporization of the crust and mantle did not occur because potassium and sulfur would have been driven off by the extreme heat of these events.[36] BepiColombo, which will arrive at Mercury in 2025, will make observations to test these hypotheses.[37] The findings so far would seem to favor the third hypothesis; however, further analysis of the data is needed.[38]
+
+ Mercury's surface is similar in appearance to that of the Moon, showing extensive mare-like plains and heavy cratering, indicating that it has been geologically inactive for billions of years. Because knowledge of Mercury's geology had been based only on the 1975 Mariner 10 flyby and terrestrial observations, it is the least understood of the terrestrial planets.[28] As data from MESSENGER orbiter are processed, this knowledge will increase. For example, an unusual crater with radiating troughs has been discovered that scientists called "the spider".[39] It was later named Apollodorus.[40]
+
+ Albedo features are areas of markedly different reflectivity, as seen by telescopic observation. Mercury has dorsa (also called "wrinkle-ridges"), Moon-like highlands, montes (mountains), planitiae (plains), rupes (escarpments), and valles (valleys).[41][42]
+
+ Names for features on Mercury come from a variety of sources. Names coming from people are limited to the deceased. Craters are named for artists, musicians, painters, and authors who have made outstanding or fundamental contributions to their field. Ridges, or dorsa, are named for scientists who have contributed to the study of Mercury. Depressions or fossae are named for works of architecture. Montes are named for the word "hot" in a variety of languages. Plains or planitiae are named for Mercury in various languages. Escarpments or rupēs are named for ships of scientific expeditions. Valleys or valles are named for abandoned cities, towns, or settlements of antiquity.[43]
+
+ Mercury was heavily bombarded by comets and asteroids during and shortly following its formation 4.6 billion years ago, as well as during a possibly separate subsequent episode called the Late Heavy Bombardment that ended 3.8 billion years ago.[44] During this period of intense crater formation, Mercury received impacts over its entire surface,[42] facilitated by the lack of any atmosphere to slow impactors down.[45] During this time Mercury was volcanically active; basins such as the Caloris Basin were filled by magma, producing smooth plains similar to the maria found on the Moon.[46][47]
+
+ Data from the October 2008 flyby of MESSENGER gave researchers a greater appreciation for the jumbled nature of Mercury's surface. Mercury's surface is more heterogeneous than either Mars's or the Moon's, both of which contain significant stretches of similar geology, such as maria and plateaus.[48]
+
+ Craters on Mercury range in diameter from small bowl-shaped cavities to multi-ringed impact basins hundreds of kilometers across. They appear in all states of degradation, from relatively fresh rayed craters to highly degraded crater remnants. Mercurian craters differ subtly from lunar craters in that the area blanketed by their ejecta is much smaller, a consequence of Mercury's stronger surface gravity.[49] According to IAU rules, each new crater must be named after an artist who was famous for more than fifty years, and dead for more than three years, before the date the crater is named.[50]
+
+ The largest known crater is Caloris Basin, with a diameter of 1,550 km.[51] The impact that created the Caloris Basin was so powerful that it caused lava eruptions and left a concentric ring over 2 km tall surrounding the impact crater. At the antipode of the Caloris Basin is a large region of unusual, hilly terrain known as the "Weird Terrain". One hypothesis for its origin is that shock waves generated during the Caloris impact traveled around Mercury, converging at the basin's antipode (180 degrees away). The resulting high stresses fractured the surface.[52] Alternatively, it has been suggested that this terrain formed as a result of the convergence of ejecta at this basin's antipode.[53]
+
+ Overall, about 15 impact basins have been identified on the imaged part of Mercury. A notable basin is the 400 km wide, multi-ring Tolstoj Basin that has an ejecta blanket extending up to 500 km from its rim and a floor that has been filled by smooth plains materials. Beethoven Basin has a similar-sized ejecta blanket and a 625 km diameter rim.[49] Like the Moon, the surface of Mercury has likely incurred the effects of space weathering processes, including Solar wind and micrometeorite impacts.[54]
+
+ There are two geologically distinct plains regions on Mercury.[49][55] Gently rolling, hilly plains in the regions between craters are Mercury's oldest visible surfaces,[49] predating the heavily cratered terrain. These inter-crater plains appear to have obliterated many earlier craters, and show a general paucity of smaller craters below about 30 km in diameter.[55]
+
+ Smooth plains are widespread flat areas that fill depressions of various sizes and bear a strong resemblance to the lunar maria. Notably, they fill a wide ring surrounding the Caloris Basin. Unlike lunar maria, the smooth plains of Mercury have the same albedo as the older inter-crater plains. Despite a lack of unequivocally volcanic characteristics, the localisation and rounded, lobate shape of these plains strongly support volcanic origins.[49] All the smooth plains of Mercury formed significantly later than the Caloris basin, as evidenced by appreciably smaller crater densities than on the Caloris ejecta blanket.[49] The floor of the Caloris Basin is filled by a geologically distinct flat plain, broken up by ridges and fractures in a roughly polygonal pattern. It is not clear whether they are volcanic lavas induced by the impact, or a large sheet of impact melt.[49]
+
+ One unusual feature of Mercury's surface is the numerous compression folds, or rupes, that crisscross the plains. As Mercury's interior cooled, it contracted and its surface began to deform, creating wrinkle ridges and lobate scarps associated with thrust faults. The scarps can reach lengths of 1000 km and heights of 3 km.[56] These compressional features can be seen on top of other features, such as craters and smooth plains, indicating they are more recent.[57] Mapping of the features has suggested a total shrinkage of Mercury's radius in the range of ~1 to 7 km.[58] Small-scale thrust fault scarps have been found, tens of meters in height and with lengths in the range of a few km, that appear to be less than 50 million years old, indicating that compression of the interior and consequent surface geological activity continue to the present.[56][58]
+
+ The Lunar Reconnaissance Orbiter discovered that similar small thrust faults exist on the Moon.
+
+ Images obtained by MESSENGER have revealed evidence for pyroclastic flows on Mercury from low-profile shield volcanoes.[59][60][61] MESSENGER data has helped identify 51 pyroclastic deposits on the surface,[62] where 90% of them are found within impact craters.[62] A study of the degradation state of the impact craters that host pyroclastic deposits suggests that pyroclastic activity occurred on Mercury over a prolonged interval.[62]
+
+ A "rimless depression" inside the southwest rim of the Caloris Basin consists of at least nine overlapping volcanic vents, each individually up to 8 km in diameter. It is thus a "compound volcano".[63] The vent floors are at least 1 km below their brinks and they bear a closer resemblance to volcanic craters sculpted by explosive eruptions or modified by collapse into void spaces created by magma withdrawal back down into a conduit.[63] Scientists could not quantify the age of the volcanic complex system, but reported that it could be of the order of a billion years.[63]
+
+ The surface temperature of Mercury ranges from 100 to 700 K (−173 to 427 °C; −280 to 800 °F)[18] at the most extreme places: 0°N, 0°W, or 180°W. It never rises above 180 K at the poles,[12]
+ due to the absence of an atmosphere and a steep temperature gradient between the equator and the poles. The subsolar point reaches about 700 K during perihelion (0°W or 180°W), but only 550 K at aphelion (90°W or 270°W).[65]
+ On the dark side of the planet, temperatures average 110 K.[12][66]
+ The intensity of sunlight on Mercury's surface ranges between 4.59 and 10.61 times the solar constant (1,370 W·m−2).[67]
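As a cross-check of the quoted range (a minimal sketch, not from the source; the perihelion and aphelion distances in AU are assumed, rounded values), the intensity follows from the inverse-square law:

# Irradiance scales as (1 AU / r)^2 relative to the solar constant.
r_perihelion_au = 0.3075   # assumed rounded perihelion distance, AU
r_aphelion_au = 0.4667     # assumed rounded aphelion distance, AU
print((1.0 / r_perihelion_au) ** 2)  # ~10.6 times the solar constant
print((1.0 / r_aphelion_au) ** 2)    # ~4.6 times the solar constant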
+
+ Although the daylight temperature at the surface of Mercury is generally extremely high, observations strongly suggest that ice (frozen water) exists on Mercury. The floors of deep craters at the poles are never exposed to direct sunlight, and temperatures there remain below 102 K, far lower than the global average.[68] Water ice strongly reflects radar, and observations by the 70-meter Goldstone Solar System Radar and the VLA in the early 1990s revealed that there are patches of high radar reflection near the poles.[69] Although ice was not the only possible cause of these reflective regions, astronomers think it was the most likely.[70]
+
+ The icy regions are estimated to contain about 1014–1015 kg of ice,[71] and may be covered by a layer of regolith that inhibits sublimation.[72] By comparison, the Antarctic ice sheet on Earth has a mass of about 4×1018 kg, and Mars's south polar cap contains about 1016 kg of water.[71] The origin of the ice on Mercury is not yet known, but the two most likely sources are from outgassing of water from the planet's interior or deposition by impacts of comets.[71]
+
+ Mercury is too small and hot for its gravity to retain any significant atmosphere over long periods of time; it does have a tenuous surface-bounded exosphere[73] containing hydrogen, helium, oxygen, sodium, calcium, potassium and others at a surface pressure of less than approximately 0.5 nPa (0.005 picobars).[14] This exosphere is not stable—atoms are continuously lost and replenished from a variety of sources. Hydrogen atoms and helium atoms probably come from the solar wind, diffusing into Mercury's magnetosphere before later escaping back into space. Radioactive decay of elements within Mercury's crust is another source of helium, as well as sodium and potassium. MESSENGER found high proportions of calcium, helium, hydroxide, magnesium, oxygen, potassium, silicon and sodium. Water vapor is present, released by a combination of processes such as: comets striking its surface, sputtering creating water out of hydrogen from the solar wind and oxygen from rock, and sublimation from reservoirs of water ice in the permanently shadowed polar craters. The detection of high amounts of water-related ions like O+, OH−, and H3O+ was a surprise.[74][75] Because of the quantities of these ions that were detected in Mercury's space environment, scientists surmise that these molecules were blasted from the surface or exosphere by the solar wind.[76][77]
+
+ Sodium, potassium and calcium were discovered in the atmosphere during the 1980–1990s, and are thought to result primarily from the vaporization of surface rock struck by micrometeorite impacts[78] including presently from Comet Encke.[79] In 2008, magnesium was discovered by MESSENGER.[80] Studies indicate that, at times, sodium emissions are localized at points that correspond to the planet's magnetic poles. This would indicate an interaction between the magnetosphere and the planet's surface.[81]
+
+ On November 29, 2012, NASA confirmed that images from MESSENGER had detected that craters at the north pole contained water ice. MESSENGER's principal investigator Sean Solomon is quoted in The New York Times estimating the volume of the ice to be large enough to "encase Washington, D.C., in a frozen block two and a half miles deep".[64][c]
+
+ Despite its small size and slow 59-day-long rotation, Mercury has a significant, and apparently global, magnetic field. According to measurements taken by Mariner 10, it is about 1.1% the strength of Earth's. The magnetic-field strength at Mercury's equator is about 300 nT.[82][83] Like that of Earth, Mercury's magnetic field is dipolar.[81] Unlike Earth's, Mercury's poles are nearly aligned with the planet's spin axis.[84] Measurements from both the Mariner 10 and MESSENGER space probes have indicated that the strength and shape of the magnetic field are stable.[84]
+
+ It is likely that this magnetic field is generated by a dynamo effect, in a manner similar to the magnetic field of Earth.[85][86] This dynamo effect would result from the circulation of the planet's iron-rich liquid core. Particularly strong tidal effects caused by the planet's high orbital eccentricity would serve to keep the core in the liquid state necessary for this dynamo effect.[29]
+
+ Mercury's magnetic field is strong enough to deflect the solar wind around the planet, creating a magnetosphere. The planet's magnetosphere, though small enough to fit within Earth,[81] is strong enough to trap solar wind plasma. This contributes to the space weathering of the planet's surface.[84] Observations taken by the Mariner 10 spacecraft detected this low energy plasma in the magnetosphere of the planet's nightside. Bursts of energetic particles in the planet's magnetotail indicate a dynamic quality to the planet's magnetosphere.[81]
+
+ During its second flyby of the planet on October 6, 2008, MESSENGER discovered that Mercury's magnetic field can be extremely "leaky". The spacecraft encountered magnetic "tornadoes" – twisted bundles of magnetic fields connecting the planetary magnetic field to interplanetary space – that were up to 800 km wide or a third of the radius of the planet. These twisted magnetic flux tubes, technically known as flux transfer events, form open windows in the planet's magnetic shield through which the solar wind may enter and directly impact Mercury's surface via magnetic reconnection.[87] This also occurs in Earth's magnetic field. The MESSENGER observations showed the reconnection rate is ten times higher at Mercury, but its proximity to the Sun only accounts for about a third of the reconnection rate observed by MESSENGER.[87]
+
+ Mercury has the most eccentric orbit of all the planets; its eccentricity is 0.21 with its distance from the Sun ranging from 46,000,000 to 70,000,000 km (29,000,000 to 43,000,000 mi). It takes 87.969 Earth days to complete an orbit. The diagram illustrates the effects of the eccentricity, showing Mercury's orbit overlaid with a circular orbit having the same semi-major axis. Mercury's higher velocity when it is near perihelion is clear from the greater distance it covers in each 5-day interval. In the diagram the varying distance of Mercury to the Sun is represented by the size of the planet, which is inversely proportional to Mercury's distance from the Sun. This varying distance to the Sun leads to Mercury's surface being flexed by tidal bulges raised by the Sun that are about 17 times stronger than the Moon's on Earth.[88] Combined with a 3:2 spin–orbit resonance of the planet's rotation around its axis, it also results in complex variations of the surface temperature.[24]
+ The resonance makes a single solar day on Mercury last exactly two Mercury years, or about 176 Earth days.[89]
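The perihelion and aphelion figures quoted above follow directly from the semi-major axis and eccentricity; a minimal sketch with assumed, rounded values (not from the source):

a_km = 57.91e6   # semi-major axis in km (assumed rounded value)
e = 0.2056       # orbital eccentricity (assumed rounded value)
print(a_km * (1 - e))   # ~46 million km at perihelion
print(a_km * (1 + e))   # ~70 million km at aphelion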
+
+ Mercury's orbit is inclined by 7 degrees to the plane of Earth's orbit (the ecliptic), as shown in the diagram on the right. As a result, transits of Mercury across the face of the Sun can only occur when the planet is crossing the plane of the ecliptic at the time it lies between Earth and the Sun, which is in May or November. This occurs about every seven years on average.[90]
+
+ Mercury's axial tilt is almost zero,[91] with the best measured value as low as 0.027 degrees.[92] This is significantly smaller than that of Jupiter, which has the second smallest axial tilt of all planets at 3.1 degrees. This means that to an observer at Mercury's poles, the center of the Sun never rises more than 2.1 arcminutes above the horizon.[92]
+
+ At certain points on Mercury's surface, an observer would be able to see the Sun peek up a little more than two-thirds of the way over the horizon, then reverse and set before rising again, all within the same Mercurian day.[93] This is because approximately four Earth days before perihelion, Mercury's angular orbital velocity equals its angular rotational velocity so that the Sun's apparent motion ceases; closer to perihelion, Mercury's angular orbital velocity then exceeds the angular rotational velocity. Thus, to a hypothetical observer on Mercury, the Sun appears to move in a retrograde direction. Four Earth days after perihelion, the Sun's normal apparent motion resumes.[24] A similar effect would have occurred if Mercury had been in synchronous rotation: the alternating gain and loss of rotation over revolution would have caused a libration of 23.65° in longitude.[94]
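A small sketch (assumed rounded values, not from the source) showing that the orbital angular rate briefly overtakes the spin rate near perihelion, which is what makes the Sun's apparent motion reverse:

import math

rotation_period = 58.646   # sidereal rotation period, Earth days (assumed value)
orbital_period = 87.969    # orbital period, Earth days
e = 0.2056                 # orbital eccentricity (assumed value)

spin_rate = 360.0 / rotation_period    # degrees per day, constant
mean_motion = 360.0 / orbital_period   # average orbital rate, degrees per day
# Orbital angular rate at perihelion, from conservation of angular momentum:
perihelion_rate = mean_motion * math.sqrt(1 - e**2) / (1 - e)**2
print(spin_rate)        # ~6.14 deg/day
print(perihelion_rate)  # ~6.35 deg/day, faster than the spin for a few days around perihelion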
+
+ For the same reason, there are two points on Mercury's equator, 180 degrees apart in longitude, at either of which, around perihelion in alternate Mercurian years (once a Mercurian day), the Sun passes overhead, then reverses its apparent motion and passes overhead again, then reverses a second time and passes overhead a third time, taking a total of about 16 Earth-days for this entire process. In the other alternate Mercurian years, the same thing happens at the other of these two points. The amplitude of the retrograde motion is small, so the overall effect is that, for two or three weeks, the Sun is almost stationary overhead, and is at its most brilliant because Mercury is at perihelion, its closest to the Sun. This prolonged exposure to the Sun at its brightest makes these two points the hottest places on Mercury. Maximum temperature occurs when the Sun is at an angle of about 25 degrees past noon due to diurnal temperature lag, at 0.4 Mercury days and 0.8 Mercury years past sunrise.[95] Conversely, there are two other points on the equator, 90 degrees of longitude apart from the first ones, where the Sun passes overhead only when the planet is at aphelion in alternate years, when the apparent motion of the Sun in Mercury's sky is relatively rapid. These points, which are the ones on the equator where the apparent retrograde motion of the Sun happens when it is crossing the horizon as described in the preceding paragraph, receive much less solar heat than the first ones described above.
+
+ Mercury attains inferior conjunction (nearest approach to Earth) every 116 Earth days on average,[3] but this interval can range from 105 days to 129 days due to the planet's eccentric orbit. Mercury can come as near as 82.2 gigametres (0.549 astronomical units; 51.1 million miles) to Earth, and that is slowly declining: The next approach to within 82.1 Gm (51.0 million miles) is in 2679, and to within 82.0 Gm (51.0 million miles) in 4487, but it will not be closer to Earth than 80 Gm (50 million miles) until 28,622.[96] Its period of retrograde motion as seen from Earth can vary from 8 to 15 days on either side of inferior conjunction. This large range arises from the planet's high orbital eccentricity.[24] On average, Mercury is the closest planet to the Earth,[97] and it is the closest planet to each of the other planets in the Solar System.[98][99]
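The 116-day average interval is simply Mercury's synodic period with respect to Earth; a minimal check with assumed, rounded orbital periods (not from the source):

mercury_year = 87.969   # Earth days
earth_year = 365.256    # Earth days
print(1.0 / (1.0 / mercury_year - 1.0 / earth_year))  # ~115.9 days between inferior conjunctions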
+
+ The longitude convention for Mercury puts the zero of longitude at one of the two hottest points on the surface, as described above. However, when this area was first visited, by Mariner 10, this zero meridian was in darkness, so it was impossible to select a feature on the surface to define the exact position of the meridian. Therefore, a small crater further west was chosen, called Hun Kal, which provides the exact reference point for measuring longitude.[100][101] The center of Hun Kal defines the 20° west meridian. A 1970 International Astronomical Union resolution suggests that longitudes be measured positively in the westerly direction on Mercury.[102] The two hottest places on the equator are therefore at longitudes 0° W and 180° W, and the coolest points on the equator are at longitudes 90° W and 270° W. However, the MESSENGER project uses an east-positive convention.[103]
+
+ For many years it was thought that Mercury was synchronously tidally locked with the Sun, rotating once for each orbit and always keeping the same face directed towards the Sun, in the same way that the same side of the Moon always faces Earth. Radar observations in 1965 proved that the planet has a 3:2 spin-orbit resonance, rotating three times for every two revolutions around the Sun. The eccentricity of Mercury's orbit makes this resonance stable—at perihelion, when the solar tide is strongest, the Sun is nearly still in Mercury's sky.[104]
+
+ The rare 3:2 resonant tidal locking is stabilized by the variance of the tidal force along Mercury's eccentric orbit, acting on a permanent dipole component of Mercury's mass distribution.[105] In a circular orbit there is no such variance, so the only resonance stabilized in such an orbit is at 1:1 (e.g., Earth–Moon), when the tidal force, stretching a body along the "center-body" line, exerts a torque that aligns the body's axis of least inertia (the "longest" axis, and the axis of the aforementioned dipole) to point always at the center. However, with noticeable eccentricity, like that of Mercury's orbit, the tidal force has a maximum at perihelion and therefore stabilizes resonances, like 3:2, enforcing that the planet points its axis of least inertia roughly at the Sun when passing through perihelion.[105]
+
+ The original reason astronomers thought it was synchronously locked was that, whenever Mercury was best placed for observation, it was always nearly at the same point in its 3:2 resonance, hence showing the same face. This is because, coincidentally, Mercury's rotation period is almost exactly half of its synodic period with respect to Earth. Due to Mercury's 3:2 spin-orbit resonance, a solar day (the length between two meridian transits of the Sun) lasts about 176 Earth days.[24] A sidereal day (the period of rotation) lasts about 58.7 Earth days.[24]
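A minimal arithmetic sketch (assumed rounded periods, not from the source) of how the ~176-day solar day follows from the 58.7-day rotation and the 88-day orbit:

sidereal_day = 58.646    # rotation period, Earth days (assumed value)
orbital_period = 87.969  # Mercury's year, Earth days
# For prograde rotation the solar rate is the spin rate minus the orbital rate.
print(1.0 / (1.0 / sidereal_day - 1.0 / orbital_period))  # ~175.9 Earth days, i.e. two Mercury years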
+
+ Simulations indicate that the orbital eccentricity of Mercury varies chaotically from nearly zero (circular) to more than 0.45 over millions of years due to perturbations from the other planets.[24][106]
+ This was thought to explain Mercury's 3:2 spin-orbit resonance (rather than the more usual 1:1), because this state is more likely to arise during a period of high eccentricity.[107]
+ However, accurate modeling based on a realistic model of tidal response has demonstrated that Mercury was captured into the 3:2 spin-orbit state at a very early stage of its history, within 20 (more likely, 10) million years after its formation.[108]
+
+ Numerical simulations show that a future secular orbital resonant perihelion interaction with Jupiter may cause the eccentricity of Mercury's orbit to increase to the point where there is a 1% chance that the planet may collide with Venus within the next five billion years.[109][110]
+
+ In 1859, the French mathematician and astronomer Urbain Le Verrier reported that the slow precession of Mercury's orbit around the Sun could not be completely explained by Newtonian mechanics and perturbations by the known planets. He suggested, among possible explanations, that another planet (or perhaps instead a series of smaller 'corpuscules') might exist in an orbit even closer to the Sun than that of Mercury, to account for this perturbation.[111] (Other explanations considered included a slight oblateness of the Sun.) The success of the search for Neptune based on its perturbations of the orbit of Uranus led astronomers to place faith in this possible explanation, and the hypothetical planet was named Vulcan, but no such planet was ever found.[112]
+
+ The perihelion precession of Mercury is 5,600 arcseconds (1.5556°) per century relative to Earth, or 574.10±0.65 arcseconds per century[113] relative to the inertial ICRF. Newtonian mechanics, taking into account all the effects from the other planets, predicts a precession of 5,557 arcseconds (1.5436°) per century.[113] In the early 20th century, Albert Einstein's general theory of relativity provided the explanation for the observed precession, by formalizing gravitation as being mediated by the curvature of spacetime. The effect is small: just 42.98 arcseconds per century for Mercury; it therefore requires a little over twelve million orbits for a full excess turn. Similar, but much smaller, effects exist for other Solar System bodies: 8.62 arcseconds per century for Venus, 3.84 for Earth, 1.35 for Mars, and 10.05 for 1566 Icarus.[114][115]
+
+ Einstein's formula for the perihelion shift per revolution is
+
+ ε = 24π³ a² / (T² c² (1 − e²))
+
+ where e is the orbital eccentricity, a the semi-major axis, and T the orbital period. Filling in the values gives a result of 0.1035 arcseconds per revolution or 0.4297 arcseconds per Earth year, i.e., 42.97 arcseconds per century. This is in close agreement with the accepted value of Mercury's perihelion advance of 42.98 arcseconds per century.[116]
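A short numerical sketch of the formula above (the orbital constants are assumed, rounded literature values, not taken from the source):

import math

a = 5.7909e10           # semi-major axis in metres (assumed value)
T = 87.969 * 86400.0    # orbital period in seconds
e = 0.2056              # orbital eccentricity (assumed value)
c = 2.99792458e8        # speed of light in m/s

eps = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))  # radians per revolution
eps_arcsec = math.degrees(eps) * 3600.0
print(eps_arcsec)                      # ~0.1035 arcseconds per revolution
print(eps_arcsec * 36525.0 / 87.969)   # ~43 arcseconds per century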
+
+ Studies reported in March 2020 provide some support for the idea that parts of Mercury may have been habitable, and perhaps that life forms, albeit likely primitive microorganisms, may have existed on the planet.[117][118]
+
+ Mercury's apparent magnitude is calculated to vary between −2.48 (brighter than Sirius) around superior conjunction and +7.25 (below the limit of naked-eye visibility) around inferior conjunction.[13] The mean apparent magnitude is 0.23 while the standard deviation of 1.78 is the largest of any planet. The mean apparent magnitude at superior conjunction is −1.89 while that at inferior conjunction is +5.93.[13] Observation of Mercury is complicated by its proximity to the Sun, as it is lost in the Sun's glare for much of the time. Mercury can be observed for only a brief period during either morning or evening twilight.[119]
+
+ Mercury can, like several other planets and the brightest stars, be seen during a total solar eclipse.[120]
+
+ Like the Moon and Venus, Mercury exhibits phases as seen from Earth. It is "new" at inferior conjunction and "full" at superior conjunction. The planet is rendered invisible from Earth on both of these occasions because it is obscured by the Sun,[119] except during its new phase when it transits the Sun.
+
+ Mercury is technically brightest as seen from Earth when it is at a full phase. Although Mercury is farthest from Earth when it is full, the greater illuminated area that is visible and the opposition brightness surge more than compensate for the distance.[121] The opposite is true for Venus, which appears brightest when it is a crescent, because it is much closer to Earth than when gibbous.[121][122]
+
+ Nonetheless, the brightest (full phase) appearance of Mercury is an essentially impossible time for practical observation, because of the extreme proximity of the Sun. Mercury is best observed at the first and last quarter, although they are phases of lesser brightness. The first and last quarter phases occur at greatest elongation east and west of the Sun, respectively. At both of these times Mercury's separation from the Sun ranges anywhere from 17.9° at perihelion to 27.8° at aphelion.[123][124] At greatest western elongation, Mercury rises at its earliest before sunrise, and at greatest eastern elongation, it sets at its latest after sunset.[125]
+
+ Mercury is more easily seen from the tropics and subtropics than from higher latitudes. Viewed from low latitudes and at the right times of year, the ecliptic intersects the horizon at a steep angle. Mercury is 10° above the horizon when it appears directly above the Sun (i.e. its orbit appears vertical), is at maximum elongation from the Sun (28°), and the Sun is 18° below the horizon, so the sky is just completely dark.[d] This angle is the maximum altitude at which Mercury is visible in a completely dark sky.
+
+ At middle latitudes, Mercury is more often and easily visible from the Southern Hemisphere than from the Northern. This is because Mercury's maximum western elongation occurs only during early autumn in the Southern Hemisphere, whereas its greatest eastern elongation happens only during late winter in the Southern Hemisphere.[125] In both of these cases, the angle at which the planet's orbit intersects the horizon is maximized, allowing it to rise several hours before sunrise in the former instance and not set until several hours after sundown in the latter from southern mid-latitudes, such as Argentina and South Africa.[125]
+
+ An alternate method for viewing Mercury involves observing the planet during daylight hours when conditions are clear, ideally when it is at its greatest elongation. This allows the planet to be found easily, even when using telescopes with 8 cm (3.1 in) apertures. Care must be taken to ensure the instrument isn't pointed directly towards the Sun because of the risk of eye damage. This method bypasses the limitation of twilight observing when the ecliptic is located at a low elevation (e.g. on autumn evenings).
+
+ Ground-based telescope observations of Mercury reveal only an illuminated partial disk with limited detail. The first of two spacecraft to visit the planet was Mariner 10, which mapped about 45% of its surface from 1974 to 1975. The second is the MESSENGER spacecraft, which after three Mercury flybys between 2008 and 2009, attained orbit around Mercury on March 17, 2011,[126] to study and map the rest of the planet.[127]
+
+ The Hubble Space Telescope cannot observe Mercury at all, due to safety procedures that prevent its pointing too close to the Sun.[128]
+
+ Because the shift of 0.15 revolutions in a year makes up a seven-year cycle (0.15 × 7 ≈ 1.0), in the seventh year Mercury follows almost exactly (earlier by 7 days) the sequence of phenomena it showed seven years before.[123]
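A one-line check of that arithmetic (assumed rounded year lengths, not from the source):

orbits_per_earth_year = 365.25 / 87.969   # ~4.15 Mercury orbits per Earth year
shift = orbits_per_earth_year % 1.0       # ~0.15 revolutions left over each year
print(shift, shift * 7)                   # ~0.15 and ~1.06, so the cycle nearly closes after seven years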
+
+ The earliest known recorded observations of Mercury are from the Mul.Apin tablets. These observations were most likely made by an Assyrian astronomer around the 14th century BC.[129] The cuneiform name used to designate Mercury on the Mul.Apin tablets is transcribed as Udu.Idim.Gu\u4.Ud ("the jumping planet").[e][130] Babylonian records of Mercury date back to the 1st millennium BC. The Babylonians called the planet Nabu after the messenger to the gods in their mythology.[131]
+
+ The ancients knew Mercury by different names depending on whether it was an evening star or a morning star. By about 350 BC, the ancient Greeks had realized the two stars were one.[132] They knew the planet as Στίλβων Stilbōn, meaning "twinkling", and Ἑρμής Hermēs, for its fleeting motion,[133] a name that is retained in modern Greek (Ερμής Ermis).[134] The Romans named the planet after the swift-footed Roman messenger god, Mercury (Latin Mercurius), which they equated with the Greek Hermes, because it moves across the sky faster than any other planet.[132][135] The astronomical symbol for Mercury is a stylized version of Hermes' caduceus.[136]
+
+ The Greco-Egyptian[137] astronomer Ptolemy wrote about the possibility of planetary transits across the face of the Sun in his work Planetary Hypotheses. He suggested that no transits had been observed either because planets such as Mercury were too small to see, or because the transits were too infrequent.[138]
+
+ In ancient China, Mercury was known as "the Hour Star" (Chen-xing 辰星). It was associated with the direction north and the phase of water in the Five Phases system of metaphysics.[139] Modern Chinese, Korean, Japanese and Vietnamese cultures refer to the planet literally as the "water star" (水星), based on the Five elements.[140][141][142] Hindu mythology used the name Budha for Mercury, and this god was thought to preside over Wednesday.[143] The god Odin (or Woden) of Germanic paganism was associated with the planet Mercury and Wednesday.[144] The Maya may have represented Mercury as an owl (or possibly four owls; two for the morning aspect and two for the evening) that served as a messenger to the underworld.[145]
+
+ In medieval Islamic astronomy, the Andalusian astronomer Abū Ishāq Ibrāhīm al-Zarqālī in the 11th century described the deferent of Mercury's geocentric orbit as being oval, like an egg or a pignon, although this insight did not influence his astronomical theory or his astronomical calculations.[146][147] In the 12th century, Ibn Bajjah observed "two planets as black spots on the face of the Sun", which was later suggested as the transit of Mercury and/or Venus by the Maragha astronomer Qotb al-Din Shirazi in the 13th century.[148] (Note that most such medieval reports of transits were later taken as observations of sunspots.[149])
+
+ In India, the Kerala school astronomer Nilakantha Somayaji in the 15th century developed a partially heliocentric planetary model in which Mercury orbits the Sun, which in turn orbits Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century.[150]
+
+ The first telescopic observations of Mercury were made by Galileo in the early 17th century. Although he observed phases when he looked at Venus, his telescope was not powerful enough to see the phases of Mercury. In 1631, Pierre Gassendi made the first telescopic observations of the transit of a planet across the Sun when he saw a transit of Mercury predicted by Johannes Kepler. In 1639, Giovanni Zupi used a telescope to discover that the planet had orbital phases similar to Venus and the Moon. The observation demonstrated conclusively that Mercury orbited around the Sun.[24]
+
+ A rare event in astronomy is the passage of one planet in front of another (occultation), as seen from Earth. Mercury and Venus occult each other every few centuries, and the event of May 28, 1737 is the only one historically observed, having been seen by John Bevis at the Royal Greenwich Observatory.[151] The next occultation of Mercury by Venus will be on December 3, 2133.[152]
+
+ The difficulties inherent in observing Mercury mean that it has been far less studied than the other planets. In 1800, Johann Schröter made observations of surface features, claiming to have observed 20-kilometre-high (12 mi) mountains. Friedrich Bessel used Schröter's drawings to erroneously estimate the rotation period as 24 hours and an axial tilt of 70°.[153] In the 1880s, Giovanni Schiaparelli mapped the planet more accurately, and suggested that Mercury's rotational period was 88 days, the same as its orbital period due to tidal locking.[154] This phenomenon is known as synchronous rotation. The effort to map the surface of Mercury was continued by Eugenios Antoniadi, who published a book in 1934 that included both maps and his own observations.[81] Many of the planet's surface features, particularly the albedo features, take their names from Antoniadi's map.[155]
+
+ In June 1962, Soviet scientists at the Institute of Radio-engineering and Electronics of the USSR Academy of Sciences, led by Vladimir Kotelnikov, became the first to bounce a radar signal off Mercury and receive it, starting radar observations of the planet.[156][157][158] Three years later, radar observations by Americans Gordon H. Pettengill and Rolf B. Dyce, using the 300-meter Arecibo Observatory radio telescope in Puerto Rico, showed conclusively that the planet's rotational period was about 59 days.[159][160] The theory that Mercury's rotation was synchronous had become widely held, and it was a surprise to astronomers when these radio observations were announced. If Mercury were tidally locked, its dark face would be extremely cold, but measurements of radio emission revealed that it was much hotter than expected. Astronomers were reluctant to drop the synchronous rotation theory and proposed alternative mechanisms such as powerful heat-distributing winds to explain the observations.[161]
+
+ Italian astronomer Giuseppe Colombo noted that the rotation value was about two-thirds of Mercury's orbital period, and proposed that the planet's orbital and rotational periods were locked into a 3:2 rather than a 1:1 resonance.[162] Data from Mariner 10 subsequently confirmed this view.[163] This means that Schiaparelli's and Antoniadi's maps were not "wrong". Instead, the astronomers saw the same features during every second orbit and recorded them, but disregarded those seen in the meantime, when Mercury's other face was toward the Sun, because the orbital geometry meant that these observations were made under poor viewing conditions.[153]
+
+ Ground-based optical observations did not shed much further light on Mercury, but radio astronomers using interferometry at microwave wavelengths, a technique that enables removal of the solar radiation, were able to discern physical and chemical characteristics of the subsurface layers to a depth of several meters.[164][165] Not until the first space probe flew past Mercury did many of its most fundamental morphological properties become known. Moreover, recent technological advances have led to improved ground-based observations. In 2000, high-resolution lucky imaging observations were conducted by the Mount Wilson Observatory 1.5 meter Hale telescope. They provided the first views that resolved surface features on the parts of Mercury that were not imaged in the Mariner 10 mission.[166] Most of the planet has been mapped by the Arecibo radar telescope, with 5 km (3.1 mi) resolution, including polar deposits in shadowed craters of what may be water ice.[167]
+
+ Reaching Mercury from Earth poses significant technical challenges, because it orbits so much closer to the Sun than Earth. A Mercury-bound spacecraft launched from Earth must travel over 91 million kilometres (57 million miles) into the Sun's gravitational potential well. Mercury has an orbital speed of 48 km/s (30 mi/s), whereas Earth's orbital speed is 30 km/s (19 mi/s). Therefore, the spacecraft must make a large change in velocity (delta-v) to enter a Hohmann transfer orbit that passes near Mercury, as compared to the delta-v required for other planetary missions.[169]
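To make the comparison concrete, a minimal sketch of an idealized, coplanar Hohmann transfer from Earth's orbit to Mercury's orbit (circular orbits assumed, planetary gravity and Mercury's real eccentricity ignored; the constants are assumed values, not taken from the source):

import math

mu_sun = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2 (assumed value)
r_earth = 1.496e11          # Earth's orbital radius, m
r_mercury = 5.791e10        # Mercury's orbital radius, m
a_transfer = (r_earth + r_mercury) / 2.0

def speed(r, a):
    # Vis-viva: orbital speed at distance r on an orbit of semi-major axis a.
    return math.sqrt(mu_sun * (2.0 / r - 1.0 / a))

dv1 = abs(speed(r_earth, a_transfer) - speed(r_earth, r_earth))        # ~7.5 km/s to enter the transfer orbit
dv2 = abs(speed(r_mercury, r_mercury) - speed(r_mercury, a_transfer))  # ~9.6 km/s to match Mercury's orbit
print(dv1 / 1000.0, dv2 / 1000.0, (dv1 + dv2) / 1000.0)                # ~17 km/s in total, before any capture burn

Real missions avoid most of this cost with repeated gravity assists, as the Mariner 10 and MESSENGER descriptions below illustrate.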
+
+ The potential energy liberated by moving down the Sun's potential well becomes kinetic energy, requiring another large delta-v change to do anything other than rapidly pass by Mercury. To land safely or enter a stable orbit the spacecraft would rely entirely on rocket motors. Aerobraking is ruled out because Mercury has a negligible atmosphere. A trip to Mercury requires more rocket fuel than that required to escape the Solar System completely. As a result, only two space probes have visited it so far.[170] A proposed alternative approach would use a solar sail to attain a Mercury-synchronous orbit around the Sun.[171]
+
+ The first spacecraft to visit Mercury was NASA's Mariner 10 (1974–1975).[132] The spacecraft used the gravity of Venus to adjust its orbital velocity so that it could approach Mercury, making it both the first spacecraft to use this gravitational "slingshot" effect and the first NASA mission to visit multiple planets.[169] Mariner 10 provided the first close-up images of Mercury's surface, which immediately showed its heavily cratered nature, and revealed many other types of geological features, such as the giant scarps that were later ascribed to the effect of the planet shrinking slightly as its iron core cools.[172] Unfortunately, the same face of the planet was lit at each of Mariner 10's close approaches. This made close observation of both sides of the planet impossible,[173] and resulted in the mapping of less than 45% of the planet's surface.[174]
+
+ The spacecraft made three close approaches to Mercury, the closest of which took it to within 327 km (203 mi) of the surface.[175] At the first close approach, instruments detected a magnetic field, to the great surprise of planetary geologists—Mercury's rotation was expected to be much too slow to generate a significant dynamo effect. The second close approach was primarily used for imaging, but at the third approach, extensive magnetic data were obtained. The data revealed that the planet's magnetic field is much like Earth's, which deflects the solar wind around the planet. For many years after the Mariner 10 encounters, the origin of Mercury's magnetic field remained the subject of several competing theories.[176][177]
+
+ On March 24, 1975, just eight days after its final close approach, Mariner 10 ran out of fuel. Because its orbit could no longer be accurately controlled, mission controllers instructed the probe to shut down.[178] Mariner 10 is thought to be still orbiting the Sun, passing close to Mercury every few months.[179]
+
+ A second NASA mission to Mercury, named MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging), was launched on August 3, 2004. It made a fly-by of Earth in August 2005, and of Venus in October 2006 and June 2007 to place it onto the correct trajectory to reach an orbit around Mercury.[180] A first fly-by of Mercury occurred on January 14, 2008, a second on October 6, 2008,[181] and a third on September 29, 2009.[182] Most of the hemisphere not imaged by Mariner 10 was mapped during these fly-bys. The probe successfully entered an elliptical orbit around the planet on March 18, 2011. The first orbital image of Mercury was obtained on March 29, 2011. The probe finished a one-year mapping mission,[181] and then entered a one-year extended mission into 2013. In addition to continued observations and mapping of Mercury, MESSENGER observed the 2012 solar maximum.[183]
+
+ The mission was designed to clear up six key issues: Mercury's high density, its geological history, the nature of its magnetic field, the structure of its core, whether it has ice at its poles, and where its tenuous atmosphere comes from. To this end, the probe carried imaging devices that gathered much-higher-resolution images of much more of Mercury than Mariner 10, assorted spectrometers to determine abundances of elements in the crust, and magnetometers and devices to measure velocities of charged particles. Measurements of changes in the probe's orbital velocity were expected to be used to infer details of the planet's interior structure.[184] MESSENGER's final maneuver was on April 24, 2015, and it crashed into Mercury's surface on April 30, 2015.[185][186][187] The spacecraft's impact with Mercury occurred near 3:26 PM EDT on April 30, 2015, leaving a crater estimated to be 16 m (52 ft) in diameter.[188]
+
+ The European Space Agency and the Japanese Space Agency developed and launched a joint mission called BepiColombo, which will orbit Mercury with two probes: one to map the planet and the other to study its magnetosphere.[189] Launched on October 20, 2018, BepiColombo is expected to reach Mercury in 2025.[190] It will release a magnetometer probe into an elliptical orbit, then chemical rockets will fire to deposit the mapper probe into a circular orbit. Both probes will operate for one terrestrial year.[189] The mapper probe carries an array of spectrometers similar to those on MESSENGER, and will study the planet at many different wavelengths including infrared, ultraviolet, X-ray and gamma ray.[191]
+
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/4645.html.txt ADDED
@@ -0,0 +1,256 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ Mercury is the smallest and innermost planet in the Solar System. Its orbit around the Sun takes 87.97 days, the shortest of all the planets in the Solar System. It is named after the Roman deity Mercury, the messenger of the gods.
6
+
7
+ Like Venus, Mercury orbits the Sun within Earth's orbit as an inferior planet, and its apparent distance from the Sun as viewed from Earth never exceeds 28°. This proximity to the Sun means the planet can only be seen near the western horizon after sunset or eastern horizon before sunrise, usually in twilight. At this time, it may appear as a bright star-like object, but is often far more difficult to observe than Venus. The planet telescopically displays the complete range of phases, similar to Venus and the Moon, as it moves in its inner orbit relative to Earth, which recurs over its synodic period of approximately 116 days.
8
+
9
+ Mercury rotates in a way that is unique in the Solar System. It is tidally locked with the Sun in a 3:2 spin–orbit resonance,[16] meaning that relative to the fixed stars, it rotates on its axis exactly three times for every two revolutions it makes around the Sun.[a][17] As seen from the Sun, in a frame of reference that rotates with the orbital motion, it appears to rotate only once every two Mercurian years. An observer on Mercury would therefore see only one day every two Mercurian years.
10
+
11
+ Mercury's axis has the smallest tilt of any of the Solar System's planets (about ​1⁄30 degree). Its orbital eccentricity is the largest of all known planets in the Solar System;[b] at perihelion, Mercury's distance from the Sun is only about two-thirds (or 66%) of its distance at aphelion. Mercury's surface appears heavily cratered and is similar in appearance to the Moon's, indicating that it has been geologically inactive for billions of years. Having almost no atmosphere to retain heat, it has surface temperatures that vary diurnally more than on any other planet in the Solar System, ranging from 100 K (−173 °C; −280 °F) at night to 700 K (427 °C; 800 °F) during the day across the equatorial regions.[18] The polar regions are constantly below 180 K (−93 °C; −136 °F). The planet has no known natural satellites.
12
+
13
+ Two spacecraft have visited Mercury: Mariner 10 flew by in 1974 and 1975; and MESSENGER, launched in 2004, orbited Mercury over 4,000 times in four years before exhausting its fuel and crashing into the planet's surface on April 30, 2015.[19][20][21] The BepiColombo spacecraft is planned to arrive at Mercury in 2025.
14
+
15
+ Mercury appears to have a solid silicate crust and mantle overlying a solid, iron sulfide outer core layer, a deeper liquid core layer, and a solid inner core.[22][23]
16
+
17
+ Mercury is one of four terrestrial planets in the Solar System, and is a rocky body like Earth. It is the smallest planet in the Solar System, with an equatorial radius of 2,439.7 kilometres (1,516.0 mi).[3] Mercury is also smaller—albeit more massive—than the largest natural satellites in the Solar System, Ganymede and Titan. Mercury consists of approximately 70% metallic and 30% silicate material.[24] Mercury's density is the second highest in the Solar System at 5.427 g/cm3, only slightly less than Earth's density of 5.515 g/cm3.[3] If the effect of gravitational compression were to be factored out from both planets, the materials of which Mercury is made would be denser than those of Earth, with an uncompressed density of 5.3 g/cm3 versus Earth's 4.4 g/cm3.[25]
18
+
19
+ Mercury's density can be used to infer details of its inner structure. Although Earth's high density results appreciably from gravitational compression, particularly at the core, Mercury is much smaller and its inner regions are not as compressed. Therefore, for it to have such a high density, its core must be large and rich in iron.[26]
20
+
21
+ Geologists estimate that Mercury's core occupies about 55% of its volume; for Earth this proportion is 17%. Research published in 2007 suggests that Mercury has a molten core.[27][28] Surrounding the core is a 500–700 km (310–430 mi) mantle consisting of silicates.[29][30] Based on data from the Mariner 10 mission and Earth-based observation, Mercury's crust is estimated to be 35 km (22 mi) thick.[31] One distinctive feature of Mercury's surface is the presence of numerous narrow ridges, extending up to several hundred kilometers in length. It is thought that these were formed as Mercury's core and mantle cooled and contracted at a time when the crust had already solidified.[32]
22
+
23
+ Mercury's core has a higher iron content than that of any other major planet in the Solar System, and several theories have been proposed to explain this. The most widely accepted theory is that Mercury originally had a metal–silicate ratio similar to common chondrite meteorites, thought to be typical of the Solar System's rocky matter, and a mass approximately 2.25 times its current mass.[33] Early in the Solar System's history, Mercury may have been struck by a planetesimal of approximately 1/6 that mass and several thousand kilometers across.[33] The impact would have stripped away much of the original crust and mantle, leaving the core behind as a relatively major component.[33] A similar process, known as the giant impact hypothesis, has been proposed to explain the formation of the Moon.[33]
24
+
25
+ Alternatively, Mercury may have formed from the solar nebula before the Sun's energy output had stabilized. It would initially have had twice its present mass, but as the protosun contracted, temperatures near Mercury could have been between 2,500 and 3,500 K and possibly even as high as 10,000 K.[34] Much of Mercury's surface rock could have been vaporized at such temperatures, forming an atmosphere of "rock vapor" that could have been carried away by the solar wind.[34]
26
+
27
+ A third hypothesis proposes that the solar nebula caused drag on the particles from which Mercury was accreting, which meant that lighter particles were lost from the accreting material and not gathered by Mercury.[35] Each hypothesis predicts a different surface composition, and two space missions have set out to make the observations needed to test them. MESSENGER, which ended in 2015, found higher-than-expected potassium and sulfur levels on the surface, suggesting that the giant impact hypothesis and vaporization of the crust and mantle did not occur, because potassium and sulfur would have been driven off by the extreme heat of these events.[36] BepiColombo, which will arrive at Mercury in 2025, will make observations to test these hypotheses.[37] The findings so far would seem to favor the third hypothesis; however, further analysis of the data is needed.[38]
28
+
29
+ Mercury's surface is similar in appearance to that of the Moon, showing extensive mare-like plains and heavy cratering, indicating that it has been geologically inactive for billions of years. Because knowledge of Mercury's geology had been based only on the 1975 Mariner 10 flyby and terrestrial observations, it is the least understood of the terrestrial planets.[28] As data from the MESSENGER orbiter are processed, this knowledge will increase. For example, an unusual crater with radiating troughs was discovered, which scientists called "the spider".[39] It was later named Apollodorus.[40]
30
+
31
+ Albedo features are areas of markedly different reflectivity, as seen by telescopic observation. Mercury has dorsa (also called "wrinkle-ridges"), Moon-like highlands, montes (mountains), planitiae (plains), rupes (escarpments), and valles (valleys).[41][42]
32
+
33
+ Names for features on Mercury come from a variety of sources. Names coming from people are limited to the deceased. Craters are named for artists, musicians, painters, and authors who have made outstanding or fundamental contributions to their field. Ridges, or dorsa, are named for scientists who have contributed to the study of Mercury. Depressions or fossae are named for works of architecture. Montes are named for the word "hot" in a variety of languages. Plains or planitiae are named for Mercury in various languages. Escarpments or rupēs are named for ships of scientific expeditions. Valleys or valles are named for abandoned cities, towns, or settlements of antiquity.[43]
34
+
35
+ Mercury was heavily bombarded by comets and asteroids during and shortly following its formation 4.6 billion years ago, as well as during a possibly separate subsequent episode called the Late Heavy Bombardment that ended 3.8 billion years ago.[44] During this period of intense crater formation, Mercury received impacts over its entire surface,[42] facilitated by the lack of any atmosphere to slow impactors down.[45] During this time Mercury was volcanically active; basins such as the Caloris Basin were filled by magma, producing smooth plains similar to the maria found on the Moon.[46][47]
36
+
37
+ Data from the October 2008 flyby of MESSENGER gave researchers a greater appreciation for the jumbled nature of Mercury's surface. Mercury's surface is more heterogeneous than either Mars's or the Moon's, both of which contain significant stretches of similar geology, such as maria and plateaus.[48]
38
+
39
+ Craters on Mercury range in diameter from small bowl-shaped cavities to multi-ringed impact basins hundreds of kilometers across. They appear in all states of degradation, from relatively fresh rayed craters to highly degraded crater remnants. Mercurian craters differ subtly from lunar craters in that the area blanketed by their ejecta is much smaller, a consequence of Mercury's stronger surface gravity.[49] According to IAU rules, each new crater must be named after an artist who was famous for more than fifty years, and dead for more than three years, before the date the crater is named.[50]
40
+
41
+ The largest known crater is Caloris Basin, with a diameter of 1,550 km.[51] The impact that created the Caloris Basin was so powerful that it caused lava eruptions and left a concentric ring over 2 km tall surrounding the impact crater. At the antipode of the Caloris Basin is a large region of unusual, hilly terrain known as the "Weird Terrain". One hypothesis for its origin is that shock waves generated during the Caloris impact traveled around Mercury, converging at the basin's antipode (180 degrees away). The resulting high stresses fractured the surface.[52] Alternatively, it has been suggested that this terrain formed as a result of the convergence of ejecta at this basin's antipode.[53]
42
+
43
+ Overall, about 15 impact basins have been identified on the imaged part of Mercury. A notable basin is the 400 km wide, multi-ring Tolstoj Basin that has an ejecta blanket extending up to 500 km from its rim and a floor that has been filled by smooth plains materials. Beethoven Basin has a similar-sized ejecta blanket and a 625 km diameter rim.[49] Like that of the Moon, the surface of Mercury has likely incurred the effects of space weathering processes, including solar wind and micrometeorite impacts.[54]
44
+
45
+ There are two geologically distinct plains regions on Mercury.[49][55] Gently rolling, hilly plains in the regions between craters are Mercury's oldest visible surfaces,[49] predating the heavily cratered terrain. These inter-crater plains appear to have obliterated many earlier craters, and show a general paucity of smaller craters below about 30 km in diameter.[55]
46
+
47
+ Smooth plains are widespread flat areas that fill depressions of various sizes and bear a strong resemblance to the lunar maria. Notably, they fill a wide ring surrounding the Caloris Basin. Unlike lunar maria, the smooth plains of Mercury have the same albedo as the older inter-crater plains. Despite a lack of unequivocally volcanic characteristics, the localization and rounded, lobate shape of these plains strongly support volcanic origins.[49] All the smooth plains of Mercury formed significantly later than the Caloris Basin, as evidenced by appreciably smaller crater densities than on the Caloris ejecta blanket.[49] The floor of the Caloris Basin is filled by a geologically distinct flat plain, broken up by ridges and fractures in a roughly polygonal pattern. It is not clear whether these are volcanic lavas induced by the impact, or a large sheet of impact melt.[49]
48
+
49
+ One unusual feature of Mercury's surface is the numerous compression folds, or rupes, that crisscross the plains. As Mercury's interior cooled, it contracted and its surface began to deform, creating wrinkle ridges and lobate scarps associated with thrust faults. The scarps can reach lengths of 1000 km and heights of 3 km.[56] These compressional features can be seen on top of other features, such as craters and smooth plains, indicating they are more recent.[57] Mapping of the features has suggested a total shrinkage of Mercury's radius in the range of ~1 to 7 km.[58] Small-scale thrust fault scarps have been found, tens of meters in height and with lengths in the range of a few km, that appear to be less than 50 million years old, indicating that compression of the interior and consequent surface geological activity continue to the present.[56][58]
50
+
51
+ The Lunar Reconnaissance Orbiter discovered that similar small thrust faults exist on the Moon.
52
+
53
+ Images obtained by MESSENGER have revealed evidence for pyroclastic flows on Mercury from low-profile shield volcanoes.[59][60][61] MESSENGER data have helped identify 51 pyroclastic deposits on the surface, of which 90% are found within impact craters.[62] A study of the degradation state of the impact craters that host pyroclastic deposits suggests that pyroclastic activity occurred on Mercury over a prolonged interval.[62]
54
+
55
+ A "rimless depression" inside the southwest rim of the Caloris Basin consists of at least nine overlapping volcanic vents, each individually up to 8 km in diameter. It is thus a "compound volcano".[63] The vent floors are at a least 1 km below their brinks and they bear a closer resemblance to volcanic craters sculpted by explosive eruptions or modified by collapse into void spaces created by magma withdrawal back down into a conduit.[63] Scientists could not quantify the age of the volcanic complex system, but reported that it could be of the order of a billion years.[63]
56
+
57
+ The surface temperature of Mercury ranges from 100 to 700 K (−173 to 427 °C; −280 to 800 °F)[18] at the most extreme places: 0°N, 0°W, or 180°W. It never rises above 180 K at the poles,[12]
58
+ due to the absence of an atmosphere and a steep temperature gradient between the equator and the poles. The subsolar point reaches about 700 K during perihelion (0°W or 180°W), but only 550 K at aphelion (90°W or 270°W).[65]
59
+ On the dark side of the planet, temperatures average 110 K.[12][66]
60
+ The intensity of sunlight on Mercury's surface ranges between 4.59 and 10.61 times the solar constant (1,370 W·m⁻²).[67]
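+
+ That swing in insolation is just the inverse-square law applied to Mercury's changing distance from the Sun. A minimal sketch in Python, assuming rounded perihelion and aphelion distances of about 0.307 and 0.467 AU (the figures quoted above come from more precise orbital values):
+
+ # Inverse-square estimate of Mercury's surface insolation relative to the solar constant.
+ SOLAR_CONSTANT = 1370.0  # W/m^2 at 1 AU
+
+ def relative_insolation(distance_au):
+     # Sunlight intensity relative to 1 AU falls off with the square of distance.
+     return 1.0 / distance_au ** 2
+
+ for label, d in [("aphelion", 0.467), ("perihelion", 0.307)]:
+     ratio = relative_insolation(d)
+     print(f"{label}: {ratio:.2f} x solar constant, about {ratio * SOLAR_CONSTANT:.0f} W/m^2")
+ # Prints roughly 4.6x at aphelion and 10.6x at perihelion, matching the quoted range.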
61
+
62
+ Although the daylight temperature at the surface of Mercury is generally extremely high, observations strongly suggest that ice (frozen water) exists on Mercury. The floors of deep craters at the poles are never exposed to direct sunlight, and temperatures there remain below 102 K, far lower than the global average.[68] Water ice strongly reflects radar, and observations by the 70-meter Goldstone Solar System Radar and the VLA in the early 1990s revealed that there are patches of high radar reflection near the poles.[69] Although ice was not the only possible cause of these reflective regions, astronomers think it was the most likely.[70]
63
+
64
+ The icy regions are estimated to contain about 10¹⁴–10¹⁵ kg of ice,[71] and may be covered by a layer of regolith that inhibits sublimation.[72] By comparison, the Antarctic ice sheet on Earth has a mass of about 4×10¹⁸ kg, and Mars's south polar cap contains about 10¹⁶ kg of water.[71] The origin of the ice on Mercury is not yet known, but the two most likely sources are from outgassing of water from the planet's interior or deposition by impacts of comets.[71]
65
+
66
+ Mercury is too small and hot for its gravity to retain any significant atmosphere over long periods of time; it does have a tenuous surface-bounded exosphere[73] containing hydrogen, helium, oxygen, sodium, calcium, potassium and others at a surface pressure of less than approximately 0.5 nPa (0.005 picobars).[14] This exosphere is not stable—atoms are continuously lost and replenished from a variety of sources. Hydrogen atoms and helium atoms probably come from the solar wind, diffusing into Mercury's magnetosphere before later escaping back into space. Radioactive decay of elements within Mercury's crust is another source of helium, as well as sodium and potassium. MESSENGER found high proportions of calcium, helium, hydroxide, magnesium, oxygen, potassium, silicon and sodium. Water vapor is present, released by a combination of processes such as comets striking its surface, sputtering creating water out of hydrogen from the solar wind and oxygen from rock, and sublimation from reservoirs of water ice in the permanently shadowed polar craters. The detection of high amounts of water-related ions like O⁺, OH⁻, and H₃O⁺ was a surprise.[74][75] Because of the quantities of these ions that were detected in Mercury's space environment, scientists surmise that these molecules were blasted from the surface or exosphere by the solar wind.[76][77]
67
+
68
+ Sodium, potassium and calcium were discovered in the atmosphere during the 1980s and 1990s, and are thought to result primarily from the vaporization of surface rock struck by micrometeorite impacts,[78] including, at present, material from Comet Encke.[79] In 2008, magnesium was discovered by MESSENGER.[80] Studies indicate that, at times, sodium emissions are localized at points that correspond to the planet's magnetic poles. This would indicate an interaction between the magnetosphere and the planet's surface.[81]
69
+
70
+ On November 29, 2012, NASA confirmed that images from MESSENGER had detected that craters at the north pole contained water ice. MESSENGER's principal investigator Sean Solomon is quoted in The New York Times estimating the volume of the ice to be large enough to "encase Washington, D.C., in a frozen block two and a half miles deep".[64][c]
71
+
72
+ Despite its small size and slow 59-day-long rotation, Mercury has a significant, and apparently global, magnetic field. According to measurements taken by Mariner 10, it is about 1.1% the strength of Earth's. The magnetic-field strength at Mercury's equator is about 300 nT.[82][83] Like that of Earth, Mercury's magnetic field is dipolar.[81] Unlike Earth's, Mercury's magnetic poles are nearly aligned with the planet's spin axis.[84] Measurements from both the Mariner 10 and MESSENGER space probes have indicated that the strength and shape of the magnetic field are stable.[84]
73
+
74
+ It is likely that this magnetic field is generated by a dynamo effect, in a manner similar to the magnetic field of Earth.[85][86] This dynamo effect would result from the circulation of the planet's iron-rich liquid core. Particularly strong tidal effects caused by the planet's high orbital eccentricity would serve to keep the core in the liquid state necessary for this dynamo effect.[29]
75
+
76
+ Mercury's magnetic field is strong enough to deflect the solar wind around the planet, creating a magnetosphere. The planet's magnetosphere, though small enough to fit within Earth,[81] is strong enough to trap solar wind plasma. This contributes to the space weathering of the planet's surface.[84] Observations taken by the Mariner 10 spacecraft detected this low energy plasma in the magnetosphere of the planet's nightside. Bursts of energetic particles in the planet's magnetotail indicate a dynamic quality to the planet's magnetosphere.[81]
77
+
78
+ During its second flyby of the planet on October 6, 2008, MESSENGER discovered that Mercury's magnetic field can be extremely "leaky". The spacecraft encountered magnetic "tornadoes" – twisted bundles of magnetic fields connecting the planetary magnetic field to interplanetary space – that were up to 800 km wide, or a third of the radius of the planet. These twisted magnetic flux tubes, technically known as flux transfer events, form open windows in the planet's magnetic shield through which the solar wind may enter and directly impact Mercury's surface via magnetic reconnection.[87] This also occurs in Earth's magnetic field. The MESSENGER observations showed the reconnection rate is ten times higher at Mercury, but its proximity to the Sun only accounts for about a third of the reconnection rate observed by MESSENGER.[87]
79
+
80
+ Mercury has the most eccentric orbit of all the planets; its eccentricity is 0.21 with its distance from the Sun ranging from 46,000,000 to 70,000,000 km (29,000,000 to 43,000,000 mi). It takes 87.969 Earth days to complete an orbit. The diagram illustrates the effects of the eccentricity, showing Mercury's orbit overlaid with a circular orbit having the same semi-major axis. Mercury's higher velocity when it is near perihelion is clear from the greater distance it covers in each 5-day interval. In the diagram the varying distance of Mercury to the Sun is represented by the size of the planet, which is inversely proportional to Mercury's distance from the Sun. This varying distance to the Sun leads to Mercury's surface being flexed by tidal bulges raised by the Sun that are about 17 times stronger than the Moon's on Earth.[88] Combined with a 3:2 spin–orbit resonance of the planet's rotation around its axis, it also results in complex variations of the surface temperature.[24]
81
+ The resonance makes a single solar day on Mercury last exactly two Mercury years, or about 176 Earth days.[89]
82
+
83
+ Mercury's orbit is inclined by 7 degrees to the plane of Earth's orbit (the ecliptic), as shown in the diagram on the right. As a result, transits of Mercury across the face of the Sun can only occur when the planet is crossing the plane of the ecliptic at the time it lies between Earth and the Sun, which is in May or November. This occurs about every seven years on average.[90]
84
+
85
+ Mercury's axial tilt is almost zero,[91] with the best measured value as low as 0.027 degrees.[92] This is significantly smaller than that of Jupiter, which has the second smallest axial tilt of all planets at 3.1 degrees. This means that to an observer at Mercury's poles, the center of the Sun never rises more than 2.1 arcminutes above the horizon.[92]
86
+
87
+ At certain points on Mercury's surface, an observer would be able to see the Sun peek up a little more than two-thirds of the way over the horizon, then reverse and set before rising again, all within the same Mercurian day.[93] This is because approximately four Earth days before perihelion, Mercury's angular orbital velocity equals its angular rotational velocity so that the Sun's apparent motion ceases; closer to perihelion, Mercury's angular orbital velocity then exceeds the angular rotational velocity. Thus, to a hypothetical observer on Mercury, the Sun appears to move in a retrograde direction. Four Earth days after perihelion, the Sun's normal apparent motion resumes.[24] A similar effect would have occurred if Mercury had been in synchronous rotation: the alternating gain and loss of rotation over revolution would have caused a libration of 23.65° in longitude.[94]
88
+
89
+ For the same reason, there are two points on Mercury's equator, 180 degrees apart in longitude, at either of which, around perihelion in alternate Mercurian years (once a Mercurian day), the Sun passes overhead, then reverses its apparent motion and passes overhead again, then reverses a second time and passes overhead a third time, taking a total of about 16 Earth-days for this entire process. In the other alternate Mercurian years, the same thing happens at the other of these two points. The amplitude of the retrograde motion is small, so the overall effect is that, for two or three weeks, the Sun is almost stationary overhead, and is at its most brilliant because Mercury is at perihelion, its closest to the Sun. This prolonged exposure to the Sun at its brightest makes these two points the hottest places on Mercury. Maximum temperature occurs when the Sun is at an angle of about 25 degrees past noon due to diurnal temperature lag, at 0.4 Mercury days and 0.8 Mercury years past sunrise.[95] Conversely, there are two other points on the equator, 90 degrees of longitude apart from the first ones, where the Sun passes overhead only when the planet is at aphelion in alternate years, when the apparent motion of the Sun in Mercury's sky is relatively rapid. These points, which are the ones on the equator where the apparent retrograde motion of the Sun happens when it is crossing the horizon as described in the preceding paragraph, receive much less solar heat than the first ones described above.
90
+
91
+ Mercury attains inferior conjunction (nearest approach to Earth) every 116 Earth days on average,[3] but this interval can range from 105 days to 129 days due to the planet's eccentric orbit. Mercury can come as near as 82.2 gigametres (0.549 astronomical units; 51.1 million miles) to Earth, and that is slowly declining: The next approach to within 82.1 Gm (51.0 million miles) is in 2679, and to within 82.0 Gm (51.0 million miles) in 4487, but it will not be closer to Earth than 80 Gm (50 million miles) until 28,622.[96] Its period of retrograde motion as seen from Earth can vary from 8 to 15 days on either side of inferior conjunction. This large range arises from the planet's high orbital eccentricity.[24] On average, Mercury is the closest planet to the Earth,[97] and it is the closest planet to each of the other planets in the Solar System.[98][99]
92
+
93
+ The longitude convention for Mercury puts the zero of longitude at one of the two hottest points on the surface, as described above. However, when this area was first visited, by Mariner 10, this zero meridian was in darkness, so it was impossible to select a feature on the surface to define the exact position of the meridian. Therefore, a small crater further west was chosen, called Hun Kal, which provides the exact reference point for measuring longitude.[100][101] The center of Hun Kal defines the 20° west meridian. A 1970 International Astronomical Union resolution suggests that longitudes be measured positively in the westerly direction on Mercury.[102] The two hottest places on the equator are therefore at longitudes 0° W and 180° W, and the coolest points on the equator are at longitudes 90° W and 270° W. However, the MESSENGER project uses an east-positive convention.[103]
94
+
95
+ For many years it was thought that Mercury was synchronously tidally locked with the Sun, rotating once for each orbit and always keeping the same face directed towards the Sun, in the same way that the same side of the Moon always faces Earth. Radar observations in 1965 proved that the planet has a 3:2 spin-orbit resonance, rotating three times for every two revolutions around the Sun. The eccentricity of Mercury's orbit makes this resonance stable—at perihelion, when the solar tide is strongest, the Sun is nearly still in Mercury's sky.[104]
96
+
97
+ The rare 3:2 resonant tidal locking is stabilized by the variance of the tidal force along Mercury's eccentric orbit, acting on a permanent dipole component of Mercury's mass distribution.[105] In a circular orbit there is no such variance, so the only resonance stabilized in such an orbit is at 1:1 (e.g., Earth–Moon), when the tidal force, stretching a body along the "center-body" line, exerts a torque that aligns the body's axis of least inertia (the "longest" axis, and the axis of the aforementioned dipole) to point always at the center. However, with noticeable eccentricity, like that of Mercury's orbit, the tidal force has a maximum at perihelion and therefore stabilizes resonances, like 3:2, enforcing that the planet points its axis of least inertia roughly at the Sun when passing through perihelion.[105]
98
+
99
+ The original reason astronomers thought it was synchronously locked was that, whenever Mercury was best placed for observation, it was always nearly at the same point in its 3:2 resonance, hence showing the same face. This is because, coincidentally, Mercury's rotation period is almost exactly half of its synodic period with respect to Earth. Due to Mercury's 3:2 spin-orbit resonance, a solar day (the length between two meridian transits of the Sun) lasts about 176 Earth days.[24] A sidereal day (the period of rotation) lasts about 58.7 Earth days.[24]
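+
+ The 176-day figure can be reproduced from the two periods just quoted: because rotation and orbital motion are in the same direction, the solar rate is the rotation rate minus the orbital rate. A minimal sketch, assuming a sidereal rotation period of about 58.65 days:
+
+ # Length of Mercury's solar day from its sidereal day and orbital period.
+ sidereal_day = 58.646    # Earth days (rotation period relative to the stars)
+ orbital_period = 87.969  # Earth days (one Mercurian year)
+
+ solar_day = 1.0 / (1.0 / sidereal_day - 1.0 / orbital_period)
+ print(round(solar_day, 1))                   # ~175.9 Earth days, i.e. about 176
+ print(round(solar_day / orbital_period, 2))  # ~2.0 Mercurian years per solar day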
100
+
101
+ Simulations indicate that the orbital eccentricity of Mercury varies chaotically from nearly zero (circular) to more than 0.45 over millions of years due to perturbations from the other planets.[24][106]
102
+ This was thought to explain Mercury's 3:2 spin-orbit resonance (rather than the more usual 1:1), because this state is more likely to arise during a period of high eccentricity.[107]
103
+ However, accurate modeling based on a realistic model of tidal response has demonstrated that Mercury was captured into the 3:2 spin-orbit state at a very early stage of its history, within 20 (more likely, 10) million years after its formation.[108]
104
+
105
+ Numerical simulations show that a future secular orbital resonant perihelion interaction with Jupiter may cause the eccentricity of Mercury's orbit to increase to the point where there is a 1% chance that the planet may collide with Venus within the next five billion years.[109][110]
106
+
107
+ In 1859, the French mathematician and astronomer Urbain Le Verrier reported that the slow precession of Mercury's orbit around the Sun could not be completely explained by Newtonian mechanics and perturbations by the known planets. He suggested, among possible explanations, that another planet (or perhaps instead a series of smaller 'corpuscules') might exist in an orbit even closer to the Sun than that of Mercury, to account for this perturbation.[111] (Other explanations considered included a slight oblateness of the Sun.) The success of the search for Neptune based on its perturbations of the orbit of Uranus led astronomers to place faith in this possible explanation, and the hypothetical planet was named Vulcan, but no such planet was ever found.[112]
108
+
109
+ The perihelion precession of Mercury is 5,600 arcseconds (1.5556°) per century relative to Earth, or 574.10±0.65 arcseconds per century[113] relative to the inertial ICRF. Newtonian mechanics, taking into account all the effects from the other planets, predicts a precession of 5,557 arcseconds (1.5436°) per century.[113] In the early 20th century, Albert Einstein's general theory of relativity provided the explanation for the observed precession, by formalizing gravitation as being mediated by the curvature of spacetime. The effect is small: just 42.98 arcseconds per century for Mercury; it therefore requires a little over twelve million orbits for a full excess turn. Similar, but much smaller, effects exist for other Solar System bodies: 8.62 arcseconds per century for Venus, 3.84 for Earth, 1.35 for Mars, and 10.05 for 1566 Icarus.[114][115]
110
+
111
+ Einstein's formula for the perihelion shift per revolution is
+
+ ε = 24π³a² / (T²c²(1 − e²)),
+
+ where e is the orbital eccentricity, a the semi-major axis, and T the orbital period. Filling in the values gives a result of 0.1035 arcseconds per revolution or 0.4297 arcseconds per Earth year, i.e., 42.97 arcseconds per century. This is in close agreement with the accepted value of Mercury's perihelion advance of 42.98 arcseconds per century.[116]
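+
+ That arithmetic can be checked numerically. A minimal sketch, assuming standard orbital values for Mercury (semi-major axis ~0.387 AU, the 87.969-day period quoted earlier, and eccentricity ~0.206):
+
+ import math
+
+ # Relativistic perihelion advance per revolution: eps = 24*pi^3*a^2 / (T^2*c^2*(1 - e^2))
+ a = 5.791e10            # semi-major axis, metres
+ T = 87.969 * 86400.0    # orbital period, seconds
+ e = 0.2056              # orbital eccentricity
+ c = 299_792_458.0       # speed of light, m/s
+
+ eps = 24 * math.pi ** 3 * a ** 2 / (T ** 2 * c ** 2 * (1 - e ** 2))  # radians per revolution
+ arcsec_per_rev = math.degrees(eps) * 3600
+ orbits_per_century = 36525.0 / 87.969
+ print(round(arcsec_per_rev, 4))                       # ~0.1035 arcsec per revolution
+ print(round(arcsec_per_rev * orbits_per_century, 2))  # ~42.98 arcsec per century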
190
+
191
+ Studies reported in March 2020 suggest that there may be scientific support for considering parts of Mercury to have been habitable, and perhaps that life forms, albeit likely primitive microorganisms, may once have existed on the planet.[117][118]
192
+
193
+ Mercury's apparent magnitude is calculated to vary between −2.48 (brighter than Sirius) around superior conjunction and +7.25 (below the limit of naked-eye visibility) around inferior conjunction.[13] The mean apparent magnitude is 0.23 while the standard deviation of 1.78 is the largest of any planet. The mean apparent magnitude at superior conjunction is −1.89 while that at inferior conjunction is +5.93.[13] Observation of Mercury is complicated by its proximity to the Sun, as it is lost in the Sun's glare for much of the time. Mercury can be observed for only a brief period during either morning or evening twilight.[119]
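+
+ Because the magnitude scale is logarithmic (a difference of 5 magnitudes corresponds to a factor of 100 in brightness), this range implies a very large swing in apparent brightness. A minimal sketch of the conversion:
+
+ # Brightness ratio implied by Mercury's extreme apparent magnitudes.
+ m_brightest = -2.48  # near superior conjunction
+ m_faintest = 7.25    # near inferior conjunction
+
+ ratio = 10 ** ((m_faintest - m_brightest) / 2.5)
+ print(round(ratio))  # ~7,800: Mercury at its brightest is thousands of times brighter than at its faintest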
194
+
195
+ Mercury can, like several other planets and the brightest stars, be seen during a total solar eclipse.[120]
196
+
197
+ Like the Moon and Venus, Mercury exhibits phases as seen from Earth. It is "new" at inferior conjunction and "full" at superior conjunction. The planet is rendered invisible from Earth on both of these occasions because of its being obscured by the Sun,[119] except its new phase during a transit.
198
+
199
+ Mercury is technically brightest as seen from Earth when it is at a full phase. Although Mercury is farthest from Earth when it is full, the greater illuminated area that is visible and the opposition brightness surge more than compensate for the distance.[121] The opposite is true for Venus, which appears brightest when it is a crescent, because it is much closer to Earth than when gibbous.[121][122]
200
+
201
+ Nonetheless, the brightest (full phase) appearance of Mercury is an essentially impossible time for practical observation, because of the extreme proximity of the Sun. Mercury is best observed at the first and last quarter, although they are phases of lesser brightness. The first and last quarter phases occur at greatest elongation east and west of the Sun, respectively. At both of these times Mercury's separation from the Sun ranges anywhere from 17.9° at perihelion to 27.8° at aphelion.[123][124] At greatest western elongation, Mercury rises at its earliest before sunrise, and at greatest eastern elongation, it sets at its latest after sunset.[125]
202
+
203
+ Mercury is more easily seen from the tropics and subtropics than from higher latitudes. Viewed from low latitudes and at the right times of year, the ecliptic intersects the horizon at a steep angle. When Mercury appears directly above the Sun (that is, when its orbit appears vertical with respect to the horizon) at its maximum elongation of 28°, it stands 10° above the horizon at the moment the Sun has sunk 18° below it and the sky has just become completely dark.[d] This angle is the maximum altitude at which Mercury is visible in a completely dark sky.
204
+
205
+ At middle latitudes, Mercury is more often and easily visible from the Southern Hemisphere than from the Northern. This is because Mercury's maximum western elongation occurs only during early autumn in the Southern Hemisphere, whereas its greatest eastern elongation happens only during late winter in the Southern Hemisphere.[125] In both of these cases, the angle at which the planet's orbit intersects the horizon is maximized, allowing it to rise several hours before sunrise in the former instance and not set until several hours after sundown in the latter from southern mid-latitudes, such as Argentina and South Africa.[125]
206
+
207
+ An alternate method for viewing Mercury involves observing the planet during daylight hours when conditions are clear, ideally when it is at its greatest elongation. This allows the planet to be found easily, even when using telescopes with 8 cm (3.1 in) apertures. Care must be taken to ensure the instrument is not pointed directly towards the Sun because of the risk of eye damage. This method bypasses the limitation of twilight observing when the ecliptic is located at a low elevation (e.g. on autumn evenings).
208
+
209
+ Ground-based telescope observations of Mercury reveal only an illuminated partial disk with limited detail. The first of two spacecraft to visit the planet was Mariner 10, which mapped about 45% of its surface from 1974 to 1975. The second is the MESSENGER spacecraft, which after three Mercury flybys between 2008 and 2009, attained orbit around Mercury on March 17, 2011,[126] to study and map the rest of the planet.[127]
210
+
211
+ The Hubble Space Telescope cannot observe Mercury at all, due to safety procedures that prevent its pointing too close to the Sun.[128]
212
+
213
+ Because the shift of 0.15 revolutions in a year makes up a seven-year cycle (0.15 × 7 ≈ 1.0), in the seventh year Mercury follows almost exactly (earlier by 7 days) the sequence of phenomena it showed seven years before.[123]
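+
+ A minimal sketch of where the 0.15 figure plausibly comes from, assuming it refers to the fractional number of synodic cycles Mercury completes per Earth year (using the roughly 116-day interval between inferior conjunctions mentioned above):
+
+ # Why Mercury's apparitions approximately repeat on a seven-year cycle.
+ synodic_period = 115.88  # days between successive inferior conjunctions (assumed average value)
+ earth_year = 365.25      # days
+
+ cycles_per_year = earth_year / synodic_period   # ~3.15
+ shift = cycles_per_year % 1                     # ~0.15 of a cycle of phenomena per year
+ print(round(shift, 2))
+ print(round((7 * shift - 1) * synodic_period))  # ~7 days: after seven years events recur about 7 days earlier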
214
+
215
+ The earliest known recorded observations of Mercury are from the Mul.Apin tablets. These observations were most likely made by an Assyrian astronomer around the 14th century BC.[129] The cuneiform name used to designate Mercury on the Mul.Apin tablets is transcribed as Udu.Idim.Gu₄.Ud ("the jumping planet").[e][130] Babylonian records of Mercury date back to the 1st millennium BC. The Babylonians called the planet Nabu after the messenger to the gods in their mythology.[131]
216
+
217
+ The ancients knew Mercury by different names depending on whether it was an evening star or a morning star. By about 350 BC, the ancient Greeks had realized the two stars were one.[132] They knew the planet as Στίλβων Stilbōn, meaning "twinkling", and Ἑρμής Hermēs, for its fleeting motion,[133] a name that is retained in modern Greek (Ερμής Ermis).[134] The Romans named the planet after the swift-footed Roman messenger god, Mercury (Latin Mercurius), which they equated with the Greek Hermes, because it moves across the sky faster than any other planet.[132][135] The astronomical symbol for Mercury is a stylized version of Hermes' caduceus.[136]
218
+
219
+ The Greco-Egyptian[137] astronomer Ptolemy wrote about the possibility of planetary transits across the face of the Sun in his work Planetary Hypotheses. He suggested that no transits had been observed either because planets such as Mercury were too small to see, or because the transits were too infrequent.[138]
220
+
221
+ In ancient China, Mercury was known as "the Hour Star" (Chen-xing 辰星). It was associated with the direction north and the phase of water in the Five Phases system of metaphysics.[139] Modern Chinese, Korean, Japanese and Vietnamese cultures refer to the planet literally as the "water star" (水星), based on the Five elements.[140][141][142] Hindu mythology used the name Budha for Mercury, and this god was thought to preside over Wednesday.[143] The god Odin (or Woden) of Germanic paganism was associated with the planet Mercury and Wednesday.[144] The Maya may have represented Mercury as an owl (or possibly four owls; two for the morning aspect and two for the evening) that served as a messenger to the underworld.[145]
222
+
223
+ In medieval Islamic astronomy, the Andalusian astronomer Abū Ishāq Ibrāhīm al-Zarqālī in the 11th century described the deferent of Mercury's geocentric orbit as being oval, like an egg or a pignon, although this insight did not influence his astronomical theory or his astronomical calculations.[146][147] In the 12th century, Ibn Bajjah observed "two planets as black spots on the face of the Sun", which was later suggested as the transit of Mercury and/or Venus by the Maragha astronomer Qotb al-Din Shirazi in the 13th century.[148] (Note that most such medieval reports of transits were later taken as observations of sunspots.[149])
224
+
225
+ In India, the Kerala school astronomer Nilakantha Somayaji in the 15th century developed a partially heliocentric planetary model in which Mercury orbits the Sun, which in turn orbits Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century.[150]
226
+
227
+ The first telescopic observations of Mercury were made by Galileo in the early 17th century. Although he observed phases when he looked at Venus, his telescope was not powerful enough to see the phases of Mercury. In 1631, Pierre Gassendi made the first telescopic observations of the transit of a planet across the Sun when he saw a transit of Mercury predicted by Johannes Kepler. In 1639, Giovanni Zupi used a telescope to discover that the planet had orbital phases similar to Venus and the Moon. The observation demonstrated conclusively that Mercury orbited around the Sun.[24]
228
+
229
+ A rare event in astronomy is the passage of one planet in front of another (occultation), as seen from Earth. Mercury and Venus occult each other every few centuries, and the event of May 28, 1737 is the only one historically observed, having been seen by John Bevis at the Royal Greenwich Observatory.[151] The next occultation of Mercury by Venus will be on December 3, 2133.[152]
230
+
231
+ The difficulties inherent in observing Mercury mean that it has been far less studied than the other planets. In 1800, Johann Schröter made observations of surface features, claiming to have observed 20-kilometre-high (12 mi) mountains. Friedrich Bessel used Schröter's drawings to erroneously estimate the rotation period as 24 hours and an axial tilt of 70°.[153] In the 1880s, Giovanni Schiaparelli mapped the planet more accurately, and suggested that Mercury's rotational period was 88 days, the same as its orbital period due to tidal locking.[154] This phenomenon is known as synchronous rotation. The effort to map the surface of Mercury was continued by Eugenios Antoniadi, who published a book in 1934 that included both maps and his own observations.[81] Many of the planet's surface features, particularly the albedo features, take their names from Antoniadi's map.[155]
232
+
233
+ In June 1962, Soviet scientists at the Institute of Radio-engineering and Electronics of the USSR Academy of Sciences, led by Vladimir Kotelnikov, became the first to bounce a radar signal off Mercury and receive it, starting radar observations of the planet.[156][157][158] Three years later, radar observations by Americans Gordon H. Pettengill and Rolf B. Dyce, using the 300-meter Arecibo Observatory radio telescope in Puerto Rico, showed conclusively that the planet's rotational period was about 59 days.[159][160] The theory that Mercury's rotation was synchronous had become widely held, and it was a surprise to astronomers when these radio observations were announced. If Mercury were tidally locked, its dark face would be extremely cold, but measurements of radio emission revealed that it was much hotter than expected. Astronomers were reluctant to drop the synchronous rotation theory and proposed alternative mechanisms such as powerful heat-distributing winds to explain the observations.[161]
234
+
235
+ Italian astronomer Giuseppe Colombo noted that the rotation value was about two-thirds of Mercury's orbital period, and proposed that the planet's orbital and rotational periods were locked into a 3:2 rather than a 1:1 resonance.[162] Data from Mariner 10 subsequently confirmed this view.[163] This means that Schiaparelli's and Antoniadi's maps were not "wrong". Instead, the astronomers saw the same features during every second orbit and recorded them, but disregarded those seen in the meantime, when Mercury's other face was toward the Sun, because the orbital geometry meant that these observations were made under poor viewing conditions.[153]
236
+
237
+ Ground-based optical observations did not shed much further light on Mercury, but radio astronomers using interferometry at microwave wavelengths, a technique that enables removal of the solar radiation, were able to discern physical and chemical characteristics of the subsurface layers to a depth of several meters.[164][165] Not until the first space probe flew past Mercury did many of its most fundamental morphological properties become known. Moreover, recent technological advances have led to improved ground-based observations. In 2000, high-resolution lucky imaging observations were conducted by the Mount Wilson Observatory 1.5 meter Hale telescope. They provided the first views that resolved surface features on the parts of Mercury that were not imaged in the Mariner 10 mission.[166] Most of the planet has been mapped by the Arecibo radar telescope, with 5 km (3.1 mi) resolution, including polar deposits in shadowed craters of what may be water ice.[167]
238
+
239
+ Reaching Mercury from Earth poses significant technical challenges, because it orbits so much closer to the Sun than Earth. A Mercury-bound spacecraft launched from Earth must travel over 91 million kilometres (57 million miles) into the Sun's gravitational potential well. Mercury has an orbital speed of 48 km/s (30 mi/s), whereas Earth's orbital speed is 30 km/s (19 mi/s). Therefore, the spacecraft must make a large change in velocity (delta-v) to enter a Hohmann transfer orbit that passes near Mercury, as compared to the delta-v required for other planetary missions.[169]
240
+
241
+ The potential energy liberated by moving down the Sun's potential well becomes kinetic energy, requiring another large delta-v change to do anything other than rapidly pass by Mercury. To land safely or enter a stable orbit the spacecraft would rely entirely on rocket motors. Aerobraking is ruled out because Mercury has a negligible atmosphere. A trip to Mercury requires more rocket fuel than that required to escape the Solar System completely. As a result, only two space probes have visited it so far.[170] A proposed alternative approach would use a solar sail to attain a Mercury-synchronous orbit around the Sun.[171]
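+
+ To give a sense of scale for these velocity changes, here is a minimal sketch of an idealized two-impulse Hohmann transfer between circular, coplanar orbits at Earth's and Mercury's mean distances. The numbers are illustrative only; real missions such as Mariner 10, MESSENGER and BepiColombo use planetary gravity assists to avoid most of this propulsive cost:
+
+ import math
+
+ GM_SUN = 1.32712440018e20  # m^3/s^2, heliocentric gravitational parameter
+ r_earth = 1.496e11         # m, ~1 AU
+ r_mercury = 5.791e10       # m, ~0.387 AU
+
+ def circular_speed(r):
+     return math.sqrt(GM_SUN / r)
+
+ def vis_viva(r, a):
+     # Orbital speed at radius r on an orbit with semi-major axis a.
+     return math.sqrt(GM_SUN * (2.0 / r - 1.0 / a))
+
+ a_transfer = (r_earth + r_mercury) / 2.0
+ dv1 = circular_speed(r_earth) - vis_viva(r_earth, a_transfer)      # burn to drop onto the transfer ellipse
+ dv2 = vis_viva(r_mercury, a_transfer) - circular_speed(r_mercury)  # burn to match Mercury's orbit
+ print(round(circular_speed(r_earth) / 1e3, 1))    # ~29.8 km/s, Earth's orbital speed
+ print(round(circular_speed(r_mercury) / 1e3, 1))  # ~47.9 km/s, Mercury's orbital speed
+ print(round((dv1 + dv2) / 1e3, 1))                # ~17 km/s of heliocentric delta-v in total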
242
+
243
+ The first spacecraft to visit Mercury was NASA's Mariner 10 (1974–1975).[132] The spacecraft used the gravity of Venus to adjust its orbital velocity so that it could approach Mercury, making it both the first spacecraft to use this gravitational "slingshot" effect and the first NASA mission to visit multiple planets.[169] Mariner 10 provided the first close-up images of Mercury's surface, which immediately showed its heavily cratered nature, and revealed many other types of geological features, such as the giant scarps that were later ascribed to the effect of the planet shrinking slightly as its iron core cools.[172] Unfortunately, the same face of the planet was lit at each of Mariner 10's close approaches. This made close observation of both sides of the planet impossible,[173] and resulted in the mapping of less than 45% of the planet's surface.[174]
244
+
245
+ The spacecraft made three close approaches to Mercury, the closest of which took it to within 327 km (203 mi) of the surface.[175] At the first close approach, instruments detected a magnetic field, to the great surprise of planetary geologists—Mercury's rotation was expected to be much too slow to generate a significant dynamo effect. The second close approach was primarily used for imaging, but at the third approach, extensive magnetic data were obtained. The data revealed that the planet's magnetic field is much like Earth's, which deflects the solar wind around the planet. For many years after the Mariner 10 encounters, the origin of Mercury's magnetic field remained the subject of several competing theories.[176][177]
246
+
247
+ On March 24, 1975, just eight days after its final close approach, Mariner 10 ran out of fuel. Because its orbit could no longer be accurately controlled, mission controllers instructed the probe to shut down.[178] Mariner 10 is thought to be still orbiting the Sun, passing close to Mercury every few months.[179]
248
+
249
+ A second NASA mission to Mercury, named MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging), was launched on August 3, 2004. It made a fly-by of Earth in August 2005, and of Venus in October 2006 and June 2007 to place it onto the correct trajectory to reach an orbit around Mercury.[180] A first fly-by of Mercury occurred on January 14, 2008, a second on October 6, 2008,[181] and a third on September 29, 2009.[182] Most of the hemisphere not imaged by Mariner 10 was mapped during these fly-bys. The probe successfully entered an elliptical orbit around the planet on March 18, 2011. The first orbital image of Mercury was obtained on March 29, 2011. The probe finished a one-year mapping mission,[181] and then entered a one-year extended mission into 2013. In addition to continued observations and mapping of Mercury, MESSENGER observed the 2012 solar maximum.[183]
250
+
251
+ The mission was designed to clear up six key issues: Mercury's high density, its geological history, the nature of its magnetic field, the structure of its core, whether it has ice at its poles, and where its tenuous atmosphere comes from. To this end, the probe carried imaging devices that gathered much-higher-resolution images of much more of Mercury than Mariner 10, assorted spectrometers to determine abundances of elements in the crust, and magnetometers and devices to measure velocities of charged particles. Measurements of changes in the probe's orbital velocity were expected to be used to infer details of the planet's interior structure.[184] MESSENGER's final maneuver was on April 24, 2015, and it crashed into Mercury's surface on April 30, 2015.[185][186][187] The spacecraft's impact with Mercury occurred near 3:26 PM EDT on April 30, 2015, leaving a crater estimated to be 16 m (52 ft) in diameter.[188]
252
+
253
+ The European Space Agency and the Japanese Space Agency developed and launched a joint mission called BepiColombo, which will orbit Mercury with two probes: one to map the planet and the other to study its magnetosphere.[189] Launched on October 20, 2018, BepiColombo is expected to reach Mercury in 2025.[190] It will release a magnetometer probe into an elliptical orbit, then chemical rockets will fire to deposit the mapper probe into a circular orbit. Both probes will operate for one terrestrial year.[189] The mapper probe carries an array of spectrometers similar to those on MESSENGER, and will study the planet at many different wavelengths including infrared, ultraviolet, X-ray and gamma ray.[191]
254
+
255
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
256
+
en/4646.html.txt ADDED
@@ -0,0 +1,121 @@
1
+
2
+
3
+ A dwarf planet is a planetary-mass object that does not dominate its region of space (as a true or classical planet does) and is not a satellite. That is, it is in direct orbit of the Sun and is massive enough to be plastic – for its gravity to maintain it in a shape of hydrostatic equilibrium (usually a spheroid) – but has not cleared the neighborhood around its orbit of other material.[2] The prototype dwarf planet is Pluto.[3]
4
+
5
+ The number of dwarf planets in the Solar System is unknown, as determining whether a potential body is a dwarf planet requires close observation. The half-dozen largest candidates have at least one known moon, allowing determination of their masses. The interest of dwarf planets to planetary geologists is that, being differentiated and perhaps geologically active bodies, they are likely to display planetary geology, an expectation borne out by the 2015 New Horizons mission to Pluto and Dawn mission to Ceres.
6
+
7
+ The term dwarf planet was coined by planetary scientist Alan Stern as part of a three-way categorization of planetary-mass objects in the Solar System: classical planets (the big eight), dwarf planets and satellite planets. Dwarf planets were thus originally conceived of as a kind of planet, as the name suggests. However, in 2006 the term was adopted by the International Astronomical Union (IAU) as a category of sub-planetary objects, part of a three-way recategorization of bodies orbiting the Sun[2] precipitated by the discovery of Eris, an object farther away from the Sun than Neptune that was more massive than Pluto but still much smaller than the classical planets, after discoveries of a number of other objects that rivaled Pluto in size had forced a reconsideration of what Pluto was.[4] Thus Stern and many other planetary geologists distinguish dwarf planets from classical planets, but since 2006 the IAU and the majority of astronomers have excluded bodies such as Eris and Pluto from the roster of planets altogether. This redefinition of what constitutes a planet has been both praised and criticized.[5][6][7][8][9][10]
8
+
9
+ Starting in 1801, astronomers discovered Ceres and other bodies between Mars and Jupiter that for decades were considered to be planets. Between then and around 1851, when the number of planets had reached 23, astronomers started using the word asteroid for the smaller bodies and then stopped naming or classifying them as planets.[12]
10
+
11
+ With the discovery of Pluto in 1930, most astronomers considered the Solar System to have nine planets, along with thousands of significantly smaller bodies (asteroids and comets). For almost 50 years Pluto was thought to be larger than Mercury,[13][14] but with the discovery in 1978 of Pluto's moon Charon, it became possible to measure Pluto's mass accurately and to determine that it was much smaller than initial estimates.[15] It was roughly one-twentieth the mass of Mercury, which made Pluto by far the smallest planet. Although it was still more than ten times as massive as the largest object in the asteroid belt, Ceres, it had only one-fifth the mass of Earth's Moon.[16] Furthermore, having some unusual characteristics, such as large orbital eccentricity and a high orbital inclination, it became evident that it was a different kind of body from any of the other planets.[17]
12
+
13
+ In the 1990s, astronomers began to find objects in the same region of space as Pluto (now known as the Kuiper belt), and some even farther away.[18]
14
+ Many of these shared several of Pluto's key orbital characteristics, and Pluto started being seen as the largest member of a new class of objects, the plutinos. It became clear that either the larger of these bodies would also have to be classified as planets, or Pluto would have to be reclassified, much as Ceres had been reclassified after the discovery of additional asteroids.[19]
15
+ This led some astronomers to stop referring to Pluto as a planet. Several terms, including subplanet and planetoid, started to be used for the bodies now known as dwarf planets.[20][21]
16
+ Astronomers were also confident that more objects as large as Pluto would be discovered, and the number of planets would start growing quickly if Pluto were to remain classified as a planet.[22]
17
+
18
+ Eris (then known as 2003 UB313) was discovered in January 2005;[23] it was thought to be slightly larger than Pluto, and some reports informally referred to it as the tenth planet.[24] As a consequence, the issue became a matter of intense debate during the IAU General Assembly in August 2006.[25] The IAU's initial draft proposal included Charon, Eris, and Ceres in the list of planets. After many astronomers objected to this proposal, an alternative was drawn up by the Uruguayan astronomers Julio Ángel Fernández and Gonzalo Tancredi: they proposed an intermediate category for objects large enough to be round but which had not cleared their orbits of planetesimals. Dropping Charon from the list, the new proposal also removed Pluto, Ceres, and Eris, because they have not cleared their orbits.[26]
19
+
20
+ The IAU's final Resolution 5A preserved this three-category system for the celestial bodies orbiting the Sun. It reads:
21
+
22
+ The IAU ... resolves that planets and other bodies, except satellites, in our Solar System be defined into three distinct categories in the following way:
23
+
24
+ (1) A planet¹ is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit.
25
+ (2) A "dwarf planet" is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape,2 (c) has not cleared the neighbourhood around its orbit, and (d) is not a satellite.
26
+ (3) All other objects,³ except satellites, orbiting the Sun shall be referred to collectively as "Small Solar System Bodies."
27
+
28
+ The IAU never did establish a process to assign borderline objects, leaving such judgements to astronomers. However, it did subsequently establish guidelines under which an IAU committee would oversee the naming of possible dwarf planets: unnamed trans-Neptunian objects with an absolute magnitude brighter than +1 (and hence a minimum diameter of 838 km corresponding to a geometric albedo of 1)[27] were to be named by the dwarf-planet naming committee.[28] At the time (and still as of 2019), the only bodies to meet the naming criterion were Haumea and Makemake.
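+
+ The 838 km figure follows from the standard relation between a body's diameter D (in km), its absolute magnitude H and its geometric albedo p: D ≈ (1329/√p) × 10^(−H/5). A minimal sketch of that conversion:
+
+ # Minimum diameter implied by an absolute-magnitude limit of H = +1.
+ def diameter_km(abs_magnitude, geometric_albedo):
+     return 1329.0 / geometric_albedo ** 0.5 * 10 ** (-abs_magnitude / 5.0)
+
+ print(round(diameter_km(1.0, 1.0)))  # ~838 km for a perfectly reflective body (albedo 1)
+ print(round(diameter_km(1.0, 0.1)))  # ~2651 km if the albedo were only 0.1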
29
+
30
+ These five bodies – the three under consideration in 2006 (Pluto, Ceres and Eris) plus the two named in 2008 (Haumea and Makemake) – are commonly presented as the dwarf planets of the Solar System by naming authorities.[29]
31
+ However, only one of them – Pluto – has been observed in enough detail to verify that its current shape fits what would be expected from hydrostatic equilibrium.[30] Ceres is close to equilibrium, but some gravitational anomalies remain unexplained.[31]
32
+
33
+ On the other hand, the astronomical community typically refers to the larger TNOs as dwarf planets.[32] For instance, JPL/NASA characterized Gonggong as a dwarf planet after observations in 2016,[33] and Simon Porter spoke of "the big eight [TNO] dwarf planets" in 2018.[34]
34
+
35
+ Although concerns were raised about the classification of planets orbiting other stars,[35] the issue was not resolved; it was proposed instead to decide this only when such objects start to be observed.[26]
36
+
37
+ Names for large subplanetary bodies include dwarf planet, planetoid, meso-planet, quasi-planet and (in the transneptunian region) plutoid. Dwarf planet, however, was originally coined as a term for the smallest planets, not the largest sub-planets, and is still used that way by many planetary astronomers.
38
+
39
+ Alan Stern coined the term dwarf planet, analogous to the term dwarf star, as part of a three-fold classification of planets, and he and many of his colleagues continue to classify dwarf planets as a class of planets. The IAU decided that dwarf planets are not to be considered planets, but kept Stern's term for them. Other terms for the IAU definition of the largest subplanetary bodies that do not have such conflicting connotations or usage include quasi-planet[36]
40
+ and the older term planetoid ("having the form of a planet").[37]
41
+ Michael E. Brown stated that planetoid is "a perfectly good word" that has been used for these bodies for years, and that the use of the term dwarf planet for a non-planet is "dumb", but that it was motivated by an attempt by the IAU division III plenary session to reinstate Pluto as a planet in a second resolution.[38] Indeed, the draft of Resolution 5A had called these median bodies planetoids,[39][40] but the plenary session voted unanimously to change the name to dwarf planet.[2] The second resolution, 5B, defined dwarf planets as a subtype of planet, as Stern had originally intended, distinguished from the other eight that were to be called "classical planets". Under this arrangement, the twelve planets of the rejected proposal were to be preserved in a distinction between eight classical planets and four dwarf planets. Resolution 5B was defeated in the same session that 5A was passed.[38] Because of the semantic inconsistency of a dwarf planet not being a planet due to the failure of Resolution 5B, alternative terms such as nanoplanet and subplanet were discussed, but there was no consensus among the CSBN to change it.[41]
42
+
43
+ In most languages equivalent terms have been created by translating dwarf planet more-or-less literally: French planète naine, Spanish planeta enano, German Zwergplanet, Russian karlikovaya planeta (карликовая планета), Arabic kaukab qazm (كوكب قزم), Chinese ǎixíngxīng (矮行星), Korean waesohangseong or waehangseong (왜소행성; 矮小行星, 왜행성; 矮行星), but in Japanese they are called junwakusei (準惑星), meaning "quasi-planets" or "peneplanets".
44
+
45
+ IAU Resolution 6a of 2006[3] recognizes Pluto as "the prototype of a new category of trans-Neptunian objects". The name and precise nature of this category were not specified but left for the IAU to establish at a later date; in the debate leading up to the resolution, the members of the category were variously referred to as plutons and plutonian objects but neither name was carried forward, perhaps due to objections from geologists that this would create confusion with their pluton.[2]
46
+
47
+ On June 11, 2008, the IAU Executive Committee announced a name, plutoid, and a definition: all trans-Neptunian dwarf planets are plutoids.[28] The authority of that initial announcement has not been universally recognized:
48
+
49
+ ...in part because of an email miscommunication, the WG-PSN [Working Group for Planetary System Nomenclature] was not involved in choosing the word plutoid. ... In fact, a vote taken by the WG-PSN subsequent to the Executive Committee meeting has rejected the use of that specific term..."[42]
50
+
51
+ The category of 'plutoid' captured an earlier distinction between the 'terrestrial dwarf' Ceres and the 'ice dwarfs' of the outer Solar System,[43] part of a conception of a threefold division of the Solar System into inner terrestrial planets, central gas giants and outer ice dwarfs, of which Pluto was the principal member.[44] 'Ice dwarf', however, also saw some use as an umbrella term for all trans-Neptunian minor planets, or for the ice asteroids of the outer Solar System; one attempted definition was that an ice dwarf "is larger than the nucleus of a normal comet and icier than a typical asteroid."[45]
52
+
53
+ Before the Dawn mission, Ceres was sometimes called a 'terrestrial dwarf' to distinguish it from the 'ice dwarfs' Pluto and Eris. However, since Dawn it has been recognized that Ceres is an icy body more similar to the icy moons of the outer planets and to TNOs such as Pluto than it is to the terrestrial planets, blurring the distinction,[46][47]
54
+ and Ceres has since been called an ice dwarf as well.[48]
55
+
56
+ Showing the planets and the largest known sub-planetary objects (purple) covering the orbital zones containing likely dwarf planets. All known possible dwarf planets have smaller discriminants than those shown for that zone.
57
+
58
+ Alan Stern and Harold F. Levison introduced a parameter Λ (lambda), expressing the likelihood of an encounter resulting in a given deflection of orbit.[51] The value of this parameter in Stern's model is proportional to the square of the mass and inversely proportional to the period. This value can be used to estimate the capacity of a body to clear the neighbourhood of its orbit, where Λ > 1 will eventually clear it. A gap of five orders of magnitude in Λ was found between the smallest terrestrial planets and the largest asteroids and Kuiper belt objects.[49]
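Schematically, the Stern–Levison parameter described here can be written as follows, where M is the body's mass, P its orbital period, and k a constant that bundles the details of their scattering model (the exact form of k is not given in the text):

```latex
\Lambda = k\,\frac{M^{2}}{P},
\qquad
\Lambda > 1 \;\Rightarrow\; \text{the body will eventually clear its orbital zone.}
```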
59
+
60
+ Using this parameter, Steven Soter and other astronomers argued for a distinction between planets and dwarf planets based on the inability of the latter to "clear the neighbourhood around their orbits": planets are able to remove smaller bodies near their orbits by collision, capture, or gravitational disturbance (or establish orbital resonances that prevent collisions), whereas dwarf planets lack the mass to do so.[51] Soter went on to propose a parameter he called the planetary discriminant, designated with the symbol µ (mu), that represents an experimental measure of the actual degree of cleanliness of the orbital zone (where µ is calculated by dividing the mass of the candidate body by the total mass of the other objects that share its orbital zone), where µ > 100 is deemed to be cleared.[49]
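As a rough sketch of how the planetary discriminant works in practice; the zone masses below are loose, illustrative estimates, not figures from this article:

```python
def planetary_discriminant(body_mass_kg: float, other_zone_mass_kg: float) -> float:
    """Soter's mu: the candidate's mass divided by the total mass of the other
    bodies sharing its orbital zone; mu > 100 is taken as 'cleared'."""
    return body_mass_kg / other_zone_mass_kg

# Loose, illustrative masses in kilograms (not values from the article).
examples = {
    "Earth-like planet (zone: near-Earth asteroids)": (6.0e24, 4.0e18),
    "Ceres-like body (zone: rest of asteroid belt)":  (9.4e20, 2.8e21),
}

for name, (m_body, m_zone) in examples.items():
    mu = planetary_discriminant(m_body, m_zone)
    verdict = "cleared" if mu > 100 else "not cleared"
    print(f"{name}: mu ≈ {mu:.2g} ({verdict})")
```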
61
+
62
+ Jean-Luc Margot refined Stern and Levison's concept to produce a similar parameter Π (Pi).[52] It is based on theory, avoiding the empirical data used by Λ. Π > 1 indicates a planet, and there is again a gap of several orders of magnitude between planets and dwarf planets.
63
+
64
+ There are several other schemes that try to differentiate between planets and dwarf planets,[8] but the 2006 definition uses this concept.[2]
65
+
66
+ Sufficient internal pressure, caused by the body's gravitation, will turn a body plastic, and sufficient plasticity will allow high elevations to sink and hollows to fill in, a process known as gravitational relaxation. Bodies smaller than a few kilometers are dominated by non-gravitational forces and tend to have an irregular shape and may be rubble piles. Larger objects, where gravitation is significant but not dominant, are "potato" shaped; the more massive the body is, the higher its internal pressure, the more solid it is and the more rounded its shape, until the pressure is sufficient to overcome its internal compressive strength and it achieves hydrostatic equilibrium. At this point a body is as round as it is possible to be, given its rotation and tidal effects, and is an ellipsoid in shape. This is the defining limit of a dwarf planet.[53]
67
+
68
+ When an object is in hydrostatic equilibrium, a global layer of liquid covering its surface would form a liquid surface of the same shape as the body, apart from small-scale surface features such as craters and fissures. If the body does not rotate, it will be a sphere, but the faster it rotates, the more oblate or even scalene it becomes. If such a rotating body were to be heated until it melted, its overall shape would not change when liquid. The extreme example of a body that may be scalene due to rapid rotation is Haumea, which is twice as long along its major axis as it is at the poles. If the body has a massive nearby companion, then tidal forces cause its rotation to gradually slow until it is tidally locked, such that it always presents the same face to its companion. An extreme example of this is the Pluto–Charon system, where both bodies are tidally locked to each other. Tidally locked bodies are also scalene, though sometimes only slightly so. Earth's Moon is also tidally locked, as are all rounded satellites of the gas giants.
69
+
70
+ The upper and lower size and mass limits of dwarf planets have not been specified by the IAU. There is no defined upper limit, and an object larger or more massive than Mercury that has not "cleared the neighbourhood around its orbit" would be classified as a dwarf planet.[54] The lower limit is determined by the requirements of achieving a hydrostatic equilibrium shape, but the size or mass at which an object attains this shape depends on its composition and thermal history. The original draft of the 2006 IAU resolution redefined hydrostatic equilibrium shape as applying "to objects with mass above 5×1020 kg and diameter greater than 800 km",[35] but this was not retained in the final draft.[2]
71
+
72
+ The number of dwarf planets in the Solar system is not known. The three objects under consideration during the debates leading up to the 2006 IAU acceptance of the category of dwarf planet – Ceres, Pluto and Eris – are generally accepted as dwarf planets, including by those astronomers who continue to classify dwarf planets as planets. In 2015, Ceres and Pluto were determined to have shapes consistent with hydrostatic equilibrium (and thus with being dwarf planets) by the Dawn and New Horizons missions, respectively, though there is still some question about Ceres. Eris is assumed to be a dwarf planet because it is more massive than Pluto.
73
+
74
+ In order of discovery, these three bodies are:
75
+
76
+ Due to the 2008 decision to assign the naming of Haumea and Makemake to the dwarf-planet naming committee and their announcement as dwarf planets in IAU press releases, these two bodies are also generally accepted as dwarf planets:
77
+
78
+ Four additional bodies meet the criteria of Brown, Tancredi et al. and Grundy et al. for candidate objects:
79
+
80
+ Additional bodies have been proposed, such as Salacia and 2002 MS4 by Brown, or Varuna and Ixion by Tancredi et al. Most of the larger bodies have moons, which enables a determination of their masses and thus their densities, which inform estimates as to whether they could be dwarf planets. The largest TNOs that are not known to have moons are Sedna, 2002 MS4 and 2002 AW197.
81
+
82
+ At the time Makemake and Haumea were named, it was thought that trans-Neptunian objects (TNOs) with icy cores would require a diameter of only perhaps 400 km (250 mi)—about 3% of that of Earth—to relax into gravitational equilibrium.[56] Researchers thought that the number of such bodies could prove to be around 200 in the Kuiper belt, with thousands more beyond.[56][57][58]
83
+ This was one of the reasons (keeping the roster of 'planets' to a reasonable number) that Pluto was reclassified in the first place.
84
+ However, research since then has cast doubt on the idea that bodies that small could have achieved or maintained equilibrium under common conditions.
85
+
86
+ Individual astronomers have recognized a number of such objects as dwarf planets or as highly likely to prove to be dwarf planets. In 2008, Tancredi et al. advised the IAU to officially accept Orcus, Sedna and Quaoar as dwarf planets, though the IAU did not address the issue then and has not since. In addition, Tancredi considered the five TNOs Varuna, Ixion, 2003 AZ84, 2004 GV9, and 2002 AW197 most likely to be dwarf planets as well.[59] In 2012, Stern stated that there are more than a dozen known dwarf planets, though he did not specify which they were.[58]
87
+ Since 2011, Brown has maintained a list of hundreds of candidate objects, ranging from "nearly certain" to "possible" dwarf planets, based solely on estimated size.[60]
88
+ As of 13 September 2019, Brown's list identifies ten trans-Neptunian objects with diameters greater than 900 km (the four named by the IAU plus Gonggong, Quaoar, Sedna, Orcus, 2002 MS4 and Salacia) as "near certain" to be dwarf planets, and another 16, with diameters greater than 600 km, as "highly likely".[61] Notably, Gonggong may have a larger diameter (1230±50 km) than Pluto's largest moon Charon (1212 km). Pinilla-Alonso et al. (2019) propose that the surface compositions of 40 bodies possibly larger than 450 km in diameter be studied with the planned James Webb Space Telescope.[32]
89
+
90
+ However, in 2019 Grundy et al. proposed that dark, low-density bodies smaller than about 900–1000 km in diameter, such as Salacia and Varda, never fully collapsed into solid planetary bodies and retain internal porosity from their formation (in which case they could not be dwarf planets), while accepting that the brighter (albedo > ≈0.2)[62] or denser (> ≈1.4 g/cm3) Orcus and Quaoar probably were fully solid.[63]
91
+
92
+ Observations in 2017–2019 led researchers to suggest that the large icy asteroids 10 Hygiea and 704 Interamnia may be transitional between dwarf planets and smaller objects.[64][65]
93
+
94
+ The following trans-Neptunian objects are agreed by Brown, Tancredi et al. and Grundy et al. to be likely dwarf planets. Charon, a moon of Pluto that was proposed as a dwarf planet by the IAU in 2006, is included for comparison. Those objects with absolute magnitudes brighter than +1, which therefore meet the criteria of the IAU's dwarf-planet naming committee, are highlighted, as is Ceres, which has been accepted as a dwarf planet by the IAU since it first debated the concept, though it has not yet been demonstrated to meet the definition.
95
+
96
+ On March 6, 2015, the Dawn spacecraft began to orbit Ceres, becoming the first spacecraft to orbit a dwarf planet.[67] On July 14, 2015, the New Horizons space probe flew by Pluto and its five moons.
97
+ Ceres displays such planetary-geologic features as surface salt deposits and cryovolcanoes, while Pluto has water-ice mountains drifting in nitrogen-ice glaciers, as well as an atmosphere.
98
+ For both bodies, there is at least the possibility of a subsurface ocean or brine layer.
99
+
100
+ Dawn has also orbited the former dwarf planet Vesta. Phoebe has been explored by Cassini (most recently) and Voyager 2, which also explored Neptune's moon Triton. These three bodies are thought to be former dwarf planets and therefore their exploration helps in the study of the evolution of dwarf planets.
101
+
102
+ In the immediate aftermath of the IAU definition of dwarf planet, some scientists expressed their disagreement with the IAU resolution.[8] Campaigns included car bumper stickers and T-shirts.[68] Mike Brown (the discoverer of Eris) agrees with the reduction of the number of planets to eight.[69]
103
+
104
+ NASA has announced that it will use the new guidelines established by the IAU.[70] Alan Stern, the director of NASA's mission to Pluto, rejects the current IAU definition of planet, both in terms of defining dwarf planets as something other than a type of planet, and in using orbital characteristics (rather than intrinsic characteristics) of objects to define them as dwarf planets.[71] Thus, in 2011, he still referred to Pluto as a planet,[72] and accepted other likely dwarf planets such as Ceres and Eris, as well as the larger moons, as additional planets.[73] Several years before the IAU definition, he used orbital characteristics to separate "überplanets" (the dominant eight) from "unterplanets" (the dwarf planets), considering both types "planets".[51]
105
+
106
+ A number of bodies physically resemble dwarf planets. These include former dwarf planets, which may still have an equilibrium shape; planetary-mass moons, which meet the physical but not the orbital definition for dwarf planets; and Charon in the Pluto–Charon system, which is arguably a binary dwarf planet. The categories may overlap: Triton, for example, is both a former dwarf planet and a planetary-mass moon.
107
+
108
+ Vesta, the next-most-massive body in the asteroid belt after Ceres, was once in hydrostatic equilibrium and is roughly spherical, deviating mainly because of massive impacts that formed the Rheasilvia and Veneneia craters after it solidified.[74]
109
+ Its dimensions are not consistent with it currently being in hydrostatic equilibrium.[75][76]
110
+ Triton is more massive than Eris or Pluto, has an equilibrium shape, and is thought to be a captured dwarf planet (likely a member of a binary system), but no longer directly orbits the Sun.[77]
111
+ Phoebe is a captured centaur that, like Vesta, is no longer in hydrostatic equilibrium, but is thought to have been so early in its history due to radiogenic heating.[78]
112
+
113
+ Evidence from 2019 suggests that Theia, the former planet that collided with Earth in the giant-impact hypothesis, may have originated in the outer Solar System rather than in the inner Solar System and that Earth's water originated on Theia, thus implying that Theia may have been a former dwarf planet from the Kuiper Belt.[79]
114
+
115
+ Nineteen moons have an equilibrium shape from having relaxed under their own gravity at some point in their history, though some have since frozen solid and are no longer in equilibrium. Seven are more massive than either Eris or Pluto. These moons are not physically distinct from the dwarf planets, but do not fit the IAU definition because they do not directly orbit the Sun. (Indeed, Neptune's moon Triton is a captured dwarf planet, and Ceres formed in the same region of the Solar System as the moons of Jupiter and Saturn.) Alan Stern calls planetary-mass moons "satellite planets", one of three categories of planet, together with dwarf planets and classical planets.[73] The term planemo ("planetary-mass object") also covers all three populations.[80]
116
+
117
+ There has been some debate as to whether the Pluto–Charon system should be considered a double dwarf planet.
118
+ In a draft resolution for the IAU definition of planet, both Pluto and Charon were considered planets in a binary system.[note 1][35] The IAU currently states that Charon is not considered to be a dwarf planet but rather is a satellite of Pluto, although the idea that Charon might qualify as a dwarf planet in its own right may be considered at a later date.[81] However, it is no longer clear that Charon is in hydrostatic equilibrium. Further, the location of the barycenter depends not only on the relative masses of the bodies, but also on the distance between them; the barycenter of the Sun–Jupiter orbit, for example, lies outside the Sun, but they are not considered a binary object.
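A small worked example of the barycenter argument; the masses, separations and radii below are approximate reference values, not figures taken from this article. The barycenter of a two-body system lies at a distance d·m₂/(m₁+m₂) from the primary's center:

```python
def barycenter_offset_km(separation_km: float, m_primary: float, m_secondary: float) -> float:
    """Distance of the two-body barycenter from the primary's centre, in km."""
    return separation_km * m_secondary / (m_primary + m_secondary)

# Pluto-Charon: offset ~2,100 km, well outside Pluto's ~1,190 km radius,
# which is why the pair is sometimes described as a binary.
print(barycenter_offset_km(19_600, 1.30e22, 1.59e21))

# Sun-Jupiter: offset ~740,000 km, just outside the Sun's ~696,000 km radius,
# yet the pair is not considered a binary object.
print(barycenter_offset_km(778_000_000, 1.99e30, 1.90e27))
```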
119
+
121
+
en/4647.html.txt ADDED
@@ -0,0 +1,112 @@
1
+
2
+
3
+
4
+
5
+ Saturn is the sixth planet from the Sun and the second-largest in the Solar System, after Jupiter. It is a gas giant with an average radius of about nine times that of Earth.[18][19] It only has one-eighth the average density of Earth; however, with its larger volume, Saturn is over 95 times more massive.[20][21][22] Saturn is named after the Roman god of wealth and agriculture; its astronomical symbol (♄) represents the god's sickle.
6
+
7
+ Saturn's interior is most likely composed of a core of iron–nickel and rock (silicon and oxygen compounds). Its core is surrounded by a deep layer of metallic hydrogen, an intermediate layer of liquid hydrogen and liquid helium, and finally a gaseous outer layer. Saturn has a pale yellow hue due to ammonia crystals in its upper atmosphere. An electrical current within the metallic hydrogen layer is thought to give rise to Saturn's planetary magnetic field, which is weaker than the Earth's, but has a magnetic moment 580 times that of Earth due to Saturn's larger size. Saturn's magnetic field strength is around one-twentieth of Jupiter's.[23] The outer atmosphere is generally bland and lacking in contrast, although long-lived features can appear. Wind speeds on Saturn can reach 1,800 km/h (1,100 mph; 500 m/s), higher than on Jupiter, but not as high as those on Neptune.[24] In January 2019, astronomers reported that a day on the planet Saturn has been determined to be 10h 33m 38s (+1m 52s, −1m 19s), based on studies of the planet's C Ring.[12][13]
8
+
9
+ The planet's most famous feature is its prominent ring system, which is composed mostly of ice particles, with a smaller amount of rocky debris and dust. At least 82 moons[25] are known to orbit Saturn, of which 53 are officially named; this does not include the hundreds of moonlets in its rings. Titan, Saturn's largest moon, and the second-largest in the Solar System, is larger than the planet Mercury, although less massive, and is the only moon in the Solar System to have a substantial atmosphere.[26]
10
+
11
+ Saturn is a gas giant because it is predominantly composed of hydrogen and helium. It lacks a definite surface, though it may have a solid core.[27] Saturn's rotation causes it to have the shape of an oblate spheroid; that is, it is flattened at the poles and bulges at its equator. Its equatorial and polar radii differ by almost 10%: 60,268 km versus 54,364 km.[9] Jupiter, Uranus, and Neptune, the other giant planets in the Solar System, are also oblate but to a lesser extent. The combination of the bulge and rotation rate means that the effective surface gravity along the equator, 8.96 m/s2, is 74% that at the poles and is lower than the surface gravity of Earth. However, the equatorial escape velocity of nearly 36 km/s is much higher than that for Earth.[28]
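A quick sanity check of the quoted escape velocity, using the equatorial radius above together with a standard value for Saturn's mass (the mass itself is not given in this passage):

```python
import math

# Rough check of Saturn's equatorial escape velocity, v = sqrt(2*G*M/r).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SATURN = 5.68e26       # kg, standard value assumed here (not quoted above)
R_EQUATORIAL = 60_268e3  # m, equatorial radius from the text

v_escape = math.sqrt(2 * G * M_SATURN / R_EQUATORIAL)
print(f"{v_escape / 1000:.1f} km/s")  # ~35.5 km/s, i.e. "nearly 36 km/s"
```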
12
+
13
+ Saturn is the only planet of the Solar System that is less dense than water—about 30% less.[29] Although Saturn's core is considerably denser than water, the average density of the planet is 0.69 g/cm3 due to the atmosphere. Jupiter has 318 times Earth's mass,[30] and Saturn is 95 times Earth's mass.[9] Together, Jupiter and Saturn hold 92% of the total planetary mass in the Solar System.[31]
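The 92% figure can be checked by summing approximate planetary masses in Earth masses; only the Jupiter and Saturn values appear in the text above, the rest are standard round figures:

```python
# Approximate planetary masses in Earth masses (standard round values).
masses = {
    "Mercury": 0.055, "Venus": 0.815, "Earth": 1.0, "Mars": 0.107,
    "Jupiter": 318.0, "Saturn": 95.0, "Uranus": 14.5, "Neptune": 17.1,
}
total = sum(masses.values())
share = (masses["Jupiter"] + masses["Saturn"]) / total
print(f"Jupiter + Saturn hold ~{share:.0%} of the planetary mass")  # ~92%
```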
14
+
15
+ Despite consisting mostly of hydrogen and helium, most of Saturn's mass is not in the gas phase, because hydrogen becomes a non-ideal liquid when the density is above 0.01 g/cm3, which is reached at a radius containing 99.9% of Saturn's mass. The temperature, pressure, and density inside Saturn all rise steadily toward the core, which causes hydrogen to be a metal in the deeper layers.[31]
16
+
17
+ Standard planetary models suggest that the interior of Saturn is similar to that of Jupiter, having a small rocky core surrounded by hydrogen and helium, with trace amounts of various volatiles.[32] This core is similar in composition to Earth, but is more dense. The examination of Saturn's gravitational moment, in combination with physical models of the interior, has allowed constraints to be placed on the mass of Saturn's core. In 2004, scientists estimated that the core must be 9–22 times the mass of Earth,[33][34] which corresponds to a diameter of about 25,000 km.[35] This is surrounded by a thicker liquid metallic hydrogen layer, followed by a liquid layer of helium-saturated molecular hydrogen that gradually transitions to a gas with increasing altitude. The outermost layer spans 1,000 km and consists of gas.[36][37][38]
18
+
19
+ Saturn has a hot interior, reaching 11,700 °C at its core, and it radiates 2.5 times more energy into space than it receives from the Sun. Jupiter's thermal energy is generated by the Kelvin–Helmholtz mechanism of slow gravitational compression, but such a process alone may not be sufficient to explain heat production for Saturn, because it is less massive. An alternative or additional mechanism may be generation of heat through the "raining out" of droplets of helium deep in Saturn's interior. As the droplets descend through the lower-density hydrogen, the process releases heat by friction and leaves Saturn's outer layers depleted of helium.[39][40] These descending droplets may have accumulated into a helium shell surrounding the core.[32] Rainfalls of diamonds have been suggested to occur within Saturn, as well as in Jupiter[41] and ice giants Uranus and Neptune.[42]
20
+
21
+ The outer atmosphere of Saturn contains 96.3% molecular hydrogen and 3.25% helium by volume.[43] The proportion of helium is significantly deficient compared to the abundance of this element in the Sun.[32] The quantity of elements heavier than helium (metallicity) is not known precisely, but the proportions are assumed to match the primordial abundances from the formation of the Solar System. The total mass of these heavier elements is estimated to be 19–31 times the mass of the Earth, with a significant fraction located in Saturn's core region.[44]
22
+
23
+ Trace amounts of ammonia, acetylene, ethane, propane, phosphine, and methane have been detected in Saturn's atmosphere.[45][46][47] The upper clouds are composed of ammonia crystals, while the lower level clouds appear to consist of either ammonium hydrosulfide (NH4SH) or water.[48] Ultraviolet radiation from the Sun causes methane photolysis in the upper atmosphere, leading to a series of hydrocarbon chemical reactions with the resulting products being carried downward by eddies and diffusion. This photochemical cycle is modulated by Saturn's annual seasonal cycle.[47]
24
+
25
+ Saturn's atmosphere exhibits a banded pattern similar to Jupiter's, but Saturn's bands are much fainter and are much wider near the equator. The nomenclature used to describe these bands is the same as on Jupiter. Saturn's finer cloud patterns were not observed until the flybys of the Voyager spacecraft during the 1980s. Since then, Earth-based telescopy has improved to the point where regular observations can be made.[49]
26
+
27
+ The composition of the clouds varies with depth and increasing pressure. In the upper cloud layers, with the temperature in the range 100–160 K and pressures extending between 0.5–2 bar, the clouds consist of ammonia ice. Water ice clouds begin at a level where the pressure is about 2.5 bar and extend down to 9.5 bar, where temperatures range from 185–270 K. Intermixed in this layer is a band of ammonium hydrosulfide ice, lying in the pressure range 3–6 bar with temperatures of 190–235 K. Finally, the lower layers, where pressures are between 10–20 bar and temperatures are 270–330 K, contain a region of water droplets with ammonia in aqueous solution.[50]
28
+
29
+ Saturn's usually bland atmosphere occasionally exhibits long-lived ovals and other features common on Jupiter. In 1990, the Hubble Space Telescope imaged an enormous white cloud near Saturn's equator that was not present during the Voyager encounters, and in 1994 another smaller storm was observed. The 1990 storm was an example of a Great White Spot, a unique but short-lived phenomenon that occurs once every Saturnian year, roughly every 30 Earth years, around the time of the northern hemisphere's summer solstice.[51] Previous Great White Spots were observed in 1876, 1903, 1933 and 1960, with the 1933 storm being the most famous. If the periodicity is maintained, another storm will occur in about 2020.[52]
30
+
31
+ The winds on Saturn are the second fastest among the Solar System's planets, after Neptune's. Voyager data indicate peak easterly winds of 500 m/s (1,800 km/h).[53] In images from the Cassini spacecraft during 2007, Saturn's northern hemisphere displayed a bright blue hue, similar to Uranus. The color was most likely caused by Rayleigh scattering.[54] Thermography has shown that Saturn's south pole has a warm polar vortex, the only known example of such a phenomenon in the Solar System.[55] Whereas temperatures on Saturn are normally −185 °C, temperatures on the vortex often reach as high as −122 °C, suspected to be the warmest spot on Saturn.[55]
32
+
33
+ A persisting hexagonal wave pattern around the north polar vortex in the atmosphere at about 78°N was first noted in the Voyager images.[56][57][58] The sides of the hexagon are each about 13,800 km (8,600 mi) long, which is longer than the diameter of the Earth.[59] The entire structure rotates with a period of  10h 39m 24s (the same period as that of the planet's radio emissions) which is assumed to be equal to the period of rotation of Saturn's interior.[60] The hexagonal feature does not shift in longitude like the other clouds in the visible atmosphere.[61] The pattern's origin is a matter of much speculation. Most scientists think it is a standing wave pattern in the atmosphere. Polygonal shapes have been replicated in the laboratory through differential rotation of fluids.[62][63]
34
+
35
+ HST imaging of the south polar region indicates the presence of a jet stream, but no strong polar vortex nor any hexagonal standing wave.[64] NASA reported in November 2006 that Cassini had observed a "hurricane-like" storm locked to the south pole that had a clearly defined eyewall.[65][66] Eyewall clouds had not previously been seen on any planet other than Earth. For example, images from the Galileo spacecraft did not show an eyewall in the Great Red Spot of Jupiter.[67]
36
+
37
+ The south pole storm may have been present for billions of years.[68] This vortex is comparable to the size of Earth, and it has winds of 550 km/h.[68]
38
+
39
+ Cassini observed a series of cloud features nicknamed "String of Pearls" found in northern latitudes. These features are cloud clearings that reside in deeper cloud layers.[69]
40
+
41
+ Saturn has an intrinsic magnetic field that has a simple, symmetric shape – a magnetic dipole. Its strength at the equator – 0.2 gauss (20 µT) – is approximately one twentieth of that of the field around Jupiter and slightly weaker than Earth's magnetic field.[23] As a result, Saturn's magnetosphere is much smaller than Jupiter's.[71] When Voyager 2 entered the magnetosphere, the solar wind pressure was high and the magnetosphere extended only 19 Saturn radii, or 1.1 million km (712,000 mi),[72] although it enlarged within several hours, and remained so for about three days.[73] Most probably, the magnetic field is generated similarly to that of Jupiter – by currents in the liquid metallic-hydrogen layer called a metallic-hydrogen dynamo.[71] This magnetosphere is efficient at deflecting the solar wind particles from the Sun. The moon Titan orbits within the outer part of Saturn's magnetosphere and contributes plasma from the ionized particles in Titan's outer atmosphere.[23] Saturn's magnetosphere, like Earth's, produces aurorae.[74]
42
+
43
+ The average distance between Saturn and the Sun is over 1.4 billion kilometers (9 AU). With an average orbital speed of 9.68 km/s,[9] it takes Saturn 10,759 Earth days (or about ​29 1⁄2 years)[75] to finish one revolution around the Sun.[9] As a consequence, it forms a near 5:2 mean-motion resonance with Jupiter.[76] The elliptical orbit of Saturn is inclined 2.48° relative to the orbital plane of the Earth.[9] The perihelion and aphelion distances are, respectively, 9.195 and 9.957 AU, on average.[9][77] The visible features on Saturn rotate at different rates depending on latitude and multiple rotation periods have been assigned to various regions (as in Jupiter's case).
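Both the roughly 29.5-year period and the near 5:2 commensurability with Jupiter follow from Kepler's third law; a small sketch using standard semi-major axes (9.58 AU and 5.20 AU, which are not quoted in the passage above):

```python
# Kepler's third law for bodies orbiting the Sun: P[years] = a[AU] ** 1.5.
a_saturn_au, a_jupiter_au = 9.58, 5.20  # standard round values, assumed here

p_saturn = a_saturn_au ** 1.5    # ~29.7 years
p_jupiter = a_jupiter_au ** 1.5  # ~11.9 years
print(f"Saturn: {p_saturn:.1f} yr, Jupiter: {p_jupiter:.1f} yr, "
      f"ratio {p_saturn / p_jupiter:.2f} (close to 5/2 = 2.5)")
```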
44
+
45
+ Astronomers use three different systems for specifying the rotation rate of Saturn. System I has a period of  10h 14m 00s (844.3°/d) and encompasses the Equatorial Zone, the South Equatorial Belt, and the North Equatorial Belt. The polar regions are considered to have rotation rates similar to System I. All other Saturnian latitudes, excluding the north and south polar regions, are indicated as System II and have been assigned a rotation period of  10h 38m 25.4s (810.76°/d). System III refers to Saturn's internal rotation rate. Based on radio emissions from the planet detected by Voyager 1 and Voyager 2,[78] System III has a rotation period of  10h 39m 22.4s (810.8°/d). System III has largely superseded System II.[79]
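The degrees-per-day rates quoted for each system are simply the corresponding rotation periods re-expressed as angular speeds, for example:

```python
# Convert a rotation period given as (hours, minutes, seconds)
# into an angular rate in degrees per day.
def degrees_per_day(h: int, m: int, s: float) -> float:
    period_hours = h + m / 60 + s / 3600
    return 360.0 * 24.0 / period_hours

print(f"System I:   {degrees_per_day(10, 14, 0):.1f} deg/day")     # ~844.3
print(f"System III: {degrees_per_day(10, 39, 22.4):.1f} deg/day")  # ~810.8
```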
46
+
47
+ A precise value for the rotation period of the interior remains elusive. While approaching Saturn in 2004, Cassini found that the radio rotation period of Saturn had increased appreciably, to approximately 10h 45m 45s ± 36s.[80][81] The latest estimate of Saturn's rotation (as an indicated rotation rate for Saturn as a whole), based on a compilation of various measurements from the Cassini, Voyager and Pioneer probes and reported in September 2007, is 10h 32m 35s.[82]
48
+
49
+ In March 2007, it was found that the variation of radio emissions from the planet did not match Saturn's rotation rate. This variance may be caused by geyser activity on Saturn's moon Enceladus. The water vapor emitted into Saturn's orbit by this activity becomes charged and creates a drag upon Saturn's magnetic field, slowing its rotation slightly relative to the rotation of the planet.[83][84][85]
50
+
51
+ An apparent oddity for Saturn is that it does not have any known trojan asteroids. These are minor planets that orbit the Sun at the stable Lagrangian points, designated L4 and L5, located at 60° angles to the planet along its orbit. Trojan asteroids have been discovered for Mars, Jupiter, Uranus, and Neptune. Orbital resonance mechanisms, including secular resonance, are believed to be the cause of the missing Saturnian trojans.[86]
52
+
53
+ Saturn has 82 known moons,[25] 53 of which have formal names.[87][88] In addition, there is evidence of dozens to hundreds of moonlets with diameters of 40–500 meters in Saturn's rings,[89] which are not considered to be true moons. Titan, the largest moon, comprises more than 90% of the mass in orbit around Saturn, including the rings.[90] Saturn's second-largest moon, Rhea, may have a tenuous ring system of its own,[91] along with a tenuous atmosphere.[92][93][94]
54
+
55
+ Many of the other moons are small: 34 are less than 10 km in diameter and another 14 between 10 and 50 km in diameter.[95] Traditionally, most of Saturn's moons have been named after Titans of Greek mythology. Titan is the only satellite in the Solar System with a major atmosphere,[96][97] in which a complex organic chemistry occurs. It is the only satellite with hydrocarbon lakes.[98][99]
56
+
57
+ On 6 June 2013, scientists at the IAA-CSIC reported the detection of polycyclic aromatic hydrocarbons in the upper atmosphere of Titan, a possible precursor for life.[100] On 23 June 2014, NASA claimed to have strong evidence that nitrogen in the atmosphere of Titan came from materials in the Oort cloud, associated with comets, and not from the materials that formed Saturn in earlier times.[101]
58
+
59
+ Saturn's moon Enceladus, which seems similar in chemical makeup to comets,[102] has often been regarded as a potential habitat for microbial life.[103][104][105][106] Evidence of this possibility includes the satellite's salt-rich particles having an "ocean-like" composition that indicates most of Enceladus's expelled ice comes from the evaporation of liquid salt water.[107][108][109] A 2015 flyby by Cassini through a plume on Enceladus found most of the ingredients to sustain life forms that live by methanogenesis.[110]
60
+
61
+ In April 2014, NASA scientists reported the possible beginning of a new moon within the A Ring, which was imaged by Cassini on 15 April 2013.[111]
62
+
63
+ Saturn is probably best known for the system of planetary rings that makes it visually unique.[37] The rings extend from 6,630 to 120,700 kilometers (4,120 to 75,000 mi) outward from Saturn's equator and average approximately 20 meters (66 ft) in thickness. They are composed predominantly of water ice with trace amounts of tholin impurities, and a peppered coating of approximately 7% amorphous carbon.[112] The particles that make up the rings range in size from specks of dust up to 10 m.[113] While the other gas giants also have ring systems, Saturn's is the largest and most visible.
64
+
65
+ There are two main hypotheses regarding the origin of the rings. One hypothesis is that the rings are remnants of a destroyed moon of Saturn. The second hypothesis is that the rings are left over from the original nebular material from which Saturn was formed. Some ice in the E ring comes from the moon Enceladus's geysers.[114][115][116][117] The water abundance of the rings varies radially, with the outermost ring A being the most pure in ice water. This abundance variance may be explained by meteor bombardment.[118]
66
+
67
+ Beyond the main rings at a distance of 12 million km from the planet is the sparse Phoebe ring, which is tilted at an angle of 27° to the other rings and, like Phoebe, orbits in retrograde fashion.[119]
68
+
69
+ Some of the moons of Saturn, including Pandora and Prometheus, act as shepherd moons to confine the rings and prevent them from spreading out.[120] Pan and Atlas cause weak, linear density waves in Saturn's rings that have yielded more reliable calculations of their masses.[121]
70
+
71
+ The observation and exploration of Saturn can be divided into three phases. The first phase is ancient observations (such as with the naked eye), before the invention of modern telescopes. The second phase began in the 17th century, with telescopic observations from Earth, which improved over time. The third phase is visitation by space probes, in orbit or on flyby. In the 21st century, telescopic observations continue from Earth (including Earth-orbiting observatories like the Hubble Space Telescope) and, until its 2017 retirement, from the Cassini orbiter around Saturn.
72
+
73
+ Saturn has been known since prehistoric times,[122] and in early recorded history it was a major character in various mythologies. Babylonian astronomers systematically observed and recorded the movements of Saturn.[123] In ancient Greek, the planet was known as Φαίνων Phainon,[124] and in Roman times it was known as the "star of Saturn".[125] In ancient Roman mythology, the planet Phainon was sacred to this agricultural god, from which the planet takes its modern name.[126] The Romans considered the god Saturnus the equivalent of the Greek god Cronus; in modern Greek, the planet retains the name Cronus—Κρόνος: Kronos.[127]
74
+
75
+ The Greek scientist Ptolemy based his calculations of Saturn's orbit on observations he made while it was in opposition.[128] In Hindu astrology, there are nine astrological objects, known as Navagrahas. Saturn is known as "Shani" and judges everyone based on the good and bad deeds performed in life.[126][128] Ancient Chinese and Japanese culture designated the planet Saturn as the "earth star" (土星). This was based on Five Elements which were traditionally used to classify natural elements.[129][130][131]
76
+
77
+ In ancient Hebrew, Saturn is called 'Shabbathai'.[132] Its angel is Cassiel. Its intelligence or beneficial spirit is 'Agȋȇl (Hebrew: אגיאל‎‎, romanized: ʿAgyal),[133] and its darker spirit (demon) is Zȃzȇl (Hebrew: זאזל‎, romanized: Zazl).[133][134][135] Zazel has been described as a great angel, invoked in Solomonic magic, who is "effective in love conjurations".[136][137] In Ottoman Turkish, Urdu and Malay, the name of Zazel is 'Zuhal', derived from the Arabic language (Arabic: زحل‎, romanized: Zuhal).[134]
78
+
79
+ Saturn's rings require at least a 15-mm-diameter telescope[138] to resolve and thus were not known to exist until Christiaan Huygens saw them in 1659. Galileo, with his primitive telescope in 1610,[139][140] incorrectly interpreted Saturn's not-quite-round appearance as two moons on Saturn's sides.[141][142] It was not until Huygens used greater telescopic magnification that this notion was refuted, and the rings were truly seen for the first time. Huygens also discovered Saturn's moon Titan; Giovanni Domenico Cassini later discovered four other moons: Iapetus, Rhea, Tethys and Dione. In 1675, Cassini discovered the gap now known as the Cassini Division.[143]
80
+
81
+ No further discoveries of significance were made until 1789 when William Herschel discovered two further moons, Mimas and Enceladus. The irregularly shaped satellite Hyperion, which has a resonance with Titan, was discovered in 1848 by a British team.[144]
82
+
83
+ In 1899 William Henry Pickering discovered Phoebe, a highly irregular satellite that does not rotate synchronously with Saturn as the larger moons do.[144] Phoebe was the first such satellite found and it takes more than a year to orbit Saturn in a retrograde orbit. During the early 20th century, research on Titan led to the confirmation in 1944 that it had a thick atmosphere – a feature unique among the Solar System's moons.[145]
84
+
85
+ Pioneer 11 made the first flyby of Saturn in September 1979, when it passed within 20,000 km of the planet's cloud tops. Images were taken of the planet and a few of its moons, although their resolution was too low to discern surface detail. The spacecraft also studied Saturn's rings, revealing the thin F-ring and the fact that dark gaps in the rings are bright when viewed at high phase angle (towards the Sun), meaning that they contain fine light-scattering material. In addition, Pioneer 11 measured the temperature of Titan.[146]
86
+
87
+ In November 1980, the Voyager 1 probe visited the Saturn system. It sent back the first high-resolution images of the planet, its rings and satellites. Surface features of various moons were seen for the first time. Voyager 1 performed a close flyby of Titan, increasing knowledge of the atmosphere of the moon. It proved that Titan's atmosphere is impenetrable in visible wavelengths; therefore no surface details were seen. The flyby changed the spacecraft's trajectory out from the plane of the Solar System.[147]
88
+
89
+ Almost a year later, in August 1981, Voyager 2 continued the study of the Saturn system. More close-up images of Saturn's moons were acquired, as well as evidence of changes in the atmosphere and the rings. Unfortunately, during the flyby, the probe's turnable camera platform stuck for a couple of days and some planned imaging was lost. Saturn's gravity was used to direct the spacecraft's trajectory towards Uranus.[147]
90
+
91
+ The probes discovered and confirmed several new satellites orbiting near or within the planet's rings, as well as the small Maxwell Gap (a gap within the C Ring) and Keeler gap (a 42 km-wide gap in the A Ring).
92
+
93
+ The Cassini–Huygens space probe entered orbit around Saturn on 1 July 2004. In June 2004, it conducted a close flyby of Phoebe, sending back high-resolution images and data. Cassini's flyby of Saturn's largest moon, Titan, captured radar images of large lakes and their coastlines with numerous islands and mountains. The orbiter completed two Titan flybys before releasing the Huygens probe on 25 December 2004. Huygens descended onto the surface of Titan on 14 January 2005.[148]
94
+
95
+ Starting in early 2005, scientists used Cassini to track lightning on Saturn. The power of the lightning is approximately 1,000 times that of lightning on Earth.[149]
96
+
97
+ In 2006, NASA reported that Cassini had found evidence of liquid water reservoirs no more than tens of meters below the surface that erupt in geysers on Saturn's moon Enceladus. These jets of icy particles are emitted into orbit around Saturn from vents in the moon's south polar region.[151] Over 100 geysers have been identified on Enceladus.[150] In May 2011, NASA scientists reported that Enceladus "is emerging as the most habitable spot beyond Earth in the Solar System for life as we know it".[152][153]
98
+
99
+ Cassini photographs have revealed a previously undiscovered planetary ring, outside the brighter main rings of Saturn and inside the G and E rings. The source of this ring is hypothesized to be the crashing of a meteoroid off Janus and Epimetheus.[154] In July 2006, images were returned of hydrocarbon lakes near Titan's north pole, the presence of which were confirmed in January 2007. In March 2007, hydrocarbon seas were found near the North pole, the largest of which is almost the size of the Caspian Sea.[155] In October 2006, the probe detected an 8,000 km diameter cyclone-like storm with an eyewall at Saturn's south pole.[156]
100
+
101
+ From 2004 to 2 November 2009, the probe discovered and confirmed eight new satellites.[157] In April 2013 Cassini sent back images of a hurricane at the planet's north pole 20 times larger than those found on Earth, with winds faster than 530 km/h (330 mph).[158] On 15 September 2017, the Cassini-Huygens spacecraft performed the "Grand Finale" of its mission: a number of passes through gaps between Saturn and Saturn's inner rings.[159][160] The atmospheric entry of Cassini ended the mission.
102
+
103
+ The continued exploration of Saturn is still considered a viable option for NASA as part of its ongoing New Frontiers program of missions. NASA previously requested that plans be put forward for a mission to Saturn that included the Saturn Atmospheric Entry Probe, as well as investigations by Dragonfly into the habitability, and possible discovery, of life on Saturn's moons Titan and Enceladus.[161][162]
104
+
105
+ Saturn is the most distant of the five planets easily visible to the naked eye from Earth, the other four being Mercury, Venus, Mars and Jupiter. (Uranus, and occasionally 4 Vesta, are visible to the naked eye in dark skies.) Saturn appears to the naked eye in the night sky as a bright, yellowish point of light. The mean apparent magnitude of Saturn is 0.46 with a standard deviation of 0.34.[16] Most of the magnitude variation is due to the inclination of the ring system relative to the Sun and Earth. The brightest magnitude, −0.55, occurs near the time when the plane of the rings is inclined most highly, and the faintest magnitude, 1.17, occurs around the time when they are least inclined.[16] It takes approximately 29.5 years for the planet to complete an entire circuit of the ecliptic against the background constellations of the zodiac. Most people will require an optical aid (very large binoculars or a small telescope) that magnifies at least 30 times to see Saturn's rings with clear resolution.[37][138] When Earth passes through the ring plane, which occurs twice every Saturnian year (roughly every 15 Earth years), the rings briefly disappear from view because they are so thin. Such a "disappearance" will next occur in 2025, but Saturn will be too close to the Sun for observations.[163]
106
+
107
+ Saturn and its rings are best seen when the planet is at, or near, opposition, the configuration of a planet when it is at an elongation of 180°, and thus appears opposite the Sun in the sky. A Saturnian opposition occurs every year—approximately every 378 days—and results in the planet appearing at its brightest. Both the Earth and Saturn orbit the Sun on eccentric orbits, which means their distances from the Sun vary over time, and therefore so do their distances from each other, hence varying the brightness of Saturn from one opposition to the next. Saturn also appears brighter when the rings are angled such that they are more visible. For example, during the opposition of 17 December 2002, Saturn appeared at its brightest due to a favorable orientation of its rings relative to the Earth,[164] even though Saturn was closer to the Earth and Sun in late 2003.[164]
108
+
109
+ From time to time, Saturn is occulted by the Moon (that is, the Moon covers up Saturn in the sky). As with all the planets in the Solar System, occultations of Saturn occur in "seasons". Saturnian occultations will take place monthly for about a 12-month period, followed by about a five-year period in which no such activity is registered. The Moon's orbit is inclined by several degrees relative to Saturn's, so occultations will only occur when Saturn is near one of the points in the sky where the two planes intersect (both the length of Saturn's year and the 18.6-Earth year nodal precession period of the Moon's orbit influence the periodicity).[165]
110
+
112
+
en/4648.html.txt ADDED
@@ -0,0 +1,112 @@
1
+
2
+
3
+
4
+
5
+ Saturn is the sixth planet from the Sun and the second-largest in the Solar System, after Jupiter. It is a gas giant with an average radius of about nine times that of Earth.[18][19] It only has one-eighth the average density of Earth; however, with its larger volume, Saturn is over 95 times more massive.[20][21][22] Saturn is named after the Roman god of wealth and agriculture; its astronomical symbol (♄) represents the god's sickle.
6
+
7
+ Saturn's interior is most likely composed of a core of iron–nickel and rock (silicon and oxygen compounds). Its core is surrounded by a deep layer of metallic hydrogen, an intermediate layer of liquid hydrogen and liquid helium, and finally a gaseous outer layer. Saturn has a pale yellow hue due to ammonia crystals in its upper atmosphere. An electrical current within the metallic hydrogen layer is thought to give rise to Saturn's planetary magnetic field, which is weaker than the Earth's, but has a magnetic moment 580 times that of Earth due to Saturn's larger size. Saturn's magnetic field strength is around one-twentieth of Jupiter's.[23] The outer atmosphere is generally bland and lacking in contrast, although long-lived features can appear. Wind speeds on Saturn can reach 1,800 km/h (1,100 mph; 500 m/s), higher than on Jupiter, but not as high as those on Neptune.[24] In January 2019, astronomers reported that a day on the planet Saturn has been determined to be 10h 33m 38s (+1m 52s, −1m 19s), based on studies of the planet's C Ring.[12][13]
8
+
9
+ The planet's most famous feature is its prominent ring system, which is composed mostly of ice particles, with a smaller amount of rocky debris and dust. At least 82 moons[25] are known to orbit Saturn, of which 53 are officially named; this does not include the hundreds of moonlets in its rings. Titan, Saturn's largest moon, and the second-largest in the Solar System, is larger than the planet Mercury, although less massive, and is the only moon in the Solar System to have a substantial atmosphere.[26]
10
+
11
+ Saturn is a gas giant because it is predominantly composed of hydrogen and helium. It lacks a definite surface, though it may have a solid core.[27] Saturn's rotation causes it to have the shape of an oblate spheroid; that is, it is flattened at the poles and bulges at its equator. Its equatorial and polar radii differ by almost 10%: 60,268 km versus 54,364 km.[9] Jupiter, Uranus, and Neptune, the other giant planets in the Solar System, are also oblate but to a lesser extent. The combination of the bulge and rotation rate means that the effective surface gravity along the equator, 8.96 m/s2, is 74% that at the poles and is lower than the surface gravity of Earth. However, the equatorial escape velocity of nearly 36 km/s is much higher than that for Earth.[28]
12
+
13
+ Saturn is the only planet of the Solar System that is less dense than water—about 30% less.[29] Although Saturn's core is considerably denser than water, the average density of the planet is 0.69 g/cm3 due to the atmosphere. Jupiter has 318 times Earth's mass,[30] and Saturn is 95 times Earth's mass.[9] Together, Jupiter and Saturn hold 92% of the total planetary mass in the Solar System.[31]
14
+
15
+ Despite consisting mostly of hydrogen and helium, most of Saturn's mass is not in the gas phase, because hydrogen becomes a non-ideal liquid when the density is above 0.01 g/cm3, which is reached at a radius containing 99.9% of Saturn's mass. The temperature, pressure, and density inside Saturn all rise steadily toward the core, which causes hydrogen to be a metal in the deeper layers.[31]
16
+
17
+ Standard planetary models suggest that the interior of Saturn is similar to that of Jupiter, having a small rocky core surrounded by hydrogen and helium, with trace amounts of various volatiles.[32] This core is similar in composition to Earth, but is more dense. The examination of Saturn's gravitational moment, in combination with physical models of the interior, has allowed constraints to be placed on the mass of Saturn's core. In 2004, scientists estimated that the core must be 9–22 times the mass of Earth,[33][34] which corresponds to a diameter of about 25,000 km.[35] This is surrounded by a thicker liquid metallic hydrogen layer, followed by a liquid layer of helium-saturated molecular hydrogen that gradually transitions to a gas with increasing altitude. The outermost layer spans 1,000 km and consists of gas.[36][37][38]
18
+
19
+ Saturn has a hot interior, reaching 11,700 °C at its core, and it radiates 2.5 times more energy into space than it receives from the Sun. Jupiter's thermal energy is generated by the Kelvin–Helmholtz mechanism of slow gravitational compression, but such a process alone may not be sufficient to explain heat production for Saturn, because it is less massive. An alternative or additional mechanism may be generation of heat through the "raining out" of droplets of helium deep in Saturn's interior. As the droplets descend through the lower-density hydrogen, the process releases heat by friction and leaves Saturn's outer layers depleted of helium.[39][40] These descending droplets may have accumulated into a helium shell surrounding the core.[32] Rainfalls of diamonds have been suggested to occur within Saturn, as well as in Jupiter[41] and ice giants Uranus and Neptune.[42]
20
+
21
+ The outer atmosphere of Saturn contains 96.3% molecular hydrogen and 3.25% helium by volume.[43] The proportion of helium is significantly deficient compared to the abundance of this element in the Sun.[32] The quantity of elements heavier than helium (metallicity) is not known precisely, but the proportions are assumed to match the primordial abundances from the formation of the Solar System. The total mass of these heavier elements is estimated to be 19–31 times the mass of the Earth, with a significant fraction located in Saturn's core region.[44]
22
+
23
+ Trace amounts of ammonia, acetylene, ethane, propane, phosphine, and methane have been detected in Saturn's atmosphere.[45][46][47] The upper clouds are composed of ammonia crystals, while the lower level clouds appear to consist of either ammonium hydrosulfide (NH4SH) or water.[48] Ultraviolet radiation from the Sun causes methane photolysis in the upper atmosphere, leading to a series of hydrocarbon chemical reactions with the resulting products being carried downward by eddies and diffusion. This photochemical cycle is modulated by Saturn's annual seasonal cycle.[47]
24
+
25
+ Saturn's atmosphere exhibits a banded pattern similar to Jupiter's, but Saturn's bands are much fainter and are much wider near the equator. The nomenclature used to describe these bands is the same as on Jupiter. Saturn's finer cloud patterns were not observed until the flybys of the Voyager spacecraft during the 1980s. Since then, Earth-based telescopy has improved to the point where regular observations can be made.[49]
26
+
27
+ The composition of the clouds varies with depth and increasing pressure. In the upper cloud layers, with the temperature in the range 100–160 K and pressures extending between 0.5–2 bar, the clouds consist of ammonia ice. Water ice clouds begin at a level where the pressure is about 2.5 bar and extend down to 9.5 bar, where temperatures range from 185–270 K. Intermixed in this layer is a band of ammonium hydrosulfide ice, lying in the pressure range 3–6 bar with temperatures of 190–235 K. Finally, the lower layers, where pressures are between 10–20 bar and temperatures are 270–330 K, contain a region of water droplets with ammonia in aqueous solution.[50]
28
+
29
+ Saturn's usually bland atmosphere occasionally exhibits long-lived ovals and other features common on Jupiter. In 1990, the Hubble Space Telescope imaged an enormous white cloud near Saturn's equator that was not present during the Voyager encounters, and in 1994 another smaller storm was observed. The 1990 storm was an example of a Great White Spot, a unique but short-lived phenomenon that occurs once every Saturnian year, roughly every 30 Earth years, around the time of the northern hemisphere's summer solstice.[51] Previous Great White Spots were observed in 1876, 1903, 1933 and 1960, with the 1933 storm being the most famous. If the periodicity is maintained, another storm will occur in about 2020.[52]
30
+
31
+ The winds on Saturn are the second fastest among the Solar System's planets, after Neptune's. Voyager data indicate peak easterly winds of 500 m/s (1,800 km/h).[53] In images from the Cassini spacecraft during 2007, Saturn's northern hemisphere displayed a bright blue hue, similar to Uranus. The color was most likely caused by Rayleigh scattering.[54] Thermography has shown that Saturn's south pole has a warm polar vortex, the only known example of such a phenomenon in the Solar System.[55] Whereas temperatures on Saturn are normally −185 °C, temperatures on the vortex often reach as high as −122 °C, suspected to be the warmest spot on Saturn.[55]
32
+
33
+ A persisting hexagonal wave pattern around the north polar vortex in the atmosphere at about 78°N was first noted in the Voyager images.[56][57][58] The sides of the hexagon are each about 13,800 km (8,600 mi) long, which is longer than the diameter of the Earth.[59] The entire structure rotates with a period of  10h 39m 24s (the same period as that of the planet's radio emissions) which is assumed to be equal to the period of rotation of Saturn's interior.[60] The hexagonal feature does not shift in longitude like the other clouds in the visible atmosphere.[61] The pattern's origin is a matter of much speculation. Most scientists think it is a standing wave pattern in the atmosphere. Polygonal shapes have been replicated in the laboratory through differential rotation of fluids.[62][63]
34
+
35
+ HST imaging of the south polar region indicates the presence of a jet stream, but no strong polar vortex nor any hexagonal standing wave.[64] NASA reported in November 2006 that Cassini had observed a "hurricane-like" storm locked to the south pole that had a clearly defined eyewall.[65][66] Eyewall clouds had not previously been seen on any planet other than Earth. For example, images from the Galileo spacecraft did not show an eyewall in the Great Red Spot of Jupiter.[67]
36
+
37
+ The south pole storm may have been present for billions of years.[68] This vortex is comparable to the size of Earth, and it has winds of 550 km/h.[68]
38
+
39
+ Cassini observed a series of cloud features nicknamed "String of Pearls" found in northern latitudes. These features are cloud clearings that reside in deeper cloud layers.[69]
40
+
41
+ Saturn has an intrinsic magnetic field that has a simple, symmetric shape – a magnetic dipole. Its strength at the equator – 0.2 gauss (20 µT) – is approximately one twentieth of that of the field around Jupiter and slightly weaker than Earth's magnetic field.[23] As a result, Saturn's magnetosphere is much smaller than Jupiter's.[71] When Voyager 2 entered the magnetosphere, the solar wind pressure was high and the magnetosphere extended only 19 Saturn radii, or 1.1 million km (712,000 mi),[72] although it enlarged within several hours, and remained so for about three days.[73] Most probably, the magnetic field is generated similarly to that of Jupiter – by currents in the liquid metallic-hydrogen layer called a metallic-hydrogen dynamo.[71] This magnetosphere is efficient at deflecting the solar wind particles from the Sun. The moon Titan orbits within the outer part of Saturn's magnetosphere and contributes plasma from the ionized particles in Titan's outer atmosphere.[23] Saturn's magnetosphere, like Earth's, produces aurorae.[74]
42
+
43
+ The average distance between Saturn and the Sun is over 1.4 billion kilometers (9 AU). With an average orbital speed of 9.68 km/s,[9] it takes Saturn 10,759 Earth days (or about ​29 1⁄2 years)[75] to finish one revolution around the Sun.[9] As a consequence, it forms a near 5:2 mean-motion resonance with Jupiter.[76] The elliptical orbit of Saturn is inclined 2.48° relative to the orbital plane of the Earth.[9] The perihelion and aphelion distances are, respectively, 9.195 and 9.957 AU, on average.[9][77] The visible features on Saturn rotate at different rates depending on latitude and multiple rotation periods have been assigned to various regions (as in Jupiter's case).
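+ As a rough, illustrative check of the near 5:2 mean-motion resonance mentioned above, the period ratio can be computed directly. The short Python sketch below uses the 10,759-day Saturn period quoted in this article together with an assumed value of about 4,333 days (roughly 11.86 years) for Jupiter's orbital period, a figure not given in this text.
+
+ # Sketch: how close is the Saturn/Jupiter orbital period ratio to 5:2?
+ saturn_period_days = 10_759    # from the text above
+ jupiter_period_days = 4_333    # assumed (~11.86 years), not from this article
+
+ ratio = saturn_period_days / jupiter_period_days
+ print(f"Saturn/Jupiter period ratio: {ratio:.3f}")  # ~2.483
+ print(f"Exact 5:2 ratio:             {5 / 2:.3f}")  # 2.500
+ # The ratio is close to, but not exactly, 5:2, hence a "near" resonance.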
44
+
45
+ Astronomers use three different systems for specifying the rotation rate of Saturn. System I has a period of  10h 14m 00s (844.3°/d) and encompasses the Equatorial Zone, the South Equatorial Belt, and the North Equatorial Belt. The polar regions are considered to have rotation rates similar to System I. All other Saturnian latitudes, excluding the north and south polar regions, are indicated as System II and have been assigned a rotation period of  10h 38m 25.4s (810.76°/d). System III refers to Saturn's internal rotation rate. Based on radio emissions from the planet detected by Voyager 1 and Voyager 2,[78] System III has a rotation period of  10h 39m 22.4s (810.8°/d). System III has largely superseded System II.[79]
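+ The degrees-per-day figures quoted for each system follow directly from the corresponding rotation periods (360° divided by the period expressed in days). A minimal Python sketch of that conversion, using the System I period given above, is shown here; it simply restates the arithmetic and is not part of the cited sources.
+
+ # Sketch: convert a rotation period given in hours, minutes and seconds to degrees per day.
+ def degrees_per_day(hours, minutes, seconds):
+     period_in_days = (hours * 3600 + minutes * 60 + seconds) / 86_400
+     return 360.0 / period_in_days
+
+ print(f"System I: {degrees_per_day(10, 14, 0):.1f} deg/day")  # ~844.3, matching the text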
46
+
47
+ A precise value for the rotation period of the interior remains elusive. While approaching Saturn in 2004, Cassini found that the radio rotation period of Saturn had increased appreciably, to approximately 10h 45m 45s ± 36s.[80][81] The latest estimate of Saturn's rotation (as an indicated rotation rate for Saturn as a whole), based on a compilation of various measurements from the Cassini, Voyager and Pioneer probes and reported in September 2007, is 10h 32m 35s.[82]
48
+
49
+ In March 2007, it was found that the variation of radio emissions from the planet did not match Saturn's rotation rate. This variance may be caused by geyser activity on Saturn's moon Enceladus. The water vapor emitted into Saturn's orbit by this activity becomes charged and creates a drag upon Saturn's magnetic field, slowing its rotation slightly relative to the rotation of the planet.[83][84][85]
50
+
51
+ An apparent oddity for Saturn is that it does not have any known trojan asteroids. These are minor planets that orbit the Sun at the stable Lagrangian points, designated L4 and L5, located at 60° angles to the planet along its orbit. Trojan asteroids have been discovered for Mars, Jupiter, Uranus, and Neptune. Orbital resonance mechanisms, including secular resonance, are believed to be the cause of the missing Saturnian trojans.[86]
52
+
53
+ Saturn has 82 known moons,[25] 53 of which have formal names.[87][88] In addition, there is evidence of dozens to hundreds of moonlets with diameters of 40–500 meters in Saturn's rings,[89] which are not considered to be true moons. Titan, the largest moon, comprises more than 90% of the mass in orbit around Saturn, including the rings.[90] Saturn's second-largest moon, Rhea, may have a tenuous ring system of its own,[91] along with a tenuous atmosphere.[92][93][94]
54
+
55
+ Many of the other moons are small: 34 are less than 10 km in diameter and another 14 between 10 and 50 km in diameter.[95] Traditionally, most of Saturn's moons have been named after Titans of Greek mythology. Titan is the only satellite in the Solar System with a major atmosphere,[96][97] in which a complex organic chemistry occurs. It is the only satellite with hydrocarbon lakes.[98][99]
56
+
57
+ On 6 June 2013, scientists at the IAA-CSIC reported the detection of polycyclic aromatic hydrocarbons in the upper atmosphere of Titan, a possible precursor for life.[100] On 23 June 2014, NASA claimed to have strong evidence that nitrogen in the atmosphere of Titan came from materials in the Oort cloud, associated with comets, and not from the materials that formed Saturn in earlier times.[101]
58
+
59
+ Saturn's moon Enceladus, which seems similar in chemical makeup to comets,[102] has often been regarded as a potential habitat for microbial life.[103][104][105][106] Evidence of this possibility includes the satellite's salt-rich particles having an "ocean-like" composition that indicates most of Enceladus's expelled ice comes from the evaporation of liquid salt water.[107][108][109] A 2015 flyby by Cassini through a plume on Enceladus found most of the ingredients to sustain life forms that live by methanogenesis.[110]
60
+
61
+ In April 2014, NASA scientists reported the possible beginning of a new moon within the A Ring, which was imaged by Cassini on 15 April 2013.[111]
62
+
63
+ Saturn is probably best known for the system of planetary rings that makes it visually unique.[37] The rings extend from 6,630 to 120,700 kilometers (4,120 to 75,000 mi) outward from Saturn's equator and average approximately 20 meters (66 ft) in thickness. They are composed predominantly of water ice with trace amounts of tholin impurities, and a peppered coating of approximately 7% amorphous carbon.[112] The particles that make up the rings range in size from specks of dust up to 10 m.[113] While the other gas giants also have ring systems, Saturn's is the largest and most visible.
64
+
65
+ There are two main hypotheses regarding the origin of the rings. One hypothesis is that the rings are remnants of a destroyed moon of Saturn. The second hypothesis is that the rings are left over from the original nebular material from which Saturn was formed. Some ice in the E ring comes from the moon Enceladus's geysers.[114][115][116][117] The water abundance of the rings varies radially, with the outermost A ring being the purest in water ice. This variation in abundance may be explained by meteor bombardment.[118]
66
+
67
+ Beyond the main rings at a distance of 12 million km from the planet is the sparse Phoebe ring, which is tilted at an angle of 27° to the other rings and, like Phoebe, orbits in retrograde fashion.[119]
68
+
69
+ Some of the moons of Saturn, including Pandora and Prometheus, act as shepherd moons to confine the rings and prevent them from spreading out.[120] Pan and Atlas cause weak, linear density waves in Saturn's rings that have yielded more reliable calculations of their masses.[121]
70
+
71
+ The observation and exploration of Saturn can be divided into three phases. The first phase is ancient observations (such as with the naked eye), before the invention of modern telescopes. The second phase began in the 17th century, with telescopic observations from Earth, which improved over time. The third phase is visitation by space probes, in orbit or on flyby. In the 21st century, telescopic observations continue from Earth (including Earth-orbiting observatories like the Hubble Space Telescope) and, until its 2017 retirement, from the Cassini orbiter around Saturn.
72
+
73
+ Saturn has been known since prehistoric times,[122] and in early recorded history it was a major character in various mythologies. Babylonian astronomers systematically observed and recorded the movements of Saturn.[123] In ancient Greek, the planet was known as Φαίνων Phainon,[124] and in Roman times it was known as the "star of Saturn".[125] In ancient Roman mythology, the planet was sacred to the agricultural god Saturnus, from whom the planet takes its modern name.[126] The Romans considered Saturnus the equivalent of the Greek god Cronus; in modern Greek, the planet retains the name Cronus—Κρόνος: Kronos.[127]
74
+
75
+ The Greek scientist Ptolemy based his calculations of Saturn's orbit on observations he made while it was in opposition.[128] In Hindu astrology, there are nine astrological objects, known as Navagrahas. Saturn is known as "Shani" and judges everyone based on the good and bad deeds performed in life.[126][128] Ancient Chinese and Japanese culture designated the planet Saturn as the "earth star" (土星). This was based on Five Elements which were traditionally used to classify natural elements.[129][130][131]
76
+
77
+ In ancient Hebrew, Saturn is called 'Shabbathai'.[132] Its angel is Cassiel. Its intelligence or beneficial spirit is 'Agȋȇl (Hebrew: אגיאל‎‎, romanized: ʿAgyal),[133] and its darker spirit (demon) is Zȃzȇl (Hebrew: זאזל‎, romanized: Zazl).[133][134][135] Zazel has been described as a great angel, invoked in Solomonic magic, who is "effective in love conjurations".[136][137] In Ottoman Turkish, Urdu and Malay, the name of Zazel is 'Zuhal', derived from the Arabic language (Arabic: زحل‎, romanized: Zuhal).[134]
78
+
79
+ Saturn's rings require at least a 15-mm-diameter telescope[138] to resolve and thus were not known to exist until Christiaan Huygens saw them in 1659. Galileo, observing with his primitive telescope in 1610,[139][140] incorrectly interpreted Saturn's not-quite-round appearance as two moons flanking the planet.[141][142] It was not until Huygens used greater telescopic magnification that this notion was refuted, and the rings were truly seen for the first time. Huygens also discovered Saturn's moon Titan; Giovanni Domenico Cassini later discovered four other moons: Iapetus, Rhea, Tethys and Dione. In 1675, Cassini discovered the gap now known as the Cassini Division.[143]
80
+
81
+ No further discoveries of significance were made until 1789 when William Herschel discovered two further moons, Mimas and Enceladus. The irregularly shaped satellite Hyperion, which has a resonance with Titan, was discovered in 1848 by a British team.[144]
82
+
83
+ In 1899 William Henry Pickering discovered Phoebe, a highly irregular satellite that does not rotate synchronously with Saturn as the larger moons do.[144] Phoebe was the first such satellite found and it takes more than a year to orbit Saturn in a retrograde orbit. During the early 20th century, research on Titan led to the confirmation in 1944 that it had a thick atmosphere – a feature unique among the Solar System's moons.[145]
84
+
85
+ Pioneer 11 made the first flyby of Saturn in September 1979, when it passed within 20,000 km of the planet's cloud tops. Images were taken of the planet and a few of its moons, although their resolution was too low to discern surface detail. The spacecraft also studied Saturn's rings, revealing the thin F-ring and the fact that dark gaps in the rings are bright when viewed at high phase angle (towards the Sun), meaning that they contain fine light-scattering material. In addition, Pioneer 11 measured the temperature of Titan.[146]
86
+
87
+ In November 1980, the Voyager 1 probe visited the Saturn system. It sent back the first high-resolution images of the planet, its rings and satellites. Surface features of various moons were seen for the first time. Voyager 1 performed a close flyby of Titan, increasing knowledge of the atmosphere of the moon. It proved that Titan's atmosphere is impenetrable in visible wavelengths; therefore no surface details were seen. The flyby changed the spacecraft's trajectory out from the plane of the Solar System.[147]
88
+
89
+ Almost a year later, in August 1981, Voyager 2 continued the study of the Saturn system. More close-up images of Saturn's moons were acquired, as well as evidence of changes in the atmosphere and the rings. Unfortunately, during the flyby, the probe's turnable camera platform stuck for a couple of days and some planned imaging was lost. Saturn's gravity was used to direct the spacecraft's trajectory towards Uranus.[147]
90
+
91
+ The probes discovered and confirmed several new satellites orbiting near or within the planet's rings, as well as the small Maxwell Gap (a gap within the C Ring) and Keeler gap (a 42 km-wide gap in the A Ring).
92
+
93
+ The Cassini–Huygens space probe entered orbit around Saturn on 1 July 2004. In June 2004, it conducted a close flyby of Phoebe, sending back high-resolution images and data. Cassini's flyby of Saturn's largest moon, Titan, captured radar images of large lakes and their coastlines with numerous islands and mountains. The orbiter completed two Titan flybys before releasing the Huygens probe on 25 December 2004. Huygens descended onto the surface of Titan on 14 January 2005.[148]
94
+
95
+ Starting in early 2005, scientists used Cassini to track lightning on Saturn. The power of the lightning is approximately 1,000 times that of lightning on Earth.[149]
96
+
97
+ In 2006, NASA reported that Cassini had found evidence of liquid water reservoirs no more than tens of meters below the surface that erupt in geysers on Saturn's moon Enceladus. These jets of icy particles are emitted into orbit around Saturn from vents in the moon's south polar region.[151] Over 100 geysers have been identified on Enceladus.[150] In May 2011, NASA scientists reported that Enceladus "is emerging as the most habitable spot beyond Earth in the Solar System for life as we know it".[152][153]
98
+
99
+ Cassini photographs have revealed a previously undiscovered planetary ring, outside the brighter main rings of Saturn and inside the G and E rings. The source of this ring is hypothesized to be material knocked off Janus and Epimetheus by meteoroid impacts.[154] In July 2006, images were returned of hydrocarbon lakes near Titan's north pole, the presence of which was confirmed in January 2007. In March 2007, hydrocarbon seas were found near the north pole, the largest of which is almost the size of the Caspian Sea.[155] In October 2006, the probe detected an 8,000 km diameter cyclone-like storm with an eyewall at Saturn's south pole.[156]
100
+
101
+ From 2004 to 2 November 2009, the probe discovered and confirmed eight new satellites.[157] In April 2013 Cassini sent back images of a hurricane at the planet's north pole 20 times larger than those found on Earth, with winds faster than 530 km/h (330 mph).[158] On 15 September 2017, the Cassini-Huygens spacecraft performed the "Grand Finale" of its mission: a number of passes through gaps between Saturn and Saturn's inner rings.[159][160] The atmospheric entry of Cassini ended the mission.
102
+
103
+ The continued exploration of Saturn is still considered a viable option for NASA as part of its ongoing New Frontiers program of missions. NASA previously requested that plans be put forward for a mission to Saturn that included the Saturn Atmospheric Entry Probe, as well as investigations into the habitability and possible discovery of life on Saturn's moons Titan and Enceladus, such as by the Dragonfly mission.[161][162]
104
+
105
+ Saturn is the most distant of the five planets easily visible to the naked eye from Earth, the other four being Mercury, Venus, Mars and Jupiter. (Uranus, and occasionally 4 Vesta, are visible to the naked eye in dark skies.) Saturn appears to the naked eye in the night sky as a bright, yellowish point of light. The mean apparent magnitude of Saturn is 0.46 with a standard deviation of 0.34.[16] Most of the magnitude variation is due to the inclination of the ring system relative to the Sun and Earth. The brightest magnitude, −0.55, occurs around the time when the ring plane is most inclined, and the faintest magnitude, 1.17, occurs around the time when it is least inclined.[16] It takes approximately 29.5 years for the planet to complete an entire circuit of the ecliptic against the background constellations of the zodiac. Most observers require an optical aid (very large binoculars or a small telescope) magnifying at least 30 times to resolve Saturn's rings clearly.[37][138] When Earth passes through the ring plane, which occurs twice every Saturnian year (roughly every 15 Earth years), the rings briefly disappear from view because they are so thin. Such a "disappearance" will next occur in 2025, but Saturn will be too close to the Sun for observations.[163]
106
+
107
+ Saturn and its rings are best seen when the planet is at, or near, opposition, the configuration of a planet when it is at an elongation of 180°, and thus appears opposite the Sun in the sky. A Saturnian opposition occurs every year—approximately every 378 days—and results in the planet appearing at its brightest. Both the Earth and Saturn orbit the Sun on eccentric orbits, which means their distances from the Sun vary over time, and therefore so do their distances from each other, hence varying the brightness of Saturn from one opposition to the next. Saturn also appears brighter when the rings are angled such that they are more visible. For example, during the opposition of 17 December 2002, Saturn appeared at its brightest due to a favorable orientation of its rings relative to the Earth,[164] even though Saturn was closer to the Earth and Sun in late 2003.[164]
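+ The roughly 378-day interval between oppositions is simply the Earth–Saturn synodic period, which follows from the two sidereal orbital periods via 1/P_syn = 1/P_Earth − 1/P_Saturn. The Python sketch below reproduces the figure, assuming the usual value of about 365.25 days for Earth's year (a value not stated in this article).
+
+ # Sketch: Earth-Saturn synodic period, i.e. the time between successive oppositions.
+ earth_period_days = 365.25     # assumed, not from this article
+ saturn_period_days = 10_759    # from the text above
+
+ synodic_days = 1 / (1 / earth_period_days - 1 / saturn_period_days)
+ print(f"Synodic period: {synodic_days:.0f} days")  # ~378, matching the text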
108
+
109
+ From time to time, Saturn is occulted by the Moon (that is, the Moon covers up Saturn in the sky). As with all the planets in the Solar System, occultations of Saturn occur in "seasons". Saturnian occultations will take place monthly for about a 12-month period, followed by about a five-year period in which no such activity is registered. The Moon's orbit is inclined by several degrees relative to Saturn's, so occultations will only occur when Saturn is near one of the points in the sky where the two planes intersect (both the length of Saturn's year and the 18.6-Earth year nodal precession period of the Moon's orbit influence the periodicity).[165]
110
+
111
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
112
+
en/4649.html.txt ADDED
@@ -0,0 +1,23 @@
1
+ A gas giant is a giant planet composed mainly of hydrogen and helium.[1] Gas giants are sometimes known as failed stars because they contain the same basic elements as a star. Jupiter and Saturn are the gas giants of the Solar System. The term "gas giant" was originally synonymous with "giant planet", but in the 1990s it became known that Uranus and Neptune are really a distinct class of giant planet, being composed mainly of heavier volatile substances (which are referred to as "ices"). For this reason, Uranus and Neptune are now often classified in the separate category of ice giants.[2]
2
+
3
+ Jupiter and Saturn consist mostly of hydrogen and helium, with heavier elements making up between 3 and 13 percent of the mass.[3] They are thought to consist of an outer layer of molecular hydrogen surrounding a layer of liquid metallic hydrogen, with probably a molten rocky core. The outermost portion of their hydrogen atmosphere is characterized by many layers of visible clouds that are mostly composed of water and ammonia. The layer of metallic hydrogen makes up the bulk of each planet, and is referred to as "metallic" because the very large pressure turns hydrogen into an electrical conductor. The gas giants' cores are thought to consist of heavier elements at such high temperatures (20,000 K) and pressures that their properties are poorly understood.[3]
4
+
5
+ The defining differences between a very low-mass brown dwarf and a gas giant (estimated at about 13 Jupiter masses) are debated.[4] One school of thought is based on formation; the other, on the physics of the interior.[4] Part of the debate concerns whether "brown dwarfs" must, by definition, have experienced nuclear fusion at some point in their history.
6
+
7
+ The term gas giant was coined in 1952 by the science fiction writer James Blish[5] and was originally used to refer to all giant planets. It is, arguably, something of a misnomer because throughout most of the volume of all giant planets, the pressure is so high that matter is not in gaseous form.[6] Other than solids in the core and the upper layers of the atmosphere, all matter is above the critical point, where there is no distinction between liquids and gases. The term has nevertheless caught on, because planetary scientists typically use "rock", "gas", and "ice" as shorthands for classes of elements and compounds commonly found as planetary constituents, irrespective of what phase the matter may appear in. In the outer Solar System, hydrogen and helium are referred to as "gases"; water, methane, and ammonia as "ices"; and silicates and metals as "rock". Because Uranus and Neptune are primarily composed of, in this terminology, ices, not gas, they are increasingly referred to as ice giants and separated from the gas giants.
8
+
9
+ Gas giants can, theoretically, be divided into five distinct classes according to their modeled physical atmospheric properties, and hence their appearance: ammonia clouds (I), water clouds (II), cloudless (III), alkali-metal clouds (IV), and silicate clouds (V). Jupiter and Saturn are both class I. Hot Jupiters are class IV or V.
10
+
11
+ A cold hydrogen-rich gas giant more massive than Jupiter but less than about 500 M⊕ (1.6 MJ) will only be slightly larger in volume than Jupiter.[7] For masses above 500 M⊕, gravity will cause the planet to shrink (see degenerate matter).[7]
12
+
13
+ Kelvin–Helmholtz heating can cause a gas giant to radiate more energy than it receives from its host star.[8][9]
14
+
15
+ Although the words "gas" and "giant" are often combined, hydrogen planets need not be as large as the familiar gas giants from the Solar System. However, smaller gas planets and planets closer to their star will lose atmospheric mass more quickly via hydrodynamic escape than larger planets and planets farther out.[10][11]
16
+
17
+ A gas dwarf could be defined as a planet with a rocky core that has accumulated a thick envelope of hydrogen, helium and other volatiles, resulting in a total radius between 1.7 and 3.9 Earth radii.[12][13]
18
+
19
+ The smallest known extrasolar planet that is likely a "gas planet" is Kepler-138d, which has the same mass as Earth but is 60% larger and therefore has a density that indicates a thick gas envelope.[14]
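+ The density implication is straightforward to verify: a planet with Earth's mass but a radius 1.6 times Earth's has a mean density of only 1/1.6³ of Earth's. The Python sketch below works this out, assuming Earth's mean density of about 5.51 g/cm³ (a value not given in this text).
+
+ # Sketch: implied mean density of a planet with Earth's mass but 1.6x Earth's radius.
+ earth_density_g_cm3 = 5.51   # assumed mean density of Earth, not from this article
+ radius_ratio = 1.6           # Kepler-138d radius relative to Earth, from the text
+
+ implied_density = earth_density_g_cm3 / radius_ratio ** 3
+ print(f"Implied mean density: {implied_density:.2f} g/cm^3")  # ~1.35, consistent with a thick gas envelope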
20
+
21
+ A low-mass gas planet can still have a radius resembling that of a gas giant if it has the right temperature.[15]
22
+
23
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/465.html.txt ADDED
@@ -0,0 +1,195 @@
1
+
2
+
3
+ Coordinates: 25°S 133°E
4
+
5
+ Australia, officially the Commonwealth of Australia,[12] is a sovereign country comprising the mainland of the Australian continent, the island of Tasmania, and numerous smaller islands. It is the largest country in Oceania and the world's sixth-largest country by total area. The population of 26 million[6] is highly urbanised and heavily concentrated on the eastern seaboard.[13] Australia's capital is Canberra, and its largest city is Sydney. The country's other major metropolitan areas are Melbourne, Brisbane, Perth, and Adelaide.
6
+
7
+ Indigenous Australians inhabited the continent for about 65,000 years[14] prior to the first arrival of Dutch explorers in the early 17th century, who named it New Holland. In 1770, Australia's eastern half was claimed by Great Britain and initially settled through penal transportation to the colony of New South Wales from 26 January 1788, a date which became Australia's national day. The population grew steadily in subsequent decades, and by the time of an 1850s gold rush, most of the continent had been explored by European settlers and an additional five self-governing crown colonies established. On 1 January 1901, the six colonies federated, forming the Commonwealth of Australia. Australia has since maintained a stable liberal democratic political system that functions as a federal parliamentary constitutional monarchy, comprising six states and ten territories.
8
+
9
+ Australia is the oldest,[15] flattest,[16] and driest inhabited continent,[17][18] with the least fertile soils.[19][20] It has a landmass of 7,617,930 square kilometres (2,941,300 sq mi).[21] A megadiverse country, its size gives it a wide variety of landscapes, with deserts in the centre, tropical rainforests in the north-east, and mountain ranges in the south-east. Australia generates its income from various sources, including mining-related exports, telecommunications, banking, manufacturing, and international education.[22][23][24]
10
+
11
+ Australia is a highly developed country, with the world's 14th-largest economy. It has a high-income economy, with the world's tenth-highest per capita income.[25] It is a regional power and has the world's 13th-highest military expenditure.[26] Immigrants account for 30% of the population,[27] the highest proportion in any country with a population over 10 million.[28] Having the third-highest human development index and the eighth-highest ranked democracy globally, the country ranks highly in quality of life, health, education, economic freedom, civil liberties, and political rights,[29] with all its major cities faring well in global comparative livability surveys.[30] Australia is a member of the United Nations, G20, Commonwealth of Nations, ANZUS, Organisation for Economic Co-operation and Development (OECD), World Trade Organization, Asia-Pacific Economic Cooperation, Pacific Islands Forum, and the ASEAN Plus Six mechanism.
12
+
13
+ The name Australia (pronounced /əˈstreɪliə/ in Australian English[31]) is derived from the Latin Terra Australis ("southern land"), a name used for a hypothetical continent in the Southern Hemisphere since ancient times.[32] When Europeans first began visiting and mapping Australia in the 17th century, the name Terra Australis was naturally applied to the new territories.[N 5]
14
+
15
+ Until the early 19th century, Australia was best known as "New Holland", a name first applied by the Dutch explorer Abel Tasman in 1644 (as Nieuw-Holland) and subsequently anglicised. Terra Australis still saw occasional usage, such as in scientific texts.[N 6] The name Australia was popularised by the explorer Matthew Flinders, who said it was "more agreeable to the ear, and an assimilation to the names of the other great portions of the earth".[38] Several famous early cartographers also made use of the word Australia on maps. Gerardus Mercator (1512–1594) used the phrase climata australia on his double cordiform map of the world of 1538, as did Gemma Frisius (1508–1555), who was Mercator's teacher and collaborator, on his own cordiform wall map in 1540. Australia appears in a book on astronomy by Cyriaco Jacob zum Barth published in Frankfurt-am-Main in 1545.[39]
16
+
17
+ The first time that Australia appears to have been officially used was in April 1817, when Governor Lachlan Macquarie acknowledged the receipt of Flinders' charts of Australia from Lord Bathurst.[40] In December 1817, Macquarie recommended to the Colonial Office that it be formally adopted.[41] In 1824, the Admiralty agreed that the continent should be known officially by that name.[42] The first official published use of the new name came with the publication in 1830 of The Australia Directory by the Hydrographic Office.[43]
18
+
19
+ Colloquial names for Australia include "Oz" and "the Land Down Under" (usually shortened to just "Down Under"). Other epithets include "the Great Southern Land", "the Lucky Country", "the Sunburnt Country", and "the Wide Brown Land". The latter two both derive from Dorothea Mackellar's 1908 poem "My Country".[44]
20
+
21
+ Human habitation of the Australian continent is known to have begun at least 65,000 years ago,[45][46] with the migration of people by land bridges and short sea-crossings from what is now Southeast Asia.[47] The Madjedbebe rock shelter in Arnhem Land is recognised as the oldest site showing the presence of humans in Australia.[48] The oldest human remains found are the Lake Mungo remains, which have been dated to around 41,000 years ago.[49][50] These people were the ancestors of modern Indigenous Australians.[51] Aboriginal Australian culture is one of the oldest continual cultures on earth.[52]
22
+
23
+ At the time of first European contact, most Indigenous Australians were hunter-gatherers with complex economies and societies.[53][54] Recent archaeological finds suggest that a population of 750,000 could have been sustained.[55][56] Indigenous Australians have an oral culture with spiritual values based on reverence for the land and a belief in the Dreamtime.[57] The Torres Strait Islanders, ethnically Melanesian, obtained their livelihood from seasonal horticulture and the resources of their reefs and seas.[58] The northern coasts and waters of Australia were visited sporadically by Makassan fishermen from what is now Indonesia.[59]
24
+
25
+ The first recorded European sighting of the Australian mainland, and the first recorded European landfall on the Australian continent, are attributed to the Dutch.[60] The first ship and crew to chart the Australian coast and meet with Aboriginal people was the Duyfken captained by Dutch navigator, Willem Janszoon.[61] He sighted the coast of Cape York Peninsula in early 1606, and made landfall on 26 February at the Pennefather River near the modern town of Weipa on Cape York.[62] Later that year, Spanish explorer Luís Vaz de Torres sailed through, and navigated, Torres Strait islands.[63] The Dutch charted the whole of the western and northern coastlines and named the island continent "New Holland" during the 17th century, and although no attempt at settlement was made,[62] a number of shipwrecks left men either stranded or, as in the case of the Batavia in 1629, marooned for mutiny and murder, thus becoming the first Europeans to permanently inhabit the continent.[64] William Dampier, an English explorer and privateer, landed on the north-west coast of New Holland in 1688 (while serving as a crewman under pirate Captain John Read[65]) and again in 1699 on a return trip.[66] In 1770, James Cook sailed along and mapped the east coast, which he named New South Wales and claimed for Great Britain.[67]
26
+
27
+ With the loss of its American colonies in 1783, the British Government sent a fleet of ships, the "First Fleet", under the command of Captain Arthur Phillip, to establish a new penal colony in New South Wales. A camp was set up and the Union flag raised at Sydney Cove, Port Jackson, on 26 January 1788,[68][69] a date which later became Australia's national day, Australia Day. Most early convicts were transported for petty crimes and assigned as labourers or servants upon arrival. While the majority settled into colonial society once emancipated, convict rebellions and uprisings were also staged, but invariably suppressed under martial law. The 1808 Rum Rebellion, the only successful armed takeover of government in Australia, instigated a two-year period of military rule.[70]
28
+
29
+ The indigenous population declined for 150 years following settlement, mainly due to infectious disease.[71] Thousands more died as a result of frontier conflict with settlers.[72] A government policy of "assimilation" beginning with the Aboriginal Protection Act 1869 resulted in the removal of many Aboriginal children from their families and communities—referred to as the Stolen Generations—a practice which also contributed to the decline in the indigenous population.[73] As a result of the 1967 referendum, the Federal government's power to enact special laws with respect to a particular race was extended to enable the making of laws with respect to Aboriginals.[74] Traditional ownership of land ("native title") was not recognised in law until 1992, when the High Court of Australia held in Mabo v Queensland (No 2) that the legal doctrine that Australia had been terra nullius ("land belonging to no one") did not apply to Australia at the time of British settlement.[75]
30
+
31
+ The expansion of British control over other areas of the continent began in the early 19th century, initially confined to coastal regions. A settlement was established in Van Diemen's Land (present-day Tasmania) in 1803, and it became a separate colony in 1825.[76] In 1813, Gregory Blaxland, William Lawson and William Wentworth crossed the Blue Mountains west of Sydney, opening the interior to European settlement.[77] The British claim was extended to the whole Australian continent in 1827 when Major Edmund Lockyer established a settlement on King George Sound (modern-day Albany, Western Australia).[78] The Swan River Colony was established in 1829, evolving into the largest Australian colony by area, Western Australia.[79] In accordance with population growth, separate colonies were carved from parts of New South Wales: South Australia in 1836, New Zealand in 1841, Victoria in 1851, and Queensland in 1859.[80] The Northern Territory was excised from South Australia in 1911.[81] South Australia was founded as a "free province"—it was never a penal colony.[82] Western Australia was also founded "free" but later accepted transported convicts, the last of which arrived in 1868, decades after transportation had ceased to the other colonies.[83] By 1850, Europeans still had not entered large areas of the inland. Explorers remained ambitious to discover new lands for agriculture or answers to scientific enquiries.[84]
32
+
33
+ A series of gold rushes beginning in the early 1850s led to an influx of new migrants from China, North America and mainland Europe,[85] and also spurred outbreaks of bushranging and civil unrest. The latter peaked in 1854 when Ballarat miners launched the Eureka Rebellion against gold license fees.[86] Between 1855 and 1890, the six colonies individually gained responsible government, managing most of their own affairs while remaining part of the British Empire.[87] The Colonial Office in London retained control of some matters, notably foreign affairs,[88] defence,[89] and international shipping.
34
+
35
+ On 1 January 1901, federation of the colonies was achieved after a decade of planning, consultation and voting.[90] After the 1907 Imperial Conference, Australia and the other self-governing British colonies were given the status of "dominion" within the British Empire.[91][92] The Federal Capital Territory (later renamed the Australian Capital Territory) was formed in 1911 as the location for the future federal capital of Canberra. Melbourne was the temporary seat of government from 1901 to 1927 while Canberra was being constructed.[93] The Northern Territory was transferred from the control of the South Australian government to the federal parliament in 1911.[94] Australia became the colonial ruler of the Territory of Papua (which had initially been annexed by Queensland in 1888) in 1902 and of the Territory of New Guinea (formerly German New Guinea) in 1920. The two were unified as the Territory of Papua and New Guinea in 1949 and gained independence from Australia in 1975.
36
+
37
+ In 1914, Australia joined Britain in fighting World War I, with support from both the outgoing Commonwealth Liberal Party and the incoming Australian Labor Party.[95][96] Australians took part in many of the major battles fought on the Western Front.[97] Of about 416,000 who served, about 60,000 were killed and another 152,000 were wounded.[98] Many Australians regard the defeat of the Australian and New Zealand Army Corps (ANZACs) at Gallipoli as the birth of the nation—its first major military action.[99][100] The Kokoda Track campaign is regarded by many as an analogous nation-defining event during World War II.[101]
38
+
39
+ Britain's Statute of Westminster 1931 formally ended most of the constitutional links between Australia and the UK. Australia adopted it in 1942,[102] but it was backdated to 1939 to confirm the validity of legislation passed by the Australian Parliament during World War II.[103][104] The shock of Britain's defeat in Asia in 1942, followed soon after by the Japanese bombing of Darwin and attack on Sydney Harbour, led to a widespread belief in Australia that an invasion was imminent, and a shift towards the United States as a new ally and protector.[105] Since 1951, Australia has been a formal military ally of the US, under the ANZUS treaty.[106]
40
+
41
+ After World War II, Australia encouraged immigration from mainland Europe. Since the 1970s and following the abolition of the White Australia policy, immigration from Asia and elsewhere was also promoted.[107] As a result, Australia's demography, culture, and self-image were transformed.[108] The Australia Act 1986 severed the remaining constitutional ties between Australia and the UK.[109] In a 1999 referendum, 55% of voters and a majority in every state rejected a proposal to become a republic with a president appointed by a two-thirds vote in both Houses of the Australian Parliament. There has been an increasing focus in foreign policy on ties with other Pacific Rim nations, while maintaining close ties with Australia's traditional allies and trading partners.[110]
42
+
43
+ Surrounded by the Indian and Pacific oceans,[N 7] Australia is separated from Asia by the Arafura and Timor seas, with the Coral Sea lying off the Queensland coast, and the Tasman Sea lying between Australia and New Zealand. The world's smallest continent[112] and sixth largest country by total area,[113] Australia—owing to its size and isolation—is often dubbed the "island continent"[114] and is sometimes considered the world's largest island.[115] Australia has 34,218 kilometres (21,262 mi) of coastline (excluding all offshore islands),[116] and claims an extensive Exclusive Economic Zone of 8,148,250 square kilometres (3,146,060 sq mi). This exclusive economic zone does not include the Australian Antarctic Territory.[117] Apart from Macquarie Island, Australia lies between latitudes 9° and 44°S, and longitudes 112° and 154°E.
44
+
45
+ Australia's size gives it a wide variety of landscapes, with tropical rainforests in the north-east, mountain ranges in the south-east, south-west and east, and desert in the centre.[118] The desert or semi-arid land commonly known as the outback makes up by far the largest portion of land.[119] Australia is the driest inhabited continent; its annual rainfall averaged over continental area is less than 500 mm.[120] The population density is 3.2 inhabitants per square kilometre, although a large proportion of the population lives along the temperate south-eastern coastline.[121]
46
+
47
+ The Great Barrier Reef, the world's largest coral reef,[122] lies a short distance off the north-east coast and extends for over 2,000 kilometres (1,240 mi). Mount Augustus, claimed to be the world's largest monolith,[123] is located in Western Australia. At 2,228 metres (7,310 ft), Mount Kosciuszko is the highest mountain on the Australian mainland. Even taller are Mawson Peak (at 2,745 metres or 9,006 feet), on the remote Australian external territory of Heard Island, and, in the Australian Antarctic Territory, Mount McClintock and Mount Menzies, at 3,492 metres (11,457 ft) and 3,355 metres (11,007 ft) respectively.[124]
48
+
49
+ Eastern Australia is marked by the Great Dividing Range, which runs parallel to the coast of Queensland, New South Wales and much of Victoria. The name is not strictly accurate, because parts of the range consist of low hills, and the highlands are typically no more than 1,600 metres (5,249 ft) in height.[125] The coastal uplands and a belt of Brigalow grasslands lie between the coast and the mountains, while inland of the dividing range are large areas of grassland and shrubland.[125][126] These include the western plains of New South Wales, and the Mitchell Grass Downs and Mulga Lands of inland Queensland. The northernmost point of the east coast is the tropical Cape York Peninsula.[127][128][129][130]
50
+
51
+ The landscapes of the Top End and the Gulf Country—with their tropical climate—include forest, woodland, wetland, grassland, rainforest and desert.[131][132][133] At the north-west corner of the continent are the sandstone cliffs and gorges of The Kimberley, and below that the Pilbara. The Victoria Plains tropical savanna lies south of the Kimberley and Arnhem Land savannas, forming a transition between the coastal savannas and the interior deserts.[134][135][136] At the heart of the country are the uplands of central Australia. Prominent features of the centre and south include Uluru (also known as Ayers Rock), the famous sandstone monolith, and the inland Simpson, Tirari and Sturt Stony, Gibson, Great Sandy, Tanami, and Great Victoria deserts, with the famous Nullarbor Plain on the southern coast.[137][138][139][140] The Western Australian mulga shrublands lie between the interior deserts and Mediterranean-climate Southwest Australia.[141][142]
52
+
53
+ Lying on the Indo-Australian Plate, the mainland of Australia is the lowest and most primordial landmass on Earth with a relatively stable geological history.[143][144] The landmass includes virtually all known rock types and from all geological time periods spanning over 3.8 billion years of the Earth's history. The Pilbara Craton is one of only two pristine Archaean 3.6–2.7 Ga (billion years ago) crusts identified on the Earth.[145]
54
+
55
+ Having been part of all major supercontinents, the Australian continent began to form after the breakup of Gondwana in the Permian, with the separation of the continental landmass from the African continent and Indian subcontinent. It separated from Antarctica over a prolonged period beginning in the Permian and continuing through to the Cretaceous.[146] When the last glacial period ended in about 10,000 BC, rising sea levels formed Bass Strait, separating Tasmania from the mainland. Then between about 8,000 and 6,500 BC, the lowlands in the north were flooded by the sea, separating New Guinea, the Aru Islands, and the mainland of Australia.[147] The Australian continent is currently moving toward Eurasia at the rate of 6 to 7 centimetres a year.[148]
56
+
57
+ The Australian mainland's continental crust, excluding the thinned margins, has an average thickness of 38 km, with a range in thickness from 24 km to 59 km.[149] Australia's geology can be divided into several main sections, showcasing that the continent grew from west to east: the Archaean cratonic shields found mostly in the west, Proterozoic fold belts in the centre and Phanerozoic sedimentary basins, metamorphic and igneous rocks in the east.[150]
58
+
59
+ The Australian mainland and Tasmania are situated in the middle of the tectonic plate and currently have no active volcanoes,[151] but due to passing over the East Australia hotspot, recent volcanism has occurred during the Holocene, in the Newer Volcanics Province of western Victoria and southeastern South Australia. Volcanism also occurs in the island of New Guinea (considered geologically as part of the Australian continent), and in the Australian external territory of Heard Island and McDonald Islands.[152] Seismic activity in the Australian mainland and Tasmania is also low, with the greatest number of fatalities having occurred in the 1989 Newcastle earthquake.[153]
60
+
61
+ The climate of Australia is significantly influenced by ocean currents, including the Indian Ocean Dipole and the El Niño–Southern Oscillation, which is correlated with periodic drought, and the seasonal tropical low-pressure system that produces cyclones in northern Australia.[155][156] These factors cause rainfall to vary markedly from year to year. Much of the northern part of the country has a tropical, predominantly summer-rainfall (monsoon) climate.[120] The south-west corner of the country has a Mediterranean climate.[157] The south-east ranges from oceanic (Tasmania and coastal Victoria) to humid subtropical (upper half of New South Wales), with the highlands featuring alpine and subpolar oceanic climates. The interior is arid to semi-arid.[120]
62
+
63
+ According to the Bureau of Meteorology's 2011 Australian Climate Statement, Australia had lower than average temperatures in 2011 as a consequence of a La Niña weather pattern; however, "the country's 10-year average continues to demonstrate the rising trend in temperatures, with 2002–2011 likely to rank in the top two warmest 10-year periods on record for Australia, at 0.52 °C (0.94 °F) above the long-term average".[158] Furthermore, 2014 was Australia's third warmest year since national temperature observations commenced in 1910.[159][160]
64
+
65
+ Water restrictions are frequently in place in many regions and cities of Australia in response to chronic shortages due to urban population increases and localised drought.[161][162] Throughout much of the continent, major flooding regularly follows extended periods of drought, flushing out inland river systems, overflowing dams and inundating large inland flood plains, as occurred throughout Eastern Australia in 2010, 2011 and 2012 after the 2000s Australian drought.
66
+
67
+ Australia's carbon dioxide emissions per capita are among the highest in the world, lower than those of only a few other industrialised nations.[163]
68
+
69
+ January 2019 was the hottest month ever in Australia with average temperatures exceeding 30 °C (86 °F).[164][165]
70
+
71
+ The 2019–20 Australian bushfire season was Australia's worst bushfire season on record.[166]
72
+
73
+ Although most of Australia is semi-arid or desert, the continent includes a diverse range of habitats from alpine heaths to tropical rainforests. Fungi typify that diversity; an estimated 250,000 species, of which only 5% have been described, occur in Australia.[167] Because of the continent's great age, extremely variable weather patterns, and long-term geographic isolation, much of Australia's biota is unique. About 85% of flowering plants, 84% of mammals, more than 45% of birds, and 89% of in-shore, temperate-zone fish are endemic.[168] Australia has at least 755 species of reptile, more than any other country in the world.[169] Besides Antarctica, Australia is the only continent that developed without feline species. Feral cats may have been introduced in the 17th century by Dutch shipwrecks, and later in the 18th century by European settlers. They are now considered a major factor in the decline and extinction of many vulnerable and endangered native species.[170]
74
+
75
+ Australian forests are mostly made up of evergreen species, particularly eucalyptus trees in the less arid regions; wattles replace them as the dominant species in drier regions and deserts.[171] Among well-known Australian animals are the monotremes (the platypus and echidna); a host of marsupials, including the kangaroo, koala, and wombat, and birds such as the emu and the kookaburra.[171] Australia is home to many dangerous animals including some of the most venomous snakes in the world.[172] The dingo was introduced by Austronesian people who traded with Indigenous Australians around 3000 BCE.[173] Many animal and plant species became extinct soon after first human settlement,[174] including the Australian megafauna; others have disappeared since European settlement, among them the thylacine.[175][176]
76
+
77
+ Many of Australia's ecoregions, and the species within those regions, are threatened by human activities and introduced animal, chromistan, fungal and plant species.[177] All these factors have led to Australia's having the highest mammal extinction rate of any country in the world.[178] The federal Environment Protection and Biodiversity Conservation Act 1999 is the legal framework for the protection of threatened species.[179] Numerous protected areas have been created under the National Strategy for the Conservation of Australia's Biological Diversity to protect and preserve unique ecosystems;[180][181] 65 wetlands are listed under the Ramsar Convention,[182] and 16 natural World Heritage Sites have been established.[183] Australia was ranked 21st out of 178 countries in the world on the 2018 Environmental Performance Index.[184] There are more than 1,800 animals and plants on Australia's threatened species list, including more than 500 animals.[185]
78
+
79
+ Australia is a federal parliamentary constitutional monarchy.[186] The country has maintained a stable liberal democratic political system under its constitution, which is one of the world's oldest, since Federation in 1901. It is also one of the world's oldest federations, in which power is divided between the federal and state and territorial governments. The Australian system of government combines elements derived from the political systems of the United Kingdom (a fused executive, constitutional monarchy and strong party discipline) and the United States (federalism, a written constitution and strong bicameralism with an elected upper house), along with distinctive indigenous features.[187][188]
80
+
81
+ The federal government is separated into three branches:
82
+
83
+ Elizabeth II reigns as Queen of Australia and is represented in Australia by the governor-general at the federal level and by the governors at the state level, who by convention act on the advice of her ministers.[190][191] Thus, in practice the governor-general acts as a legal figurehead for the actions of the prime minister and the Federal Executive Council. The governor-general does have extraordinary reserve powers which may be exercised outside the prime minister's request in rare and limited circumstances, the most notable exercise of which was the dismissal of the Whitlam Government in the constitutional crisis of 1975.[192]
84
+
85
+ In the Senate (the upper house), there are 76 senators: twelve each from the states and two each from the mainland territories (the Australian Capital Territory and the Northern Territory).[193] The House of Representatives (the lower house) has 151 members elected from single-member electoral divisions, commonly known as "electorates" or "seats", allocated to states on the basis of population,[194] with each original state guaranteed a minimum of five seats.[195] Elections for both chambers are normally held every three years simultaneously; senators have overlapping six-year terms except for those from the territories, whose terms are not fixed but are tied to the electoral cycle for the lower house; thus only 40 of the 76 places in the Senate are put to each election unless the cycle is interrupted by a double dissolution.[193]
86
+
87
+ Australia's electoral system uses preferential voting for all lower house elections with the exception of Tasmania and the ACT which, along with the Senate and most state upper houses, combine it with proportional representation in a system known as the single transferable vote. Voting is compulsory for all enrolled citizens 18 years and over in every jurisdiction,[196] as is enrolment (with the exception of South Australia).[197] The party with majority support in the House of Representatives forms the government and its leader becomes Prime Minister. In cases where no party has majority support, the Governor-General has the constitutional power to appoint the Prime Minister and, if necessary, dismiss one that has lost the confidence of Parliament.[198]
88
+
89
+ There are two major political groups that usually form government, federally and in the states: the Australian Labor Party and the Coalition which is a formal grouping of the Liberal Party and its minor partner, the National Party.[199][200] Within Australian political culture, the Coalition is considered centre-right and the Labor Party is considered centre-left.[201] Independent members and several minor parties have achieved representation in Australian parliaments, mostly in upper houses. The Australian Greens are often considered the "third force" in politics, being the third largest party by both vote and membership.[202]
90
+
91
+ The most recent federal election was held on 18 May 2019 and resulted in the Coalition, led by Prime Minister Scott Morrison, retaining government.[203]
92
+
93
+ Australia has six states—New South Wales (NSW), Queensland (QLD), South Australia (SA), Tasmania (TAS), Victoria (VIC) and Western Australia (WA)—and two major mainland territories—the Australian Capital Territory (ACT) and the Northern Territory (NT). In most respects, these two territories function as states, except that the Commonwealth Parliament has the power to modify or repeal any legislation passed by the territory parliaments.[204]
94
+
95
+ Under the constitution, the states essentially have plenary legislative power to legislate on any subject, whereas the Commonwealth (federal) Parliament may legislate only within the subject areas enumerated under section 51. For example, state parliaments have the power to legislate with respect to education, criminal law and state police, health, transport, and local government, but the Commonwealth Parliament does not have any specific power to legislate in these areas.[205] However, Commonwealth laws prevail over state laws to the extent of the inconsistency.[206] In addition, the Commonwealth has the power to levy income tax which, coupled with the power to make grants to States, has given it the financial means to incentivise States to pursue specific legislative agendas within areas over which the Commonwealth does not have legislative power.
96
+
97
+ Each state and major mainland territory has its own parliament—unicameral in the Northern Territory, the ACT and Queensland, and bicameral in the other states. The states are sovereign entities, although subject to certain powers of the Commonwealth as defined by the Constitution. The lower houses are known as the Legislative Assembly (the House of Assembly in South Australia and Tasmania); the upper houses are known as the Legislative Council. The head of the government in each state is the Premier and in each territory the Chief Minister. The Queen is represented in each state by a governor; and in the Northern Territory, the administrator.[207] In the Commonwealth, the Queen's representative is the governor-general.[208]
98
+
99
+ The Commonwealth Parliament also directly administers the external territories of Ashmore and Cartier Islands, the Australian Antarctic Territory, Christmas Island, the Cocos (Keeling) Islands, the Coral Sea Islands, and Heard Island and McDonald Islands, as well as the internal Jervis Bay Territory, a naval base and sea port for the national capital in land that was formerly part of New South Wales.[189] The external territory of Norfolk Island previously exercised considerable autonomy under the Norfolk Island Act 1979 through its own legislative assembly and an Administrator to represent the Queen.[209] In 2015, the Commonwealth Parliament abolished self-government, integrating Norfolk Island into the Australian tax and welfare systems and replacing its legislative assembly with a council.[210] Macquarie Island is part of Tasmania,[211] and Lord Howe Island of New South Wales.[212]
100
+
101
+ Over recent decades, Australia's foreign relations have been driven by a close association with the United States through the ANZUS pact, and by a desire to develop relationships with Asia and the Pacific, particularly through ASEAN, the Pacific Islands Forum and the Pacific Community, of which Australia is a founding member. In 2005 Australia secured an inaugural seat at the East Asia Summit following its accession to the Treaty of Amity and Cooperation in Southeast Asia, and in 2011 attended the Sixth East Asia Summit in Indonesia. Australia is a member of the Commonwealth of Nations, in which the Commonwealth Heads of Government meetings provide the main forum for co-operation.[213] Australia has pursued the cause of international trade liberalisation.[214] It led the formation of the Cairns Group and Asia-Pacific Economic Cooperation.[215][216]
102
+
103
+ Australia is a member of the Organisation for Economic Co-operation and Development and the World Trade Organization,[217][218] and has pursued several major bilateral free trade agreements, including the Australia–United States Free Trade Agreement,[219] Closer Economic Relations with New Zealand[220] and the Australia–Chile Free Trade Agreement, with further agreements negotiated with China (the Australia–China Free Trade Agreement), Japan[221] and South Korea in 2011.[222][223] As of November 2015[update], the Trans-Pacific Partnership had been put before parliament for ratification.[224]
104
+
105
+ Australia maintains a deeply integrated relationship with neighbouring New Zealand, with free mobility of citizens between the two countries under the Trans-Tasman Travel Arrangement and free trade under the
106
+ Australia–New Zealand Closer Economic Relations Trade Agreement.[225] New Zealand, Canada and the United Kingdom are the countries viewed most favourably by Australians,[226][227] and each shares a number of close diplomatic, military and cultural ties with Australia.
107
+
108
+ Along with New Zealand, the United Kingdom, Malaysia and Singapore, Australia is party to the Five Power Defence Arrangements, a regional defence agreement. A founding member country of the United Nations, Australia is strongly committed to multilateralism[228] and maintains an international aid program under which some 60 countries receive assistance. The 2005–06 budget provides A$2.5 billion for development assistance.[229] Australia ranks fifteenth overall in the Center for Global Development's 2012 Commitment to Development Index.[230]
109
+
110
+ Australia's armed forces—the Australian Defence Force (ADF)—comprise the Royal Australian Navy (RAN), the Australian Army and the Royal Australian Air Force (RAAF), in total numbering 81,214 personnel (including 57,982 regulars and 23,232 reservists) as of November 2015[update]. The titular role of Commander-in-Chief is vested in the Governor-General, who appoints a Chief of the Defence Force from one of the armed services on the advice of the government.[231] Day-to-day force operations are under the command of the Chief, while broader administration and the formulation of defence policy is undertaken by the Minister and Department of Defence.
111
+
112
+ In the 2016–17 budget, defence spending comprised 2% of GDP, representing the world's 12th largest defence budget.[232] Australia has been involved in UN and regional peacekeeping, disaster relief and armed conflict, including the 2003 invasion of Iraq; it currently has deployed about 2,241 personnel in varying capacities to 12 international operations in areas including Iraq and Afghanistan.[233]
113
+
114
+ A wealthy country, Australia has a market economy, a high GDP per capita, and a relatively low rate of poverty. In terms of average wealth, Australia ranked second in the world after Switzerland from 2013 until 2018,[235] and in 2018 it overtook Switzerland to become the country with the highest average wealth.[235] In 2013, the Credit Suisse Research Institute identified Australia as the nation with the highest median wealth in the world and the second-highest average wealth per adult.[236] Australia's poverty rate increased from 10.2% in 2000/01 to 11.8% in 2013.[236][237]
115
+
116
+ The Australian dollar is the currency for the nation, including Christmas Island, Cocos (Keeling) Islands, and Norfolk Island, as well as the independent Pacific Island states of Kiribati, Nauru, and Tuvalu. With the 2006 merger of the Australian Stock Exchange and the Sydney Futures Exchange, the Australian Securities Exchange became the ninth largest in the world.[238]
117
+
118
+ Ranked fifth in the Index of Economic Freedom (2017),[239] Australia is the world's 14th largest economy and has the tenth highest per capita GDP (nominal) at US$55,692.[240] The country was ranked third in the United Nations 2017 Human Development Index.[241] Melbourne reached top spot for the fourth year in a row on The Economist's 2014 list of the world's most liveable cities,[242] followed by Adelaide, Sydney, and Perth in the fifth, seventh, and ninth places respectively. Total government debt in Australia is about A$190 billion[243]—20% of GDP in 2010.[244] Australia has among the highest house prices and some of the highest household debt levels in the world.[245]
119
+
120
+ An emphasis on exporting commodities rather than manufactured goods has underpinned a significant increase in Australia's terms of trade since the start of the 21st century, due to rising commodity prices. Australia's balance of payments is negative by more than 7% of GDP, and the country has run persistently large current account deficits for more than 50 years.[246] The economy has grown at an average annual rate of 3.6% for over 15 years, compared with the OECD annual average of 2.5%.[246]
121
+
122
+ Australia was the only advanced economy not to experience a recession due to the global financial downturn in 2008–2009.[247] However, the economies of six of Australia's major trading partners have been in recession[when?], which in turn has affected Australia, significantly hampering its economic growth in recent years[when?].[248][249] From 2012 to early 2013, Australia's national economy grew, but some non-mining states and Australia's non-mining economy experienced a recession.[250][251][252]
123
+
124
+ The Hawke Government floated the Australian dollar in 1983 and partially deregulated the financial system.[253] The Howard Government followed with a partial deregulation of the labour market and the further privatisation of state-owned businesses, most notably in the telecommunications industry.[254] The indirect tax system was substantially changed in July 2000 with the introduction of a 10% Goods and Services Tax (GST).[255] In Australia's tax system, personal and company income tax are the main sources of government revenue.[256]
125
+
126
+ As of September 2018[update], there were 12,640,800 people employed (either full- or part-time), with an unemployment rate of 5.2%.[257] Data released in mid-November 2013 showed that the number of welfare recipients had grown by 55%. In 2007, 228,621 Newstart unemployment allowance recipients were registered, a total that increased to 646,414 in March 2013.[258] According to the Graduate Careers Survey, full-time employment for newly qualified professionals from various occupations has declined since 2011 but increases for graduates three years after graduation.[259][260]
127
+
128
+ Since 2008[when?], inflation has typically been 2–3% and the base interest rate 5–6%. The service sector of the economy, including tourism, education, and financial services, accounts for about 70% of GDP.[261] Rich in natural resources, Australia is a major exporter of agricultural products, particularly wheat and wool, minerals such as iron-ore and gold, and energy in the forms of liquified natural gas and coal. Although agriculture and natural resources account for only 3% and 5% of GDP respectively, they contribute substantially to export performance. Australia's largest export markets are Japan, China, the United States, South Korea, and New Zealand.[262] Australia is the world's fourth largest exporter of wine, and the wine industry contributes A$5.5 billion per year to the nation's economy.[263]
129
+
130
+ Access to biocapacity in Australia is much higher than the world average. In 2016, Australia had 12.3 global hectares[264] of biocapacity per person within its territory, much more than the world average of 1.6 global hectares per person.[265] In the same year, Australians used 6.6 global hectares of biocapacity per person – their ecological footprint of consumption. This is roughly half the biocapacity available within Australia, so the country runs a biocapacity reserve.[264]
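The reserve follows directly from the figures quoted above. A minimal arithmetic sketch (variable names are illustrative only; the values are the 2016 figures cited in the paragraph):

```python
# Check of the 2016 biocapacity figures quoted above.
biocapacity_per_person = 12.3   # global hectares available within Australia, per person
footprint_per_person = 6.6      # global hectares consumed per person

reserve = biocapacity_per_person - footprint_per_person
share_used = footprint_per_person / biocapacity_per_person

print(f"Biocapacity reserve: {reserve:.1f} gha per person")     # ~5.7 gha
print(f"Share of domestic biocapacity used: {share_used:.0%}")  # ~54%, i.e. roughly half
```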
131
+
132
+ In 2020, ACOSS released a report revealing that poverty is growing in Australia, with an estimated 3.2 million people, or 13.6% of the population, living below the internationally accepted poverty line of 50% of a country's median income. It also estimated that 774,000 children under the age of 15 (17.7%) are living in poverty.[266][267]
133
+
134
+ Australia has an average population density of 3.3 persons per square kilometre of total land area, which makes it one of the most sparsely populated countries in the world. The population is heavily concentrated on the east coast, and in particular in the south-eastern region between South East Queensland to the north-east and Adelaide to the south-west.[268]
135
+
136
+ Australia is highly urbanised, with 67% of the population living in the Greater Capital City Statistical Areas (metropolitan areas of the state and mainland territorial capital cities) in 2018.[269] Metropolitan areas with more than one million inhabitants are Sydney, Melbourne, Brisbane, Perth and Adelaide.
137
+
138
+ In common with many other developed countries, Australia is experiencing a demographic shift towards an older population, with more retirees and fewer people of working age. In 2018 the average age of the Australian population was 38.8 years.[270] In 2015, 2.15% of the Australian population lived overseas, one of the lowest proportions worldwide.[271]
139
+
140
+ Between 1788 and the Second World War, the vast majority of settlers and immigrants came from the British Isles (principally England, Ireland and Scotland), although there was significant immigration from China and Germany during the 19th century. In the decades immediately following the Second World War, Australia received a large wave of immigration from across Europe, with many more immigrants arriving from Southern and Eastern Europe than in previous decades. Since the end of the White Australia policy in 1973, Australia has pursued an official policy of multiculturalism,[274] and there has been a large and continuing wave of immigration from across the world, with Asia being the largest source of immigrants in the 21st century.[275]
141
+
142
+ Today, Australia has the world's eighth-largest immigrant population, with immigrants accounting for 30% of the population, a higher proportion than in any other nation with a population of over 10 million.[27][276] 160,323 permanent immigrants were admitted to Australia in 2018–19 (excluding refugees),[275] whilst there was a net population gain of 239,600 people from all permanent and temporary immigration in that year.[277] The majority of immigrants are skilled,[275] but the immigration program includes categories for family members and refugees.[277] In 2019 the largest foreign-born populations were those born in England (3.9%), Mainland China (2.7%), India (2.6%), New Zealand (2.2%), the Philippines (1.2%) and Vietnam (1%).[27]
143
+
144
+ In the 2016 Australian census, the most commonly nominated ancestries were:[N 9][278][279]
145
+
146
+ At the 2016 census, 649,171 people (2.8% of the total population) identified as being Indigenous—Aboriginal Australians and Torres Strait Islanders.[N 12][281] Indigenous Australians experience higher than average rates of imprisonment and unemployment, lower levels of education, and life expectancies for males and females that are, respectively, 11 and 17 years lower than those of non-indigenous Australians.[262][282][283] Some remote Indigenous communities have been described as having "failed state"-like conditions.[284]
147
+
148
+ Although Australia has no official language, English is the de facto national language.[2] Australian English is a major variety of the language with a distinctive accent and lexicon,[285] and differs slightly from other varieties of English in grammar and spelling.[286] General Australian serves as the standard dialect.
149
+
150
+ According to the 2016 census, English is the only language spoken in the home for 72.7% of the population. The next most common languages spoken at home are Mandarin (2.5%), Arabic (1.4%), Cantonese (1.2%), Vietnamese (1.2%) and Italian (1.2%).[278] A considerable proportion of first- and second-generation migrants are bilingual.
151
+
152
+ Over 250 Indigenous Australian languages are thought to have existed at the time of first European contact,[287] of which fewer than twenty are still in daily use by all age groups.[288][289] About 110 others are spoken exclusively by older people.[289] At the time of the 2006 census, 52,000 Indigenous Australians, representing 12% of the Indigenous population, reported that they spoke an Indigenous language at home.[290] Australia has a sign language known as Auslan, which is the main language of about 10,112 deaf people who reported that they used Auslan at home in the 2016 census.[291]
153
+
154
+ Australia has no state religion; Section 116 of the Australian Constitution prohibits the federal government from making any law to establish any religion, impose any religious observance, or prohibit the free exercise of any religion.[293] In the 2016 census, 52.1% of Australians were counted as Christian, including 22.6% as Catholic and 13.3% as Anglican; 30.1% of the population reported having "no religion"; 8.2% identify with non-Christian religions, the largest of these being Islam (2.6%), followed by Buddhism (2.4%), Hinduism (1.9%), Sikhism (0.5%) and Judaism (0.4%). The remaining 9.7% of the population did not provide an adequate answer. Those who reported having no religion increased conspicuously from 19% in 2006 to 22% in 2011 to 30.1% in 2016.[292]
155
+
156
+ Before European settlement, the animist beliefs of Australia's indigenous people had been practised for many thousands of years. Mainland Aboriginal Australians' spirituality is known as the Dreamtime and it places a heavy emphasis on belonging to the land. The collection of stories that it contains shaped Aboriginal law and customs. Aboriginal art, story and dance continue to draw on these spiritual traditions. The spirituality and customs of Torres Strait Islanders, who inhabit the islands between Australia and New Guinea, reflected their Melanesian origins and dependence on the sea. The 1996 Australian census counted more than 7000 respondents as followers of a traditional Aboriginal religion.[294]
157
+
158
+ Since the arrival of the First Fleet of British ships in 1788, Christianity has become the major religion practised in Australia. Christian churches have played an integral role in the development of education, health and welfare services in Australia. For much of Australian history, the Church of England (now known as the Anglican Church of Australia) was the largest religious denomination, with a large Roman Catholic minority. However, multicultural immigration has contributed to a steep decline in its relative position since the Second World War. Similarly, Islam, Buddhism, Hinduism, Sikhism and Judaism have all grown in Australia over the past half-century.[295]
159
+
160
+ Australia has one of the lowest levels of religious adherence in the world.[296] In 2001, only 8.8% of Australians attended church on a weekly basis.[297]
161
+
162
+ Australia's life expectancy is the third highest in the world for males and the seventh highest for females.[298] Life expectancy in Australia in 2010 was 79.5 years for males and 84.0 years for females.[299] Australia has the highest rates of skin cancer in the world,[300] while cigarette smoking is the largest preventable cause of death and disease, responsible for 7.8% of the total mortality and disease. Ranked second in preventable causes is hypertension at 7.6%, with obesity third at 7.5%.[301][302] Australia ranks 35th in the world[303] and near the top of developed nations for its proportion of obese adults,[304] and nearly two-thirds (63%) of its adult population is either overweight or obese.[305]
163
+
164
+ Total expenditure on health (including private sector spending) is around 9.8% of GDP.[306] Australia introduced universal health care in 1975.[307] Known as Medicare, it is now nominally funded by an income tax surcharge known as the Medicare levy, currently set at 2%.[308] The states manage hospitals and attached outpatient services, while the Commonwealth funds the Pharmaceutical Benefits Scheme (subsidising the costs of medicines) and general practice.[307]
165
+
166
+ School attendance, or registration for home schooling,[310] is compulsory throughout Australia. Education is the responsibility of the individual states and territories[311] so the rules vary between states, but in general children are required to attend school from the age of about 5 until about 16.[312][313] In some states (e.g., Western Australia,[314] the Northern Territory[315] and New South Wales[316][317]), children aged 16–17 are required to either attend school or participate in vocational training, such as an apprenticeship.
167
+
168
+ Australia's adult literacy rate was estimated at 99% in 2003.[318] However, a 2011–12 report for the Australian Bureau of Statistics reported that Tasmania has a literacy and numeracy rate of only 50%.[319] In the Programme for International Student Assessment, Australia regularly scores among the top five of thirty major developed countries (member countries of the Organisation for Economic Co-operation and Development). Catholic education accounts for the largest non-government sector.
169
+
170
+ Australia has 37 government-funded universities and two private universities, as well as a number of other specialist institutions that provide approved courses at the higher education level.[320] The OECD places Australia among the most expensive nations to attend university.[321] There is a state-based system of vocational training, known as TAFE, and many trades conduct apprenticeships for training new tradespeople.[322] About 58% of Australians aged from 25 to 64 have vocational or tertiary qualifications,[262] and the tertiary graduation rate of 49% is the highest among OECD countries. 30.9 percent of Australia's population has attained a higher education qualification, which is among the highest percentages in the world.[323][324][325]
171
+
172
+ Australia has the highest ratio of international students per head of population in the world by a large margin, with 812,000 international students enrolled in the nation's universities and vocational institutions in 2019.[326][327] Accordingly, in 2019, international students represented on average 26.7% of the student bodies of Australian universities. International education therefore represents one of the country's largest exports and has a pronounced influence on the country's demographics, with a significant proportion of international students remaining in Australia after graduation on various skill and employment visas.[328]
173
+
174
+ Since 1788, the primary influence behind Australian culture has been Anglo-Celtic Western culture, with some Indigenous influences.[330][331] The divergence and evolution that has occurred in the ensuing centuries has resulted in a distinctive Australian culture.[332][333] The culture of the United States has also been highly influential, particularly through television and cinema. Other cultural influences come from neighbouring Asian countries, and through large-scale immigration from non-English-speaking nations.[334]
175
+
176
+ Australia has over 100,000 Aboriginal rock art sites,[335] and traditional designs, patterns and stories infuse contemporary Indigenous Australian art, "the last great art movement of the 20th century" according to critic Robert Hughes;[336] its exponents include Emily Kame Kngwarreye.[337] Early colonial artists showed a fascination with the unfamiliar land.[338] The impressionistic works of Arthur Streeton, Tom Roberts and other members of the 19th-century Heidelberg School—the first "distinctively Australian" movement in Western art—gave expression to nationalist sentiments in the lead-up to Federation.[338] While the school remained influential into the 1900s, modernists such as Margaret Preston, and, later, Sidney Nolan and Arthur Boyd, explored new artistic trends.[338] The landscape remained a central subject matter for Fred Williams, Brett Whiteley and other post-war artists whose works, eclectic in style yet uniquely Australian, moved between the figurative and the abstract.[338][339] The national and state galleries maintain collections of local and international art.[340] Australia has one of the world's highest attendances of art galleries and museums per head of population.[341]
177
+
178
+ Australian literature grew slowly in the decades following European settlement though Indigenous oral traditions, many of which have since been recorded in writing, are much older.[343] In the 1870s, Adam Lindsay Gordon posthumously became the first Australian poet to attain a wide readership. Following in his footsteps, Henry Lawson and Banjo Paterson captured the experience of the bush using a distinctive Australian vocabulary.[344] Their works are still popular; Paterson's bush poem "Waltzing Matilda" (1895) is regarded as Australia's unofficial national anthem.[345] Miles Franklin is the namesake of Australia's most prestigious literary prize, awarded annually to the best novel about Australian life.[346] Its first recipient, Patrick White, went on to win the Nobel Prize in Literature in 1973.[347] Australian Booker Prize winners include Peter Carey, Thomas Keneally and Richard Flanagan.[348] Author David Malouf, playwright David Williamson and poet Les Murray are also renowned.[349][350]
179
+
180
+ Many of Australia's performing arts companies receive funding through the federal government's Australia Council.[351] There is a symphony orchestra in each state,[352] and a national opera company, Opera Australia,[353] well known for its famous soprano Joan Sutherland.[354] At the beginning of the 20th century, Nellie Melba was one of the world's leading opera singers.[355] Ballet and dance are represented by The Australian Ballet and various state companies. Each state has a publicly funded theatre company.[356]
181
+
182
+ The Story of the Kelly Gang (1906), the world's first feature-length narrative film, spurred a boom in Australian cinema during the silent film era.[357] After World War I, Hollywood monopolised the industry,[358] and by the 1960s Australian film production had effectively ceased.[359] With the benefit of government support, the Australian New Wave of the 1970s brought provocative and successful films, many exploring themes of national identity, such as Wake in Fright and Gallipoli,[360] while Crocodile Dundee and the Ozploitation movement's Mad Max series became international blockbusters.[361] In a film market flooded with foreign content, Australian films delivered a 7.7% share of the local box office in 2015.[362] The AACTAs are Australia's premier film and television awards, and notable Academy Award winners from Australia include Geoffrey Rush, Nicole Kidman, Cate Blanchett and Heath Ledger.[363]
183
+
184
+ Australia has two public broadcasters (the Australian Broadcasting Corporation and the multicultural Special Broadcasting Service), three commercial television networks, several pay-TV services,[364] and numerous public, non-profit television and radio stations. Each major city has at least one daily newspaper,[364] and there are two national daily newspapers, The Australian and The Australian Financial Review.[364] In 2010, Reporters Without Borders placed Australia 18th on a list of 178 countries ranked by press freedom, behind New Zealand (8th) but ahead of the United Kingdom (19th) and United States (20th).[365] This relatively low ranking is primarily because of the limited diversity of commercial media ownership in Australia;[366] most print media are under the control of News Corporation and, after Fairfax Media was merged with Nine, Nine Entertainment Co.[367]
185
+
186
+ Most Indigenous Australian groups subsisted on a simple hunter-gatherer diet of native fauna and flora, otherwise called bush tucker.[368] The first settlers introduced British food to the continent, much of which is now considered typical Australian food, such as the Sunday roast.[369][370] Multicultural immigration transformed Australian cuisine; post-World War II European migrants, particularly from the Mediterranean, helped to build a thriving Australian coffee culture, and the influence of Asian cultures has led to Australian variants of their staple foods, such as the Chinese-inspired dim sim and Chiko Roll.[371] Vegemite, pavlova, lamingtons and meat pies are regarded as iconic Australian foods.[372] Australian wine is produced mainly in the southern, cooler parts of the country.
187
+
188
+ Australia is also known for its cafe and coffee culture in urban centres, which has influenced coffee culture abroad, including New York City.[373] Australia is credited with the flat white coffee, purported to have originated in a Sydney cafe in the mid-1980s.[374]
189
+
190
+ Cricket and football are the predominant sports in Australia during the summer and winter months, respectively. Australia is unique in that it has professional leagues for four football codes. Originating in Melbourne in the 1850s, Australian rules football is the most popular code in all states except New South Wales and Queensland, where rugby league holds sway, followed by rugby union. Soccer, while ranked fourth in popularity and resources, has the highest overall participation rates.[376] Cricket is popular across all borders and has been regarded by many Australians as the national sport. The Australian national cricket team competed against England in the first Test match (1877) and the first One Day International (1971), and against New Zealand in the first Twenty20 International (2004), winning all three games. It has also participated in every edition of the Cricket World Cup, winning the tournament a record five times.[377]
191
+
192
+ Australia is a powerhouse in water-based sports, such as swimming and surfing.[378] The surf lifesaving movement originated in Australia, and the volunteer lifesaver is one of the country's icons.[379] Nationally, other popular sports include horse racing, basketball, and motor racing. The annual Melbourne Cup horse race and the Sydney to Hobart yacht race attract intense interest.[380] In 2016, the Australian Sports Commission revealed that swimming, cycling and soccer are the three most popular participation sports.[381][382]
193
+
194
+ Australia is one of five nations to have participated in every Summer Olympics of the modern era,[383] and has hosted the Games twice: 1956 in Melbourne and 2000 in Sydney.[384] Australia has also participated in every Commonwealth Games,[385] hosting the event in 1938, 1962, 1982, 2006 and 2018.[386] Australia made its inaugural appearance at the Pacific Games in 2015. As well as being a regular FIFA World Cup participant, Australia has won the OFC Nations Cup four times and the AFC Asian Cup once—the only country to have won championships in two different FIFA confederations.[387] In June 2020, Australia won its bid to co-host the 2023 FIFA Women's World Cup with New Zealand.[388][389] The country regularly competes among the world's elite basketball teams, ranking among the global top three in terms of qualifications for the basketball tournament at the Summer Olympics. Other major international events held in Australia include the Australian Open tennis grand slam tournament, international cricket matches, and the Australian Formula One Grand Prix. The highest-rating television programs include sports telecasts such as the Summer Olympics, FIFA World Cup, The Ashes, Rugby League State of Origin, and the grand finals of the National Rugby League and Australian Football League.[390] Skiing in Australia began in the 1860s and snow sports take place in the Australian Alps and parts of Tasmania.
195
+
en/4650.html.txt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ A planet, in astronomy, is one of a class of celestial bodies that orbit stars. (A dwarf planet is a similar, but officially mutually exclusive, class of body.)
2
+
3
+ Planet or Planets may also refer to:
en/4651.html.txt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ A planet, in astronomy, is one of a class of celestial bodies that orbit stars. (A dwarf planet is a similar, but officially mutually exclusive, class of body.)
2
+
3
+ Planet or Planets may also refer to:
en/4652.html.txt ADDED
@@ -0,0 +1,121 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ A dwarf planet is a planetary-mass object that does not dominate its region of space (as a true or classical planet does) and is not a satellite. That is, it is in direct orbit of the Sun and is massive enough to be plastic – for its gravity to maintain it in a shape of hydrostatic equilibrium (usually a spheroid) – but has not cleared the neighborhood around its orbit of other material.[2] The prototype dwarf planet is Pluto.[3]
4
+
5
+ The number of dwarf planets in the Solar System is unknown, as determining whether a potential body is a dwarf planet requires close observation. The half-dozen largest candidates have at least one known moon, allowing determination of their masses. The interest of dwarf planets to planetary geologists is that, being differentiated and perhaps geologically active bodies, they are likely to display planetary geology, an expectation borne out by the 2015 New Horizons mission to Pluto and Dawn mission to Ceres.
6
+
7
+ The term dwarf planet was coined by planetary scientist Alan Stern as part of a three-way categorization of planetary-mass objects in the Solar System: classical planets (the big eight), dwarf planets and satellite planets. Dwarf planets were thus originally conceived of as a kind of planet, as the name suggests. However, in 2006 the term was adopted by the International Astronomical Union (IAU) as a category of sub-planetary objects, part of a three-way recategorization of bodies orbiting the Sun[2] precipitated by the discovery of Eris, an object farther away from the Sun than Neptune that was more massive than Pluto but still much smaller than the classical planets, after discoveries of a number of other objects that rivaled Pluto in size had forced a reconsideration of what Pluto was.[4] Thus Stern and many other planetary geologists distinguish dwarf planets from classical planets, but since 2006 the IAU and the majority of astronomers have excluded bodies such as Eris and Pluto from the roster of planets altogether. This redefinition of what constitutes a planet has been both praised and criticized.[5][6][7][8][9][10]
8
+
9
+ Starting in 1801, astronomers discovered Ceres and other bodies between Mars and Jupiter that for decades were considered to be planets. Between then and around 1851, when the number of planets had reached 23, astronomers started using the word asteroid for the smaller bodies and then stopped naming or classifying them as planets.[12]
10
+
11
+ With the discovery of Pluto in 1930, most astronomers considered the Solar System to have nine planets, along with thousands of significantly smaller bodies (asteroids and comets). For almost 50 years Pluto was thought to be larger than Mercury,[13][14] but with the discovery in 1978 of Pluto's moon Charon, it became possible to measure Pluto's mass accurately and to determine that it was much smaller than initial estimates.[15] It was roughly one-twentieth the mass of Mercury, which made Pluto by far the smallest planet. Although it was still more than ten times as massive as the largest object in the asteroid belt, Ceres, it had only one-fifth the mass of Earth's Moon.[16] Furthermore, having some unusual characteristics, such as large orbital eccentricity and a high orbital inclination, it became evident that it was a different kind of body from any of the other planets.[17]
12
+
13
+ In the 1990s, astronomers began to find objects in the same region of space as Pluto (now known as the Kuiper belt), and some even farther away.[18]
14
+ Many of these shared several of Pluto's key orbital characteristics, and Pluto started being seen as the largest member of a new class of objects, the plutinos. It became clear that either the larger of these bodies would also have to be classified as planets, or Pluto would have to be reclassified, much as Ceres had been reclassified after the discovery of additional asteroids.[19]
15
+ This led some astronomers to stop referring to Pluto as a planet. Several terms, including subplanet and planetoid, started to be used for the bodies now known as dwarf planets.[20][21]
16
+ Astronomers were also confident that more objects as large as Pluto would be discovered, and the number of planets would start growing quickly if Pluto were to remain classified as a planet.[22]
17
+
18
+ Eris (then known as 2003 UB313) was discovered in January 2005;[23] it was thought to be slightly larger than Pluto, and some reports informally referred to it as the tenth planet.[24] As a consequence, the issue became a matter of intense debate during the IAU General Assembly in August 2006.[25] The IAU's initial draft proposal included Charon, Eris, and Ceres in the list of planets. After many astronomers objected to this proposal, an alternative was drawn up by the Uruguayan astronomers Julio Ángel Fernández and Gonzalo Tancredi: they proposed an intermediate category for objects large enough to be round but which had not cleared their orbits of planetesimals. Dropping Charon from the list, the new proposal also removed Pluto, Ceres, and Eris, because they have not cleared their orbits.[26]
19
+
20
+ The IAU's final Resolution 5A preserved this three-category system for the celestial bodies orbiting the Sun. It reads:
21
+
22
+ The IAU ... resolves that planets and other bodies, except satellites, in our Solar System be defined into three distinct categories in the following way:
23
+
24
+ (1) A planet1 is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit.
25
+ (2) A "dwarf planet" is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape,2 (c) has not cleared the neighbourhood around its orbit, and (d) is not a satellite.
26
+ (3) All other objects,3 except satellites, orbiting the Sun shall be referred to collectively as "Small Solar System Bodies."
27
+
28
+ The IAU never did establish a process to assign borderline objects, leaving such judgements to astronomers. However, it did subsequently establish guidelines under which an IAU committee would oversee the naming of possible dwarf planets: unnamed trans-Neptunian objects with an absolute magnitude brighter than +1 (and hence a minimum diameter of 838 km corresponding to a geometric albedo of 1)[27] were to be named by the dwarf-planet naming committee.[28] At the time (and still as of 2019), the only bodies to meet the naming criterion were Haumea and Makemake.
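The 838 km figure follows from the standard photometric relation between a small body's diameter, geometric albedo p and absolute magnitude H, D ≈ (1329 km / √p) × 10^(−H/5): for a given brightness, a perfectly reflective surface gives the smallest possible body. A minimal sketch of that calculation (the function name is illustrative, not from any particular library):

```python
import math

def diameter_km(abs_magnitude: float, geometric_albedo: float) -> float:
    """Standard small-body relation: D = 1329 km / sqrt(p) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(geometric_albedo) * 10 ** (-abs_magnitude / 5)

# At the naming-rule threshold H = +1, a perfectly reflective surface (p = 1)
# gives the smallest body consistent with that brightness:
print(round(diameter_km(1.0, 1.0)))   # ~838 km, the minimum diameter quoted above
# A darker, more typical surface implies a larger object for the same H:
print(round(diameter_km(1.0, 0.1)))   # ~2650 km
```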
29
+
30
+ These five bodies – the three under consideration in 2006 (Pluto, Ceres and Eris) plus the two named in 2008 (Haumea and Makemake) – are commonly presented as the dwarf planets of the Solar System by naming authorities.[29]
31
+ However, only one of them – Pluto – has been observed in enough detail to verify that its current shape fits what would be expected from hydrostatic equilibrium.[30] Ceres is close to equilibrium, but some gravitational anomalies remain unexplained.[31]
32
+
33
+ On the other hand, the astronomical community typically refers to the larger TNOs as dwarf planets.[32] For instance, JPL/NASA characterized Gonggong as a dwarf planet after observations in 2016,[33] and Simon Porter spoke of "the big eight [TNO] dwarf planets" in 2018.[34]
34
+
35
+ Although concerns were raised about the classification of planets orbiting other stars,[35] the issue was not resolved; it was proposed instead to decide this only when such objects start to be observed.[26]
36
+
37
+ Names for large subplanetary bodies include dwarf planet, planetoid, meso-planet, quasi-planet and (in the transneptunian region) plutoid. Dwarf planet, however, was originally coined as a term for the smallest planets, not the largest sub-planets, and is still used that way by many planetary astronomers.
38
+
39
+ Alan Stern coined the term dwarf planet, analogous to the term dwarf star, as part of a three-fold classification of planets, and he and many of his colleagues continue to classify dwarf planets as a class of planets. The IAU decided that dwarf planets are not to be considered planets, but kept Stern's term for them. Other terms for the IAU definition of the largest subplanetary bodies that do not have such conflicting connotations or usage include quasi-planet[36]
40
+ and the older term planetoid ("having the form of a planet").[37]
41
+ Michael E. Brown stated that planetoid is "a perfectly good word" that has been used for these bodies for years, and that the use of the term dwarf planet for a non-planet is "dumb", but that it was motivated by an attempt by the IAU division III plenary session to reinstate Pluto as a planet in a second resolution.[38] Indeed, the draft of Resolution 5A had called these median bodies planetoids,[39][40] but the plenary session voted unanimously to change the name to dwarf planet.[2] The second resolution, 5B, defined dwarf planets as a subtype of planet, as Stern had originally intended, distinguished from the other eight that were to be called "classical planets". Under this arrangement, the twelve planets of the rejected proposal were to be preserved in a distinction between eight classical planets and four dwarf planets. Resolution 5B was defeated in the same session that 5A was passed.[38] Because of the semantic inconsistency of a dwarf planet not being a planet due to the failure of Resolution 5B, alternative terms such as nanoplanet and subplanet were discussed, but there was no consensus among the CSBN to change it.[41]
42
+
43
+ In most languages equivalent terms have been created by translating dwarf planet more-or-less literally: French planète naine, Spanish planeta enano, German Zwergplanet, Russian karlikovaya planeta (карликовая планета), Arabic kaukab qazm (كوكب قزم), Chinese ǎixíngxīng (矮行星), Korean waesohangseong or waehangseong (왜소행성; 矮小行星, 왜행성; 矮行星), but in Japanese they are called junwakusei (準惑星), meaning "quasi-planets" or "peneplanets".
44
+
45
+ IAU Resolution 6a of 2006[3] recognizes Pluto as "the prototype of a new category of trans-Neptunian objects". The name and precise nature of this category were not specified but left for the IAU to establish at a later date; in the debate leading up to the resolution, the members of the category were variously referred to as plutons and plutonian objects but neither name was carried forward, perhaps due to objections from geologists that this would create confusion with their pluton.[2]
46
+
47
+ On June 11, 2008, the IAU Executive Committee announced a name, plutoid, and a definition: all trans-Neptunian dwarf planets are plutoids.[28] The authority of that initial announcement has not been universally recognized:
48
+
49
+ ...in part because of an email miscommunication, the WG-PSN [Working Group for Planetary System Nomenclature] was not involved in choosing the word plutoid. ... In fact, a vote taken by the WG-PSN subsequent to the Executive Committee meeting has rejected the use of that specific term..."[42]
50
+
51
+ The category of 'plutoid' captured an earlier distinction between the 'terrestrial dwarf' Ceres and the 'ice dwarfs' of the outer Solar system,[43] part of a conception of a threefold division of the Solar System into inner terrestrial planets, central gas giants and outer ice dwarfs, of which Pluto was the principal member.[44] 'Ice dwarf' however also saw some use as an umbrella term for all trans-Neptunian minor planets, or for the ice asteroids of the outer Solar System; one attempted definition was that an ice dwarf "is larger than the nucleus of a normal comet and icier than a typical asteroid."[45]
52
+
53
+ Before the Dawn mission, Ceres was sometimes called a 'terrestrial dwarf' to distinguish it from the 'ice dwarfs' Pluto and Eris. However, since Dawn it has been recognized that Ceres is an icy body more similar to the icy moons of the outer planets and to TNOs such as Pluto than it is to the terrestrial planets, blurring the distinction,[46][47]
54
+ and Ceres has since been called an ice dwarf as well.[48]
55
+
56
+ [Figure caption: The planets and the largest known sub-planetary objects (purple), covering the orbital zones containing likely dwarf planets. All known possible dwarf planets have smaller discriminants than those shown for that zone.]
57
+
58
+ Alan Stern and Harold F. Levison introduced a parameter Λ (lambda), expressing the likelihood of an encounter resulting in a given deflection of orbit.[51] The value of this parameter in Stern's model is proportional to the square of the mass and inversely proportional to the period. This value can be used to estimate the capacity of a body to clear the neighbourhood of its orbit: a body with Λ > 1 will eventually clear it. A gap of five orders of magnitude in Λ was found between the smallest terrestrial planets and the largest asteroids and Kuiper belt objects.[49]
59
+
60
+ Using this parameter, Steven Soter and other astronomers argued for a distinction between planets and dwarf planets based on the inability of the latter to "clear the neighbourhood around their orbits": planets are able to remove smaller bodies near their orbits by collision, capture, or gravitational disturbance (or establish orbital resonances that prevent collisions), whereas dwarf planets lack the mass to do so.[51] Soter went on to propose a parameter he called the planetary discriminant, designated with the symbol µ (mu), that represents an experimental measure of the actual degree of cleanliness of the orbital zone (where µ is calculated by dividing the mass of the candidate body by the total mass of the other objects that share its orbital zone), where µ > 100 is deemed to be cleared.[49]
61
+
62
+ Jean-Luc Margot refined Stern and Levison's concept to produce a similar parameter Π (Pi).[52] It is based on theory, avoiding the empirical data used by Λ. Π > 1 indicates a planet, and there is again a gap of several orders of magnitude between planets and dwarf planets.
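Soter's discriminant is simple enough to illustrate directly. The sketch below is not an authoritative calculation: the function names are invented for illustration, and the neighbourhood masses are rough order-of-magnitude estimates chosen only to show that µ falls far above the µ > 100 threshold for Earth and far below it for Pluto:

```python
def planetary_discriminant(body_mass_kg: float, neighbourhood_mass_kg: float) -> float:
    """Soter's mu: the candidate's mass divided by the total mass of the
    other objects sharing its orbital zone."""
    return body_mass_kg / neighbourhood_mass_kg

def has_cleared_neighbourhood(mu: float, threshold: float = 100.0) -> bool:
    # Soter's suggested dividing line: mu > 100 counts as a cleared orbit.
    return mu > threshold

# Rough, illustrative mass inventories (kg), not authoritative values:
mu_earth = planetary_discriminant(5.97e24, 3.5e18)   # Earth vs. nearby small bodies
mu_pluto = planetary_discriminant(1.3e22, 1.7e23)    # Pluto vs. the rest of its zone

print(mu_earth, has_cleared_neighbourhood(mu_earth))  # ~1.7e6, True
print(mu_pluto, has_cleared_neighbourhood(mu_pluto))  # ~0.08, False
```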
63
+
64
+ There are several other schemes that try to differentiate between planets and dwarf planets,[8] but the 2006 definition uses this concept.[2]
65
+
66
+ Sufficient internal pressure, caused by the body's gravitation, will turn a body plastic, and sufficient plasticity will allow high elevations to sink and hollows to fill in, a process known as gravitational relaxation. Bodies smaller than a few kilometers are dominated by non-gravitational forces and tend to have an irregular shape and may be rubble piles. Larger objects, where gravitation is significant but not dominant, are "potato" shaped; the more massive the body is, the higher its internal pressure, the more solid it is and the more rounded its shape, until the pressure is sufficient to overcome its internal compressive strength and it achieves hydrostatic equilibrium. At this point a body is as round as it is possible to be, given its rotation and tidal effects, and is an ellipsoid in shape. This is the defining limit of a dwarf planet.[53]
67
+
68
+ When an object is in hydrostatic equilibrium, a global layer of liquid covering its surface would form a liquid surface of the same shape as the body, apart from small-scale surface features such as craters and fissures. If the body does not rotate, it will be a sphere, but the faster it rotates, the more oblate or even scalene it becomes. If such a rotating body were to be heated until it melted, its overall shape would not change when liquid. The extreme example of a body that may be scalene due to rapid rotation is Haumea, which is twice as long along its major axis as it is at the poles. If the body has a massive nearby companion, then tidal forces cause its rotation to gradually slow until it is tidally locked, such that it always presents the same face to its companion. An extreme example of this is the Pluto–Charon system, where both bodies are tidally locked to each other. Tidally locked bodies are also scalene, though sometimes only slightly so. Earth's Moon is also tidally locked, as are all rounded satellites of the gas giants.
69
+
70
+ The upper and lower size and mass limits of dwarf planets have not been specified by the IAU. There is no defined upper limit, and an object larger or more massive than Mercury that has not "cleared the neighbourhood around its orbit" would be classified as a dwarf planet.[54] The lower limit is determined by the requirements of achieving a hydrostatic equilibrium shape, but the size or mass at which an object attains this shape depends on its composition and thermal history. The original draft of the 2006 IAU resolution redefined hydrostatic equilibrium shape as applying "to objects with mass above 5×1020 kg and diameter greater than 800 km",[35] but this was not retained in the final draft.[2]
71
+
72
+ The number of dwarf planets in the Solar system is not known. The three objects under consideration during the debates leading up to the 2006 IAU acceptance of the category of dwarf planet – Ceres, Pluto and Eris – are generally accepted as dwarf planets, including by those astronomers who continue to classify dwarf planets as planets. In 2015, Ceres and Pluto were determined to have shapes consistent with hydrostatic equilibrium (and thus with being dwarf planets) by the Dawn and New Horizons missions, respectively, though there is still some question about Ceres. Eris is assumed to be a dwarf planet because it is more massive than Pluto.
73
+
74
+ In order of discovery, these three bodies are:
75
+
76
+ Due to the 2008 decision to assign the naming of Haumea and Makemake to the dwarf-planet naming committee and their announcement as dwarf planets in IAU press releases, these two bodies are also generally accepted as dwarf planets:
77
+
78
+ Four additional bodies meet the criteria of Brown, Tancredi et al. and Grundy et al. for candidate objects:
79
+
80
+ Additional bodies have been proposed, such as Salacia and 2002 MS4 by Brown, or Varuna and Ixion by Tancredi et al. Most of the larger bodies have moons, which enables a determination of their masses and thus their densities, which inform estimates as to whether they could be dwarf planets. The largest TNOs that are not known to have moons are Sedna, 2002 MS4 and 2002 AW197.
81
+
82
+ At the time Makemake and Haumea were named, it was thought that trans-Neptunian objects (TNOs) with icy cores would require a diameter of only perhaps 400 km (250 mi)—about 3% of that of Earth—to relax into gravitational equilibrium.[56] Researchers thought that the number of such bodies could prove to be around 200 in the Kuiper belt, with thousands more beyond.[56][57][58]
83
+ This was one of the reasons (keeping the roster of 'planets' to a reasonable number) that Pluto was reclassified in the first place.
84
+ However, research since then has cast doubt on the idea that bodies that small could have achieved or maintained equilibrium under common conditions.
85
+
86
+ Individual astronomers have recognized a number of such objects as dwarf planets or as highly likely to prove to be dwarf planets. In 2008, Tancredi et al. advised the IAU to officially accept Orcus, Sedna and Quaoar as dwarf planets, though the IAU did not address the issue then and has not since. In addition, Tancredi considered the five TNOs Varuna, Ixion, 2003 AZ84, 2004 GV9, and 2002 AW197 to most likely be dwarf planets as well.[59] In 2012, Stern stated that there are more than a dozen known dwarf planets, though he did not specify which they were.[58]
87
+ Since 2011, Brown has maintained a list of hundreds of candidate objects, ranging from "nearly certain" to "possible" dwarf planets, based solely on estimated size.[60]
88
+ As of 13 September 2019, Brown's list identifies ten trans-Neptunian objects with diameters greater than 900 km (the four named by the IAU plus Gonggong, Quaoar, Sedna, Orcus, 2002 MS4 and Salacia) as "near certain" to be dwarf planets, and another 16, with diameters greater than 600 km, as "highly likely".[61] Notably, Gonggong may have a larger diameter (1230±50 km) than Pluto's largest moon Charon (1212 km). Pinilla-Alonso et al. (2019) propose that the surface compositions of 40 bodies possibly larger than 450 km in diameter be compared with the planned James Webb Space Telescope.[32]
89
+
90
+ However, in 2019 Grundy et al. proposed that dark, low-density bodies smaller than about 900–1000 km in diameter, such as Salacia and Varda, never fully collapsed into solid planetary bodies and retain internal porosity from their formation (in which case they could not be dwarf planets), while accepting that brighter (albedo > ≈0.2)[62] or denser (> ≈1.4 g/cc) Orcus and Quaoar probably were fully solid.[63]
91
+
92
+ Observations in 2017–2019 led researchers to suggest that the large icy asteroids 10 Hygiea and 704 Interamnia may be transitional between dwarf planets and smaller objects.[64][65]
93
+
94
+ The following trans-Neptunian objects are agreed by Brown, Tancredi et al. and Grundy et al. to be likely dwarf planets. Charon, a moon of Pluto that was proposed as a dwarf planet by the IAU in 2006, is included for comparison. Those objects that have absolute magnitudes brighter than +1, and so meet the criteria for the dwarf-planet naming committee of the IAU, are highlighted, as is Ceres, which has been accepted as a dwarf planet by the IAU since they first debated the concept, though it has not yet been demonstrated to meet the definition.
95
+
96
+ On March 6, 2015, the Dawn spacecraft began to orbit Ceres, becoming the first spacecraft to orbit a dwarf planet.[67] On July 14, 2015, the New Horizons space probe flew by Pluto and its five moons.
97
+ Ceres displays such planetary-geologic features as surface salt deposits and cryovolcanoes, while Pluto has water-ice mountains drifting in nitrogen-ice glaciers, as well as, of course, an atmosphere.
98
+ For both bodies, there is at least the possibility of a subsurface ocean or brine layer.
99
+
100
+ Dawn has also orbited the former dwarf planet Vesta. Phoebe has been explored by Cassini (most recently) and Voyager 2, which also explored Neptune's moon Triton. These three bodies are thought to be former dwarf planets and therefore their exploration helps in the study of the evolution of dwarf planets.
101
+
102
+ In the immediate aftermath of the IAU definition of dwarf planet, some scientists expressed their disagreement with the IAU resolution.[8] Campaigns included car bumper stickers and T-shirts.[68] Mike Brown (the discoverer of Eris) agrees with the reduction of the number of planets to eight.[69]
103
+
104
+ NASA has announced that it will use the new guidelines established by the IAU.[70] Alan Stern, the director of NASA's mission to Pluto, rejects the current IAU definition of planet, both in terms of defining dwarf planets as something other than a type of planet, and in using orbital characteristics (rather than intrinsic characteristics) of objects to define them as dwarf planets.[71] Thus, in 2011, he still referred to Pluto as a planet,[72] and accepted other likely dwarf planets such as Ceres and Eris, as well as the larger moons, as additional planets.[73] Several years before the IAU definition, he used orbital characteristics to separate "überplanets" (the dominant eight) from "unterplanets" (the dwarf planets), considering both types "planets".[51]
105
+
106
+ A number of bodies physically resemble dwarf planets. These include former dwarf planets, which may still have an equilibrium shape; planetary-mass moons, which meet the physical but not the orbital definition for dwarf planets; and Charon in the Pluto–Charon system, which is arguably a binary dwarf planet. The categories may overlap: Triton, for example, is both a former dwarf planet and a planetary-mass moon.
107
+
108
+ Vesta, the next-most-massive body in the asteroid belt after Ceres, was once in hydrostatic equilibrium and is roughly spherical, deviating mainly because of massive impacts that formed the Rheasilvia and Veneneia craters after it solidified.[74]
109
+ Its dimensions are not consistent with it currently being in hydrostatic equilibrium.[75][76]
110
+ Triton is more massive than Eris or Pluto, has an equilibrium shape, and is thought to be a captured dwarf planet (likely a member of a binary system), but no longer directly orbits the Sun.[77]
111
+ Phoebe is a captured centaur that, like Vesta, is no longer in hydrostatic equilibrium, but is thought to have been so early in its history due to radiogenic heating.[78]
112
+
113
+ Evidence from 2019 suggests that Theia, the former planet that collided with Earth in the giant-impact hypothesis, may have originated in the outer Solar System rather than in the inner Solar System and that Earth's water originated on Theia, thus implying that Theia may have been a former dwarf planet from the Kuiper Belt.[79]
114
+
115
+ Nineteen moons have an equilibrium shape from having relaxed under their own gravity at some point in their history, though some have since frozen solid and are no longer in equilibrium. Seven are more massive than either Eris or Pluto. These moons are not physically distinct from the dwarf planets, but do not fit the IAU definition because they do not directly orbit the Sun. (Indeed, Neptune's moon Triton is a captured dwarf planet, and Ceres formed in the same region of the Solar System as the moons of Jupiter and Saturn.) Alan Stern calls planetary-mass moons "satellite planets", one of three categories of planet, together with dwarf planets and classical planets.[73] The term planemo ("planetary-mass object") also covers all three populations.[80]
116
+
117
+ There has been some debate as to whether the Pluto–Charon system should be considered a double dwarf planet.
118
+ In a draft resolution for the IAU definition of planet, both Pluto and Charon were considered planets in a binary system.[note 1][35] The IAU currently states that Charon is not considered to be a dwarf planet but rather is a satellite of Pluto, although the idea that Charon might qualify as a dwarf planet in its own right may be considered at a later date.[81] However, it is no longer clear that Charon is in hydrostatic equilibrium. Further, the location of the barycenter depends not only on the relative masses of the bodies, but also on the distance between them; the barycenter of the Sun–Jupiter orbit, for example, lies outside the Sun, but they are not considered a binary object.
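The barycentre argument can be checked with the two-body relation r₁ = d·m₂/(m₁ + m₂), the distance of the barycentre from the primary's centre. A minimal sketch using approximate, rounded reference masses, separations and radii (assumed here for illustration only):

```python
def barycentre_offset_km(primary_mass_kg: float, secondary_mass_kg: float,
                         separation_km: float) -> float:
    """Distance of the two-body barycentre from the primary's centre:
    r1 = d * m2 / (m1 + m2)."""
    return separation_km * secondary_mass_kg / (primary_mass_kg + secondary_mass_kg)

# Approximate reference values (kg, km).
sun_jupiter = barycentre_offset_km(1.989e30, 1.898e27, 7.785e8)   # ~7.4e5 km
pluto_charon = barycentre_offset_km(1.303e22, 1.586e21, 1.96e4)   # ~2100 km

print(f"Sun–Jupiter barycentre:  {sun_jupiter:.3e} km (solar radius ~6.96e5 km)")
print(f"Pluto–Charon barycentre: {pluto_charon:.0f} km (Pluto radius ~1188 km)")
# In both cases the barycentre lies just outside the primary body, yet only
# Pluto–Charon is discussed as a possible binary (dwarf) planet.
```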
119
+
120
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
121
+
en/4653.html.txt ADDED
@@ -0,0 +1,193 @@
1
+
2
+
3
+
4
+
5
+ Earth is the third planet from the Sun and the only astronomical object known to harbor life. According to radiometric dating and other evidence, Earth formed over 4.5 billion years ago. Earth's gravity interacts with other objects in space, especially the Sun and the Moon, which is Earth's only natural satellite. Earth orbits the Sun in 365.256 solar days, a period known as an Earth sidereal year. During this time, Earth rotates about its axis 366.256 times; that is, a sidereal year has 366.256 sidereal days.[n 6]
6
+
7
+ Earth's axis of rotation is tilted with respect to its orbital plane, producing seasons on Earth. The gravitational interaction between Earth and the Moon causes tides, stabilizes Earth's orientation on its axis, and gradually slows its rotation. Earth is the densest planet in the Solar System and the largest and most massive of the four rocky planets.
8
+
9
+ Earth's outer layer (lithosphere) is divided into several rigid tectonic plates that migrate across the surface over many millions of years. About 29% of Earth's surface is land consisting of continents and islands. The remaining 71% is covered with water, mostly by oceans but also lakes, rivers and other fresh water, which all together constitute the hydrosphere. The majority of Earth's polar regions are covered in ice, including the Antarctic ice sheet and the sea ice of the Arctic ice pack. Earth's interior remains active with a solid iron inner core, a liquid outer core that generates Earth's magnetic field, and a convecting mantle that drives plate tectonics.
10
+
11
+ Within the first billion years of Earth's history, life appeared in the oceans and began to affect Earth's atmosphere and surface, leading to the proliferation of anaerobic and, later, aerobic organisms. Some geological evidence indicates that life may have arisen as early as 4.1 billion years ago. Since then, the combination of Earth's distance from the Sun, physical properties and geological history have allowed life to evolve and thrive. In the history of life on Earth, biodiversity has gone through long periods of expansion, occasionally punctuated by mass extinctions. Over 99% of all species that ever lived on Earth are extinct. Estimates of the number of species on Earth today vary widely; most species have not been described. Over 7.7 billion humans live on Earth and depend on its biosphere and natural resources for their survival.[23]
12
+
13
+ The modern English word Earth developed, via Middle English,[n 7] from an Old English noun most often spelled eorðe.[24] It has cognates in every Germanic language, and their ancestral root has been reconstructed as *erþō. In its earliest attestation, the word eorðe was already being used to translate the many senses of Latin terra and Greek γῆ gē: the ground,[n 8] its soil,[n 9] dry land,[n 10] the human world,[n 11] the surface of the world (including the sea),[n 12] and the globe itself.[n 13] As with Roman Terra/Tellūs and Greek Gaia, Earth may have been a personified goddess in Germanic paganism: late Norse mythology included Jörð ('Earth'), a giantess often given as the mother of Thor.[33]
14
+
15
+ Originally, earth was written in lowercase, and from early Middle English, its definite sense as "the globe" was expressed as the earth. By Early Modern English, many nouns were capitalized, and the earth became (and often remained) the Earth, particularly when referenced along with other heavenly bodies. More recently, the name is sometimes simply given as Earth, by analogy with the names of the other planets.[24] House styles now vary: Oxford spelling recognizes the lowercase form as the most common, with the capitalized form an acceptable variant. Another convention capitalizes "Earth" when appearing as a name (e.g. "Earth's atmosphere") but writes it in lowercase when preceded by the (e.g. "the atmosphere of the earth"). It almost always appears in lowercase in colloquial expressions such as "what on earth are you doing?"[34]
16
+
17
+ Occasionally, the name Terra /ˈtɛrə/ is used in scientific writing and especially in science fiction to distinguish our inhabited planet from others,[35] while in poetry Tellus /ˈtɛləs/ has been used to denote personification of the Earth.[36] The Greek poetic name Gaea (Gæa) /ˈdʒiːə/ is rare, though the alternative spelling Gaia has become common due to the Gaia hypothesis, in which case its pronunciation is /ˈɡaɪə/ rather than the more Classical /ˈɡeɪə/.[37]
18
+
19
+ There are a number of adjectives for the planet Earth. From Earth itself comes earthly. From Latin Terra come Terran /ˈtɛrən/,[38] Terrestrial /təˈrɛstriəl/,[39] and (via French) Terrene /təˈriːn/,[40] and from Latin Tellus come Tellurian /tɛˈlʊəriən/[41] and, more rarely, Telluric and Tellural. From Greek Gaia and Gaea comes Gaian and Gaean.
20
+
21
+ An inhabitant of the Earth is an Earthling, a Terran, a Terrestrial, a Tellurian or, rarely, an Earthian.
22
+
23
+ The oldest material found in the Solar System is dated to 4.5672±0.0006 billion years ago (BYA).[42] By 4.54±0.04 BYA[43] the primordial Earth had formed. The bodies in the Solar System formed and evolved with the Sun. In theory, a solar nebula partitions a volume out of a molecular cloud by gravitational collapse, which begins to spin and flatten into a circumstellar disk, and then the planets grow out of that disk with the Sun. A nebula contains gas, ice grains, and dust (including primordial nuclides). According to nebular theory, planetesimals formed by accretion, with the primordial Earth taking 10–20 million years (Mys) to form.[44]
24
+
25
+ A subject of research is the formation of the Moon, some 4.53 BYA.[45] A leading hypothesis is that it was formed by accretion from material loosed from Earth after a Mars-sized object, named Theia, hit Earth.[46] In this view, the mass of Theia was approximately 10 percent of Earth;[47] it hit Earth with a glancing blow and some of its mass merged with Earth.[48] Between approximately 4.1 and 3.8 BYA, numerous asteroid impacts during the Late Heavy Bombardment caused significant changes to the greater surface environment of the Moon and, by inference, to that of Earth.
26
+
27
+ Earth's atmosphere and oceans were formed by volcanic activity and outgassing.[49] Water vapor from these sources condensed into the oceans, augmented by water and ice from asteroids, protoplanets, and comets.[50] In this model, atmospheric "greenhouse gases" kept the oceans from freezing when the newly forming Sun had only 70% of its current luminosity.[51] By 3.5 BYA, Earth's magnetic field was established, which helped prevent the atmosphere from being stripped away by the solar wind.[52]
28
+
29
+ A crust formed when the molten outer layer of Earth cooled to form a solid. The two models[53] that explain land mass propose either a steady growth to the present-day forms[54] or, more likely, a rapid growth[55] early in Earth history[56] followed by a long-term steady continental area.[57][58][59] Continents formed by plate tectonics, a process ultimately driven by the continuous loss of heat from Earth's interior. Over the period of hundreds of millions of years, the supercontinents have assembled and broken apart. Roughly 750 million years ago (MYA), one of the earliest known supercontinents, Rodinia, began to break apart. The continents later recombined to form Pannotia 600–540 MYA, then finally Pangaea, which also broke apart 180 MYA.[60]
30
+
31
+ The present pattern of ice ages began about 40 MYA,[61] and then intensified during the Pleistocene about 3 MYA.[62] High-latitude regions have since undergone repeated cycles of glaciation and thaw, repeating about every 40,000–100,000 years. The last continental glaciation ended 10,000 years ago.[63]
32
+
33
+ Chemical reactions led to the first self-replicating molecules about four billion years ago. A half billion years later, the last common ancestor of all current life arose.[64] The evolution of photosynthesis allowed the Sun's energy to be harvested directly by life forms. The resultant molecular oxygen (O2) accumulated in the atmosphere and due to interaction with ultraviolet solar radiation, formed a protective ozone layer (O3) in the upper atmosphere.[65] The incorporation of smaller cells within larger ones resulted in the development of complex cells called eukaryotes.[66] True multicellular organisms formed as cells within colonies became increasingly specialized. Aided by the absorption of harmful ultraviolet radiation by the ozone layer, life colonized Earth's surface.[67] Among the earliest fossil evidence for life is microbial mat fossils found in 3.48 billion-year-old sandstone in Western Australia,[68] biogenic graphite found in 3.7 billion-year-old metasedimentary rocks in Western Greenland,[69] and remains of biotic material found in 4.1 billion-year-old rocks in Western Australia.[70][71] The earliest direct evidence of life on Earth is contained in 3.45 billion-year-old Australian rocks showing fossils of microorganisms.[72][73]
34
+
35
+ During the Neoproterozoic, 750 to 580 MYA, much of Earth might have been covered in ice. This hypothesis has been termed "Snowball Earth", and it is of particular interest because it preceded the Cambrian explosion, when multicellular life forms significantly increased in complexity.[74] Following the Cambrian explosion, 535 MYA, there have been five mass extinctions.[75] The most recent such event was 66 MYA, when an asteroid impact triggered the extinction of the non-avian dinosaurs and other large reptiles, but spared some small animals such as mammals, which at the time resembled shrews. Mammalian life has diversified over the past 66 Mys, and several million years ago an African ape-like animal such as Orrorin tugenensis gained the ability to stand upright.[76] This facilitated tool use and encouraged communication that provided the nutrition and stimulation needed for a larger brain, which led to the evolution of humans. The development of agriculture, and then civilization, led to humans having an influence on Earth and the nature and quantity of other life forms that continues to this day.[77]
36
+
37
+ Earth's expected long-term future is tied to that of the Sun. Over the next 1.1 billion years, solar luminosity will increase by 10%, and over the next 3.5 billion years by 40%.[78] Earth's increasing surface temperature will accelerate the inorganic carbon cycle, reducing CO2 concentration to levels lethally low for plants (10 ppm for C4 photosynthesis) in approximately 100–900 million years.[79][80] The lack of vegetation will result in the loss of oxygen in the atmosphere, making animal life impossible.[81] About a billion years from now, all surface water will have disappeared[82] and the mean global temperature will reach 70 °C (158 °F).[81] Earth is expected to be habitable until the end of photosynthesis about 500 million years from now,[79] but if nitrogen is removed from the atmosphere, life may continue until a runaway greenhouse effect occurs 2.3 billion years from now.[80] Anthropogenic emissions are "probably insufficient" to cause a runaway greenhouse at current solar luminosity.[83] Even if the Sun were eternal and stable, 27% of the water in the modern oceans will descend to the mantle in one billion years, due to reduced steam venting from mid-ocean ridges.[84]
38
+
39
+ The Sun will evolve to become a red giant in about 5 billion years. Models predict that the Sun will expand to roughly 1 AU (150 million km; 93 million mi), about 250 times its present radius.[78][85] Earth's fate is less clear. As a red giant, the Sun will lose roughly 30% of its mass, so, without tidal effects, Earth will move to an orbit 1.7 AU (250 million km; 160 million mi) from the Sun when the star reaches its maximum radius. Most, if not all, remaining life will be destroyed by the Sun's increased luminosity (peaking at about 5,000 times its present level).[78] A 2008 simulation indicates that Earth's orbit will eventually decay due to tidal effects and drag, causing it to enter the Sun's atmosphere and be vaporized.[85]
40
+
41
+ The shape of Earth is nearly spherical. There is a small flattening at the poles and bulging around the equator due to Earth's rotation.[89] To second order, Earth is approximately an oblate spheroid, whose equatorial diameter is 43 kilometres (27 mi) larger than the pole-to-pole diameter,[90] although the variation is less than 1% of the average radius of the Earth.
42
+
43
+ The point on the surface farthest from Earth's center of mass is the summit of the equatorial Chimborazo volcano in Ecuador (6,384.4 km or 3,967.1 mi).[91][92][93][94] The average diameter of the reference spheroid is 12,742 kilometres (7,918 mi). Local topography deviates from this idealized spheroid, although on a global scale these deviations are small compared to Earth's radius: the maximum deviation of only 0.17% is at the Mariana Trench (10,911 metres or 35,797 feet below local sea level), whereas Mount Everest (8,848 metres or 29,029 feet above local sea level) represents a deviation of 0.14%.[n 14]
44
+
45
+ In geodesy, the exact shape that Earth's oceans would adopt in the absence of land and perturbations such as tides and winds is called the geoid. More precisely, the geoid is the surface of gravitational equipotential at mean sea level.
46
+
47
+ Earth's mass is approximately 5.97×1024 kg (5,970 Yg). It is composed mostly of iron (32.1%), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulphur (2.9%), nickel (1.8%), calcium (1.5%), and aluminum (1.4%), with the remaining 1.2% consisting of trace amounts of other elements. Due to mass segregation, the core region is estimated to be primarily composed of iron (88.8%), with smaller amounts of nickel (5.8%), sulphur (4.5%), and less than 1% trace elements.[98]
48
+
49
+ The most common rock constituents of the crust are nearly all oxides: chlorine, sulphur, and fluorine are the important exceptions to this and their total amount in any rock is usually much less than 1%. Over 99% of the crust is composed of 11 oxides, principally silica, alumina, iron oxides, lime, magnesia, potash and soda.[99][98][100]
50
+
51
+ Earth's interior, like that of the other terrestrial planets, is divided into layers by their chemical or physical (rheological) properties. The outer layer is a chemically distinct silicate solid crust, which is underlain by a highly viscous solid mantle. The crust is separated from the mantle by the Mohorovičić discontinuity. The thickness of the crust varies from about 6 kilometres (3.7 mi) under the oceans to 30–50 km (19–31 mi) for the continents. The crust and the cold, rigid, top of the upper mantle are collectively known as the lithosphere, and it is of the lithosphere that the tectonic plates are composed. Beneath the lithosphere is the asthenosphere, a relatively low-viscosity layer on which the lithosphere rides. Important changes in crystal structure within the mantle occur at 410 and 660 km (250 and 410 mi) below the surface, spanning a transition zone that separates the upper and lower mantle. Beneath the mantle, an extremely low viscosity liquid outer core lies above a solid inner core.[101] Earth's inner core might rotate at a slightly higher angular velocity than the remainder of the planet, advancing by 0.1–0.5° per year.[102] The radius of the inner core is about one fifth of that of Earth.
52
+
53
+ Earth's internal heat comes from a combination of residual heat from planetary accretion (about 20%) and heat produced through radioactive decay (80%).[105] The major heat-producing isotopes within Earth are potassium-40, uranium-238, and thorium-232.[106] At the center, the temperature may be up to 6,000 °C (10,830 °F),[107] and the pressure could reach 360 GPa (52 million psi).[108] Because much of the heat is provided by radioactive decay, scientists postulate that early in Earth's history, before isotopes with short half-lives were depleted, Earth's heat production was much higher. At approximately 3 Gyr, twice the present-day heat would have been produced, increasing the rates of mantle convection and plate tectonics, and allowing the production of uncommon igneous rocks such as komatiites that are rarely formed today.[105][109]
54
+
55
+ The mean heat loss from Earth is 87 mW m−2, for a global heat loss of 4.42×1013 W.[111] A portion of the core's thermal energy is transported toward the crust by mantle plumes, a form of convection consisting of upwellings of higher-temperature rock. These plumes can produce hotspots and flood basalts.[112] More of the heat in Earth is lost through plate tectonics, by mantle upwelling associated with mid-ocean ridges. The final major mode of heat loss is through conduction through the lithosphere, the majority of which occurs under the oceans because the crust there is much thinner than that of the continents.[113]
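As a rough consistency check, the quoted global heat loss follows from multiplying the mean heat flux by Earth's total surface area of about 510 million km2 (a figure quoted later in this article); a minimal sketch of that arithmetic, with the surface area treated as an assumed input here, is below.

```python
# Rough consistency check: mean surface heat flux times total surface area
# should reproduce the quoted global heat loss of about 4.42e13 W.
mean_heat_flux = 87e-3        # W/m^2 (87 mW/m^2, as quoted above)
surface_area = 510e6 * 1e6    # m^2 (~510 million km^2, taken from elsewhere in the article)

global_heat_loss = mean_heat_flux * surface_area
print(f"global heat loss ~ {global_heat_loss:.2e} W")  # ~4.4e13 W
```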
56
+
57
+ Earth's mechanically rigid outer layer, the lithosphere, is divided into tectonic plates. These plates are rigid segments that move relative to each other at one of three types of boundaries: at convergent boundaries, two plates come together; at divergent boundaries, two plates are pulled apart; and at transform boundaries, two plates slide past one another laterally. Along these plate boundaries, earthquakes, volcanic activity, mountain-building, and oceanic trench formation can occur.[115] The tectonic plates ride on top of the asthenosphere, the solid but less-viscous part of the upper mantle that can flow and move along with the plates.[116]
58
+
59
+ As the tectonic plates migrate, oceanic crust is subducted under the leading edges of the plates at convergent boundaries. At the same time, the upwelling of mantle material at divergent boundaries creates mid-ocean ridges. The combination of these processes recycles the oceanic crust back into the mantle. Due to this recycling, most of the ocean floor is less than 100 Ma old. The oldest oceanic crust is located in the Western Pacific and is estimated to be 200 Ma old.[117][118] By comparison, the oldest dated continental crust is 4,030 Ma.[119]
60
+
61
+ The seven major plates are the Pacific, North American, Eurasian, African, Antarctic, Indo-Australian, and South American. Other notable plates include the Arabian Plate, the Caribbean Plate, the Nazca Plate off the west coast of South America and the Scotia Plate in the southern Atlantic Ocean. The Australian Plate fused with the Indian Plate between 50 and 55 Mya. The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of 75 mm/a (3.0 in/year)[120] and the Pacific Plate moving 52–69 mm/a (2.0–2.7 in/year). At the other extreme, the slowest-moving plate is the Eurasian Plate, progressing at a typical rate of 21 mm/a (0.83 in/year).[121]
62
+
63
+ The total surface area of Earth is about 510 million km2 (197 million sq mi).[12] Of this, 70.8%,[12] or 361.13 million km2 (139.43 million sq mi), is below sea level and covered by ocean water.[122] Below the ocean's surface are much of the continental shelf, mountains, volcanoes,[90] oceanic trenches, submarine canyons, oceanic plateaus, abyssal plains, and a globe-spanning mid-ocean ridge system. The remaining 29.2%, or 148.94 million km2 (57.51 million sq mi), not covered by water has terrain that varies greatly from place to place and consists of mountains, deserts, plains, plateaus, and other landforms. Tectonics and erosion, volcanic eruptions, flooding, weathering, glaciation, the growth of coral reefs, and meteorite impacts are among the processes that constantly reshape Earth's surface over geological time.[123][124]
64
+
65
+ The continental crust consists of lower density material such as the igneous rocks granite and andesite. Less common is basalt, a denser volcanic rock that is the primary constituent of the ocean floors.[125] Sedimentary rock is formed from the accumulation of sediment that becomes buried and compacted together. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form about 5% of the crust.[126] The third form of rock material found on Earth is metamorphic rock, which is created from the transformation of pre-existing rock types through high pressures, high temperatures, or both. The most abundant silicate minerals on Earth's surface include quartz, feldspars, amphibole, mica, pyroxene and olivine.[127] Common carbonate minerals include calcite (found in limestone) and dolomite.[128]
66
+
67
+ The elevation of the land surface varies from the low point of −418 m (−1,371 ft) at the Dead Sea, to a maximum altitude of 8,848 m (29,029 ft) at the top of Mount Everest. The mean height of land above sea level is about 797 m (2,615 ft).[129]
68
+
69
+ The pedosphere is the outermost layer of Earth's continental surface and is composed of soil and subject to soil formation processes. The total arable land is 10.9% of the land surface, with 1.3% being permanent cropland.[130][131] Close to 40% of Earth's land surface is used for agriculture, or an estimated 16.7 million km2 (6.4 million sq mi) of cropland and 33.5 million km2 (12.9 million sq mi) of pastureland.[132]
70
+
71
+ The abundance of water on Earth's surface is a unique feature that distinguishes the "Blue Planet" from other planets in the Solar System. Earth's hydrosphere consists chiefly of the oceans, but technically includes all water surfaces in the world, including inland seas, lakes, rivers, and underground waters down to a depth of 2,000 m (6,600 ft). The deepest underwater location is Challenger Deep of the Mariana Trench in the Pacific Ocean with a depth of 10,911.4 m (35,799 ft).[n 18][133]
72
+
73
+ The mass of the oceans is approximately 1.35×1018 metric tons or about 1/4400 of Earth's total mass. The oceans cover an area of 361.8 million km2 (139.7 million sq mi) with a mean depth of 3,682 m (12,080 ft), resulting in an estimated volume of 1.332 billion km3 (320 million cu mi).[134] If all of Earth's crustal surface were at the same elevation as a smooth sphere, the depth of the resulting world ocean would be 2.7 to 2.8 km (1.68 to 1.74 mi).[135][136]
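The quoted ocean figures are mutually consistent, as the short sketch below illustrates; the seawater density of roughly 1,025 kg/m^3 is an assumed typical value, not a figure from the article.

```python
# Consistency check of the quoted ocean area, mean depth, volume, and mass.
area_km2 = 361.8e6          # ocean area, km^2 (quoted)
mean_depth_km = 3.682       # mean depth, km (quoted)
earth_mass_kg = 5.97e24     # Earth's mass, kg (quoted earlier in the article)
seawater_density = 1025.0   # kg/m^3 (assumed typical value, not from the article)

volume_km3 = area_km2 * mean_depth_km                # ~1.33e9 km^3, as quoted
ocean_mass_kg = volume_km3 * 1e9 * seawater_density  # 1 km^3 = 1e9 m^3
print(f"volume   ~ {volume_km3:.3e} km^3")
print(f"mass     ~ {ocean_mass_kg:.2e} kg (quoted: ~1.35e21 kg)")
print(f"fraction ~ 1/{earth_mass_kg / ocean_mass_kg:.0f} of Earth's mass")  # ~1/4400
```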
74
+
75
+ About 97.5% of the water is saline; the remaining 2.5% is fresh water. Most fresh water, about 68.7%, is present as ice in ice caps and glaciers.[137]
76
+
77
+ The average salinity of Earth's oceans is about 35 grams of salt per kilogram of sea water (3.5% salt).[138] Most of this salt was released from volcanic activity or extracted from cool igneous rocks.[139] The oceans are also a reservoir of dissolved atmospheric gases, which are essential for the survival of many aquatic life forms.[140] Sea water has an important influence on the world's climate, with the oceans acting as a large heat reservoir.[141] Shifts in the oceanic temperature distribution can cause significant weather shifts, such as the El Niño–Southern Oscillation.[142]
78
+
79
+ The atmospheric pressure at Earth's sea level averages 101.325 kPa (14.696 psi),[143] with a scale height of about 8.5 km (5.3 mi).[3] A dry atmosphere is composed of 78.084% nitrogen, 20.946% oxygen, 0.934% argon, and trace amounts of carbon dioxide and other gaseous molecules.[143] Water vapor content varies between 0.01% and 4%[143] but averages about 1%.[3] The height of the troposphere varies with latitude, ranging between 8 km (5 mi) at the poles to 17 km (11 mi) at the equator, with some variation resulting from weather and seasonal factors.[144]
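The quoted scale height of about 8.5 km means that, in an idealized isothermal atmosphere, pressure falls by a factor of e for every 8.5 km of altitude; a minimal sketch of that standard relation, P(h) ≈ P0·exp(-h/H), using the sea-level pressure and scale height quoted above, follows.

```python
import math

# Idealized isothermal-atmosphere pressure profile: P(h) = P0 * exp(-h / H).
# The exponential form is a standard approximation; P0 and H are the values quoted above.
P0 = 101.325   # kPa, mean sea-level pressure
H = 8.5        # km, scale height

for h_km in (0.0, 8.5, 17.0, 30.0):
    p = P0 * math.exp(-h_km / H)
    print(f"h = {h_km:5.1f} km  ->  P ~ {p:6.1f} kPa")
# At one scale height (8.5 km) the pressure has fallen to about P0/e ~ 37 kPa.
```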
80
+
81
+ Earth's biosphere has significantly altered its atmosphere. Oxygenic photosynthesis evolved 2.7 Gya, forming the primarily nitrogen–oxygen atmosphere of today.[65] This change enabled the proliferation of aerobic organisms and, indirectly, the formation of the ozone layer due to the subsequent conversion of atmospheric O2 into O3. The ozone layer blocks ultraviolet solar radiation, permitting life on land.[145] Other atmospheric functions important to life include transporting water vapor, providing useful gases, causing small meteors to burn up before they strike the surface, and moderating temperature.[146] This last phenomenon is known as the greenhouse effect: trace molecules within the atmosphere serve to capture thermal energy emitted from the ground, thereby raising the average temperature. Water vapor, carbon dioxide, methane, nitrous oxide, and ozone are the primary greenhouse gases in the atmosphere. Without this heat-retention effect, the average surface temperature would be −18 °C (0 °F), in contrast to the current +15 °C (59 °F),[147] and life on Earth probably would not exist in its current form.[148] In May 2017, glints of light, seen as twinkling from an orbiting satellite a million miles away, were found to be reflected light from ice crystals in the atmosphere.[149][150]
82
+
83
+ Earth's atmosphere has no definite boundary, slowly becoming thinner and fading into outer space. Three-quarters of the atmosphere's mass is contained within the first 11 km (6.8 mi) of the surface. This lowest layer is called the troposphere. Energy from the Sun heats this layer, and the surface below, causing expansion of the air. This lower-density air then rises and is replaced by cooler, higher-density air. The result is atmospheric circulation that drives the weather and climate through redistribution of thermal energy.[151]
84
+
85
+ The primary atmospheric circulation bands consist of the trade winds in the equatorial region below 30° latitude and the westerlies in the mid-latitudes between 30° and 60°.[152] Ocean currents are also important factors in determining climate, particularly the thermohaline circulation that distributes thermal energy from the equatorial oceans to the polar regions.[153]
86
+
87
+ Water vapor generated through surface evaporation is transported by circulatory patterns in the atmosphere. When atmospheric conditions permit an uplift of warm, humid air, this water condenses and falls to the surface as precipitation.[151] Most of the water is then transported to lower elevations by river systems and usually returned to the oceans or deposited into lakes. This water cycle is a vital mechanism for supporting life on land and is a primary factor in the erosion of surface features over geological periods. Precipitation patterns vary widely, ranging from several meters of water per year to less than a millimeter. Atmospheric circulation, topographic features, and temperature differences determine the average precipitation that falls in each region.[154]
88
+
89
+ The amount of solar energy reaching Earth's surface decreases with increasing latitude. At higher latitudes, the sunlight reaches the surface at lower angles, and it must pass through thicker columns of the atmosphere. As a result, the mean annual air temperature at sea level decreases by about 0.4 °C (0.7 °F) per degree of latitude from the equator.[155] Earth's surface can be subdivided into specific latitudinal belts of approximately homogeneous climate. Ranging from the equator to the polar regions, these are the tropical (or equatorial), subtropical, temperate and polar climates.[156]
90
+
91
+ This latitudinal rule has several anomalies:
92
+
93
+ The commonly used Köppen climate classification system has five broad groups (humid tropics, arid, humid middle latitudes, continental and cold polar), which are further divided into more specific subtypes.[152] The Köppen system rates regions of terrain based on observed temperature and precipitation.
94
+
95
+ The highest air temperature ever measured on Earth was 56.7 °C (134.1 °F) in Furnace Creek, California, in Death Valley, in 1913.[159] The lowest air temperature ever directly measured on Earth was −89.2 °C (−128.6 °F) at Vostok Station in 1983,[160] but satellites have used remote sensing to measure temperatures as low as −94.7 °C (−138.5 °F) in East Antarctica.[161] These temperature records are only measurements made with modern instruments from the 20th century onwards and likely do not reflect the full range of temperature on Earth.
96
+
97
+ Above the troposphere, the atmosphere is usually divided into the stratosphere, mesosphere, and thermosphere.[146] Each layer has a different lapse rate, defining the rate of change in temperature with height. Beyond these, the exosphere thins out into the magnetosphere, where the geomagnetic fields interact with the solar wind.[162] Within the stratosphere is the ozone layer, a component that partially shields the surface from ultraviolet light and thus is important for life on Earth. The Kármán line, defined as 100 km above Earth's surface, is a working definition for the boundary between the atmosphere and outer space.[163]
98
+
99
+ Thermal energy causes some of the molecules at the outer edge of the atmosphere to increase their velocity to the point where they can escape from Earth's gravity. This causes a slow but steady loss of the atmosphere into space. Because unfixed hydrogen has a low molecular mass, it can achieve escape velocity more readily, and it leaks into outer space at a greater rate than other gases.[164] The leakage of hydrogen into space contributes to the shifting of Earth's atmosphere and surface from an initially reducing state to its current oxidizing one. Photosynthesis provided a source of free oxygen, but the loss of reducing agents such as hydrogen is thought to have been a necessary precondition for the widespread accumulation of oxygen in the atmosphere.[165] Hence the ability of hydrogen to escape from the atmosphere may have influenced the nature of life that developed on Earth.[166] In the current, oxygen-rich atmosphere most hydrogen is converted into water before it has an opportunity to escape. Instead, most of the hydrogen loss comes from the destruction of methane in the upper atmosphere.[167]
100
+
101
+ The gravity of Earth is the acceleration that is imparted to objects due to the distribution of mass within Earth. Near Earth's surface, gravitational acceleration is approximately 9.8 m/s2 (32 ft/s2). Local differences in topography, geology, and deeper tectonic structure cause local and broad, regional differences in Earth's gravitational field, known as gravity anomalies.[168]
102
+
103
+ The main part of Earth's magnetic field is generated in the core, the site of a dynamo process that converts the kinetic energy of thermally and compositionally driven convection into electrical and magnetic field energy. The field extends outwards from the core, through the mantle, and up to Earth's surface, where it is, approximately, a dipole. The poles of the dipole are located close to Earth's geographic poles. At the equator of the magnetic field, the magnetic-field strength at the surface is 3.05×10−5 T, with a magnetic dipole moment of 7.79×1022 Am2 at epoch 2000, decreasing nearly 6% per century.[169] The convection movements in the core are chaotic; the magnetic poles drift and periodically change alignment. This causes secular variation of the main field and field reversals at irregular intervals averaging a few times every million years. The most recent reversal occurred approximately 700,000 years ago.[170][171]
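The quoted equatorial field strength and dipole moment are roughly consistent with the standard dipole relation B_eq = μ0·m/(4π·R^3); the check below uses Earth's mean radius of about 6,371 km, which is an assumed value not stated in this paragraph.

```python
# Check the quoted equatorial surface field against the dipole formula
#   B_eq = mu0 * m / (4 * pi * R^3)
mu0_over_4pi = 1e-7        # T*m/A (mu0 / (4*pi), exact in SI)
dipole_moment = 7.79e22    # A*m^2, quoted for epoch 2000
earth_radius = 6.371e6     # m (assumed mean radius; not given in this paragraph)

B_eq = mu0_over_4pi * dipole_moment / earth_radius**3
print(f"B_eq ~ {B_eq:.2e} T")  # ~3.0e-5 T, close to the quoted 3.05e-5 T
```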
104
+
105
+ The extent of Earth's magnetic field in space defines the magnetosphere. Ions and electrons of the solar wind are deflected by the magnetosphere; solar wind pressure compresses the dayside of the magnetosphere, to about 10 Earth radii, and extends the nightside magnetosphere into a long tail.[172] Because the velocity of the solar wind is greater than the speed at which waves propagate through the solar wind, a supersonic bow shock precedes the dayside magnetosphere within the solar wind.[173] Charged particles are contained within the magnetosphere; the plasmasphere is defined by low-energy particles that essentially follow magnetic field lines as Earth rotates;[174][175] the ring current is defined by medium-energy particles that drift relative to the geomagnetic field, but with paths that are still dominated by the magnetic field,[176] and the Van Allen radiation belts are formed by high-energy particles whose motion is essentially random, but otherwise contained by the magnetosphere.[172][177]
106
+
107
+ During magnetic storms and substorms, charged particles can be deflected from the outer magnetosphere and especially the magnetotail, directed along field lines into Earth's ionosphere, where atmospheric atoms can be excited and ionized, causing the aurora.[178]
108
+
109
+ Earth's rotation period relative to the Sun—its mean solar day—is 86,400 seconds of mean solar time (86,400.0025 SI seconds).[179] Because Earth's solar day is now slightly longer than it was during the 19th century due to tidal deceleration, each day is between 0 and 2 SI ms longer than 86,400 SI seconds.[180][181]
110
+
111
+ Earth's rotation period relative to the fixed stars, called its stellar day by the International Earth Rotation and Reference Systems Service (IERS), is 86,164.0989 seconds of mean solar time (UT1), or 23h 56m 4.0989s.[2][n 19] Earth's rotation period relative to the precessing or moving mean March equinox, misnamed its sidereal day, is 86,164.0905 seconds of mean solar time (UT1) (23h 56m 4.0905s).[2] Thus the sidereal day is shorter than the stellar day by about 8.4 ms.[182] The length of the mean solar day in SI seconds is available from the IERS for the periods 1623–2005[183] and 1962–2005.[184]
112
+
113
+ Apart from meteors within the atmosphere and low-orbiting satellites, the main apparent motion of celestial bodies in Earth's sky is to the west at a rate of 15°/h = 15'/min. For bodies near the celestial equator, this is equivalent to an apparent diameter of the Sun or the Moon every two minutes; from Earth's surface, the apparent sizes of the Sun and the Moon are approximately the same.[185][186]
114
+
115
+ Earth orbits the Sun at an average distance of about 150 million km (93 million mi) every 365.2564 mean solar days, or one sidereal year. This gives an apparent movement of the Sun eastward with respect to the stars at a rate of about 1°/day, which is one apparent Sun or Moon diameter every 12 hours. Due to this motion, on average it takes 24 hours—a solar day—for Earth to complete a full rotation about its axis so that the Sun returns to the meridian. The orbital speed of Earth averages about 29.78 km/s (107,200 km/h; 66,600 mph), which is fast enough to travel a distance equal to Earth's diameter, about 12,742 km (7,918 mi), in seven minutes, and the distance to the Moon, 384,000 km (239,000 mi), in about 3.5 hours.[3]
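The travel times quoted here follow directly from dividing the stated distances by the mean orbital speed; a minimal check using only figures from this paragraph is sketched below.

```python
# Verify the quoted travel times from the mean orbital speed.
orbital_speed = 29.78     # km/s (quoted mean orbital speed)
earth_diameter = 12_742   # km (quoted)
moon_distance = 384_000   # km (quoted)

print(f"one Earth diameter: ~{earth_diameter / orbital_speed / 60:.1f} min")   # ~7 min
print(f"Earth-Moon distance: ~{moon_distance / orbital_speed / 3600:.1f} h")   # ~3.6 h
```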
116
+
117
+ The Moon and Earth orbit a common barycenter every 27.32 days relative to the background stars. When combined with the Earth–Moon system's common orbit around the Sun, the period of the synodic month, from new moon to new moon, is 29.53 days. Viewed from the celestial north pole, the motion of Earth, the Moon, and their axial rotations are all counterclockwise. Viewed from a vantage point above the north poles of both the Sun and Earth, Earth orbits in a counterclockwise direction about the Sun. The orbital and axial planes are not precisely aligned: Earth's axis is tilted some 23.44 degrees from the perpendicular to the Earth–Sun plane (the ecliptic), and the Earth–Moon plane is tilted up to ±5.1 degrees against the Earth–Sun plane. Without this tilt, there would be an eclipse every two weeks, alternating between lunar eclipses and solar eclipses.[3][188]
118
+
119
+ The Hill sphere, or the sphere of gravitational influence, of Earth is about 1.5 million km (930,000 mi) in radius.[189][n 20] This is the maximum distance at which Earth's gravitational influence is stronger than the more distant Sun and planets. Objects must orbit Earth within this radius, or they can become unbound by the gravitational perturbation of the Sun.
120
+
121
+ Earth, along with the Solar System, is situated in the Milky Way and orbits about 28,000 light-years from its center. It is about 20 light-years above the galactic plane in the Orion Arm.[190]
122
+
123
+ Earth's rotational axis is tilted approximately 23.439281°[2] with respect to the axis of its orbital plane, and always points towards the celestial poles. Due to Earth's axial tilt, the amount of sunlight reaching any given point on the surface varies over the course of the year. This causes the seasonal change in climate, with summer in the Northern Hemisphere occurring when the Tropic of Cancer is facing the Sun, and winter taking place when the Tropic of Capricorn in the Southern Hemisphere faces the Sun. During the summer, the day lasts longer, and the Sun climbs higher in the sky. In winter, the climate becomes cooler and the days shorter. In northern temperate latitudes, the Sun rises north of true east during the summer solstice, and sets north of true west, reversing in the winter. The Sun rises south of true east in the summer for the southern temperate zone and sets south of true west.
124
+
125
+ Above the Arctic Circle, an extreme case is reached where there is no daylight at all for part of the year, up to six months at the North Pole itself, a polar night. In the Southern Hemisphere, the situation is exactly reversed, with the South Pole experiencing a midnight sun, a day lasting 24 hours, at the same time. Six months later, the situation swaps: the North Pole experiences a midnight sun while the South Pole has its polar night.
126
+
127
+ By astronomical convention, the four seasons can be determined by the solstices—the points in the orbit of maximum axial tilt toward or away from the Sun—and the equinoxes, when Earth's rotational axis is aligned with its orbital axis. In the Northern Hemisphere, winter solstice currently occurs around 21 December; summer solstice is near 21 June, spring equinox is around 20 March and autumnal equinox is about 22 or 23 September. In the Southern Hemisphere, the situation is reversed, with the summer and winter solstices exchanged and the spring and autumnal equinox dates swapped.[191]
128
+
129
+ The angle of Earth's axial tilt is relatively stable over long periods of time. Its axial tilt does undergo nutation, a slight, irregular motion with a main period of 18.6 years.[192] The orientation (rather than the angle) of Earth's axis also changes over time, precessing around in a complete circle over each 25,800-year cycle; this precession is the reason for the difference between a sidereal year and a tropical year. Both of these motions are caused by the varying attraction of the Sun and the Moon on Earth's equatorial bulge. The poles also migrate a few meters across Earth's surface. This polar motion has multiple, cyclical components, which collectively are termed quasiperiodic motion. In addition to an annual component to this motion, there is a 14-month cycle called the Chandler wobble. Earth's rotational velocity also varies in a phenomenon known as length-of-day variation.[193]
130
+
131
+ In modern times, Earth's perihelion occurs around 3 January, and its aphelion around 4 July. These dates change over time due to precession and other orbital factors, which follow cyclical patterns known as Milankovitch cycles. The changing Earth–Sun distance causes an increase of about 6.9%[n 21] in solar energy reaching Earth at perihelion relative to aphelion. Because the Southern Hemisphere is tilted toward the Sun at about the same time that Earth reaches the closest approach to the Sun, the Southern Hemisphere receives slightly more energy from the Sun than does the northern over the course of a year. This effect is much less significant than the total energy change due to the axial tilt, and most of the excess energy is absorbed by the higher proportion of water in the Southern Hemisphere.[194]
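The roughly 6.9% figure follows from the inverse-square dependence of insolation on the Sun-Earth distance; the sketch below uses perihelion and aphelion distances of about 147.1 and 152.1 million km, which are standard approximate values rather than figures given in this paragraph.

```python
# Insolation scales as 1/r^2, so the perihelion-to-aphelion ratio is (r_aph / r_peri)^2.
r_perihelion = 147.1e6   # km (assumed typical value, not quoted here)
r_aphelion = 152.1e6     # km (assumed typical value, not quoted here)

extra = (r_aphelion / r_perihelion) ** 2 - 1
print(f"~{extra * 100:.1f}% more solar energy at perihelion than at aphelion")  # ~6.9%
```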
132
+
133
+ A study from 2016 suggested that Planet Nine tilted all the planets of the Solar System, including Earth, by about six degrees.[195]
134
+
135
+ A planet that can sustain life is termed habitable, even if life did not originate there. Earth provides liquid water—an environment where complex organic molecules can assemble and interact, and sufficient energy to sustain metabolism.[196] The distance of Earth from the Sun, as well as its orbital eccentricity, rate of rotation, axial tilt, geological history, sustaining atmosphere, and magnetic field all contribute to the current climatic conditions at the surface.[197]
136
+
137
+ A planet's life forms inhabit ecosystems, whose total is sometimes said to form a "biosphere".[198] Earth's biosphere is thought to have begun evolving about 3.5 Gya.[65] The biosphere is divided into a number of biomes, inhabited by broadly similar plants and animals.[199] On land, biomes are separated primarily by differences in latitude, height above sea level and humidity. Terrestrial biomes lying within the Arctic or Antarctic Circles, at high altitudes or in extremely arid areas are relatively barren of plant and animal life; species diversity reaches a peak in humid lowlands at equatorial latitudes.[200]
138
+
139
+ In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.[201]
140
+
141
+ Earth has resources that have been exploited by humans.[203] Those termed non-renewable resources, such as fossil fuels, only renew over geological timescales.[204]
142
+
143
+ Large deposits of fossil fuels are obtained from Earth's crust, consisting of coal, petroleum, and natural gas.[205] These deposits are used by humans both for energy production and as feedstock for chemical production.[206] Mineral ore bodies have also been formed within the crust through a process of ore genesis, resulting from actions of magmatism, erosion, and plate tectonics.[207] These bodies form concentrated sources for many metals and other useful elements.
144
+
145
+ Earth's biosphere produces many useful biological products for humans, including food, wood, pharmaceuticals, oxygen, and the recycling of many organic wastes. The land-based ecosystem depends upon topsoil and fresh water, and the oceanic ecosystem depends upon dissolved nutrients washed down from the land.[208] In 1980, 50.53 million km2 (19.51 million sq mi) of Earth's land surface consisted of forest and woodlands, 67.88 million km2 (26.21 million sq mi) was grasslands and pasture, and 15.01 million km2 (5.80 million sq mi) was cultivated as croplands.[209] The estimated amount of irrigated land in 1993 was 2,481,250 km2 (958,020 sq mi).[13] Humans also live on the land by using building materials to construct shelters.
146
+
147
+ Large areas of Earth's surface are subject to extreme weather such as tropical cyclones, hurricanes, or typhoons that dominate life in those areas. From 1980 to 2000, these events caused an average of 11,800 human deaths per year.[210] Many places are subject to earthquakes, landslides, tsunamis, volcanic eruptions, tornadoes, sinkholes, blizzards, floods, droughts, wildfires, and other calamities and disasters.
148
+
149
+ Many localized areas are subject to human-made pollution of the air and water, acid rain and toxic substances, loss of vegetation (overgrazing, deforestation, desertification), loss of wildlife, species extinction, soil degradation, soil depletion and erosion.
150
+
151
+ There is a scientific consensus linking human activities to global warming due to industrial carbon dioxide emissions. This is predicted to produce changes such as the melting of glaciers and ice sheets, more extreme temperature ranges, significant changes in weather and a global rise in average sea levels.[211]
152
+
153
+ Cartography, the study and practice of map-making, and geography, the study of the lands, features, inhabitants and phenomena on Earth, have historically been the disciplines devoted to depicting Earth. Surveying, the determination of locations and distances, and to a lesser extent navigation, the determination of position and direction, have developed alongside cartography and geography, providing and suitably quantifying the requisite information.
154
+
155
+ Earth's human population reached approximately seven billion on 31 October 2011.[213] Projections indicate that the world's human population will reach 9.2 billion in 2050.[214] Most of the growth is expected to take place in developing nations. Human population density varies widely around the world, but a majority live in Asia. By 2020, 60% of the world's population is expected to be living in urban, rather than rural, areas.[215]
156
+
157
+ About 68% of the world's land mass is in the Northern Hemisphere.[216] Partly due to the predominance of land mass, 90% of humans live in the Northern Hemisphere.[217]
158
+
159
+ It is estimated that one-eighth of Earth's surface is suitable for humans to live on – three-quarters of Earth's surface is covered by oceans, leaving one-quarter as land. Half of that land area is desert (14%),[218] high mountains (27%),[219] or other unsuitable terrain. The northernmost permanent settlement in the world is Alert (82°28′N), on Ellesmere Island in Nunavut, Canada.[220] The southernmost is the Amundsen–Scott South Pole Station (90°S), in Antarctica, almost exactly at the South Pole.
160
+
161
+ States claim the planet's entire land surface, except for parts of Antarctica and a few other unclaimed areas. Earth has never had a planetwide government, but the United Nations is the leading worldwide intergovernmental organization.
162
+
163
+ The first human to orbit Earth was Yuri Gagarin on 12 April 1961.[221] In total, about 487 people have visited outer space and reached orbit as of 30 July 2010, and, of these, twelve have walked on the Moon.[222][223][224] Normally, the only humans in space are those on the International Space Station. The station's crew, made up of six people, is usually replaced every six months.[225] The farthest that humans have traveled from Earth is 400,171 km (248,655 mi), achieved during the Apollo 13 mission in 1970.[226]
164
+
165
+ The Moon is a relatively large, terrestrial, planet-like natural satellite, with a diameter about one-quarter of Earth's. It is the largest moon in the Solar System relative to the size of its planet, although Charon is larger relative to the dwarf planet Pluto. The natural satellites of other planets are also referred to as "moons", after Earth's.
166
+
167
+ The gravitational attraction between Earth and the Moon causes tides on Earth. The same effect on the Moon has led to its tidal locking: its rotation period is the same as the time it takes to orbit Earth. As a result, it always presents the same face to the planet. As the Moon orbits Earth, different parts of its face are illuminated by the Sun, leading to the lunar phases; the dark part of the face is separated from the light part by the solar terminator.
168
+
169
+ Due to their tidal interaction, the Moon recedes from Earth at the rate of approximately 38 mm/a (1.5 in/year). Over millions of years, these tiny modifications—and the lengthening of Earth's day by about 23 µs/yr—add up to significant changes.[227] During the Devonian period, for example, (approximately 410 Mya) there were 400 days in a year, with each day lasting 21.8 hours.[228]
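Because tidal braking transfers angular momentum from Earth's spin to the Moon's orbit without appreciably changing the length of the year, the two Devonian figures should multiply out to roughly the modern year length; a quick internal-consistency check of those approximate numbers is sketched below.

```python
# The length of the year (in hours) is essentially unchanged by tidal braking,
# so 400 Devonian days of about 21.8 hours should roughly equal the modern year.
devonian_year_hours = 400 * 21.8     # ~8,720 hours
modern_year_hours = 365.25 * 24      # ~8,766 hours

print(f"Devonian year ~ {devonian_year_hours:.0f} h, modern year ~ {modern_year_hours:.0f} h")
print(f"mismatch: {abs(devonian_year_hours - modern_year_hours) / modern_year_hours:.1%}")
# The ~0.5% mismatch reflects the fact that both quoted Devonian figures are approximate.
```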
170
+
171
+ The Moon may have dramatically affected the development of life by moderating the planet's climate. Paleontological evidence and computer simulations show that Earth's axial tilt is stabilized by tidal interactions with the Moon.[229] Some theorists think that without this stabilization against the torques applied by the Sun and planets to Earth's equatorial bulge, the rotational axis might be chaotically unstable, exhibiting chaotic changes over millions of years, as appears to be the case for Mars.[230]
172
+
173
+ Viewed from Earth, the Moon is just far enough away to have almost the same apparent-sized disk as the Sun. The angular sizes (or solid angles) of these two bodies match because, although the Sun's diameter is about 400 times as large as the Moon's, it is also 400 times more distant.[186] This allows total and annular solar eclipses to occur on Earth.
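The near-match of apparent sizes follows from the small-angle relation: angular diameter ≈ physical diameter / distance. The sketch below uses approximate diameters and distances for the Sun and Moon, which are standard values assumed for illustration rather than figures given in this paragraph.

```python
import math

# Small-angle approximation: angular diameter (radians) ~ diameter / distance.
# All four numbers are approximate standard values, assumed for illustration.
bodies = {
    "Sun":  (1.39e6, 149.6e6),   # diameter, distance in km
    "Moon": (3_474, 384_400),    # diameter, distance in km
}

for name, (diameter, distance) in bodies.items():
    print(f"{name}: ~{math.degrees(diameter / distance):.2f} deg")  # both ~0.5 deg
```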
174
+
175
+ The most widely accepted theory of the Moon's origin, the giant-impact hypothesis, states that it formed from the collision of a Mars-size protoplanet called Theia with the early Earth. This hypothesis explains (among other things) the Moon's relative lack of iron and volatile elements and the fact that its composition is nearly identical to that of Earth's crust.[48]
176
+
177
+ Earth has at least five co-orbital asteroids, including 3753 Cruithne and 2002 AA29.[231][232] A trojan asteroid companion, 2010 TK7, is librating around the leading Lagrange triangular point, L4, in Earth's orbit around the Sun.[233][234]
178
+
179
+ The tiny near-Earth asteroid 2006 RH120 makes close approaches to the Earth–Moon system roughly every twenty years. During these approaches, it can orbit Earth for brief periods of time.[235]
180
+
181
+ As of April 2018, there are 1,886 operational, human-made satellites orbiting Earth.[5] There are also inoperative satellites, including Vanguard 1, the oldest satellite currently in orbit, and over 16,000 pieces of tracked space debris.[n 3] Earth's largest artificial satellite is the International Space Station.
182
+
183
+ The standard astronomical symbol of Earth consists of a cross circumscribed by a circle, ⊕,[236] representing the four corners of the world.
184
+
185
+ Human cultures have developed many views of the planet.[237] Earth is sometimes personified as a deity. In many cultures it is a mother goddess that is also the primary fertility deity,[238] and by the mid-20th century the Gaia Principle described Earth's environments and life as a single self-regulating organism, leading to broad stabilization of the conditions of habitability.[239][240][241] Creation myths in many religions involve the creation of Earth by a supernatural deity or deities.[238]
186
+
187
+ The Hindu Vedas (1500–900 BC) refer to the Earth as Bhūgola (भूगोल), which comes from Bhū (earth, ground) and Gola (ball, sphere, globe). It means the "globe of earth". There is no direct evidence that the Hindus of that time knew that the Earth was sphere-shaped, but this name has been used extensively since the inception of the Vedas.[citation needed]
188
+
189
+ Scientific investigation has resulted in several culturally transformative shifts in people's view of the planet. Initial belief in a flat Earth was gradually displaced in the Greek colonies of southern Italy during the late 6th century BC by the idea of spherical Earth,[242][243][244] which was attributed to both the philosophers Pythagoras and Parmenides.[243][244] By the end of the 5th century BC, the sphericity of Earth was universally accepted among Greek intellectuals.[245] Earth was generally believed to be the center of the universe until the 16th century, when scientists first conclusively demonstrated that it was a moving object, comparable to the other planets in the Solar System.[246] Due to the efforts of influential Christian scholars and clerics such as James Ussher, who sought to determine the age of Earth through analysis of genealogies in Scripture, Westerners before the 19th century generally believed Earth to be a few thousand years old at most. It was only during the 19th century that geologists realized Earth's age was at least many millions of years.[247]
190
+
191
+ Lord Kelvin used thermodynamics to estimate the age of Earth to be between 20 million and 400 million years in 1864, sparking a vigorous debate on the subject; it was only when radioactivity and radioactive dating were discovered in the late 19th and early 20th centuries that a reliable mechanism for determining Earth's age was established, proving the planet to be billions of years old.[248][249] The perception of Earth shifted again[further explanation needed] in the 20th century when humans first viewed it from orbit, and especially with photographs of Earth returned by the Apollo program.[250][251][252]
192
+
193
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/4654.html.txt ADDED
@@ -0,0 +1,154 @@
1
+
2
+
3
+ Gases:
4
+
5
+ Ices:
6
+
7
+ Uranus is the seventh planet from the Sun. The name "Uranus" is a reference to the Greek god of the sky, Uranus. According to Greek mythology, Uranus was the grandfather of Zeus (Jupiter) and father of Cronus (Saturn). It has the third-largest planetary radius and fourth-largest planetary mass in the Solar System. Uranus is similar in composition to Neptune, and both have bulk chemical compositions which differ from that of the larger gas giants Jupiter and Saturn. For this reason, scientists often classify Uranus and Neptune as "ice giants" to distinguish them from the gas giants. Uranus' atmosphere is similar to Jupiter's and Saturn's in its primary composition of hydrogen and helium, but it contains more "ices" such as water, ammonia, and methane, along with traces of other hydrocarbons.[15] It has the coldest planetary atmosphere in the Solar System, with a minimum temperature of 49 K (−224 °C; −371 °F), and has a complex, layered cloud structure with water thought to make up the lowest clouds and methane the uppermost layer of clouds.[15] The interior of Uranus is mainly composed of ices and rock.[14]
8
+
9
+ Like the other giant planets, Uranus has a ring system, a magnetosphere, and numerous moons. The Uranian system has a unique configuration because its axis of rotation is tilted sideways, nearly into the plane of its solar orbit. Its north and south poles, therefore, lie where most other planets have their equators.[20] In 1986, images from Voyager 2 showed Uranus as an almost featureless planet in visible light, without the cloud bands or storms associated with the other giant planets.[20] Voyager 2 remains the only spacecraft to visit the planet.[21] Observations from Earth have shown seasonal change and increased weather activity as Uranus approached its equinox in 2007. Wind speeds can reach 250 metres per second (900 km/h; 560 mph).[22]
10
+
11
+ Like the classical planets, Uranus is visible to the naked eye, but it was never recognised as a planet by ancient observers because of its dimness and slow orbit.[23] Sir William Herschel first observed Uranus on 13 March 1781, leading to its discovery as a planet, expanding the known boundaries of the Solar System for the first time in history and making Uranus the first planet classified as such with the aid of a telescope.
12
+
13
+
14
+
15
+ Uranus had been observed on many occasions before its recognition as a planet, but it was generally mistaken for a star. Possibly the earliest known observation was by Hipparchos, who in 128 BC might have recorded it as a star for his star catalogue that was later incorporated into Ptolemy's Almagest.[24] The earliest definite sighting was in 1690, when John Flamsteed observed it at least six times, cataloguing it as 34 Tauri. The French astronomer Pierre Charles Le Monnier observed Uranus at least twelve times between 1750 and 1769,[25] including on four consecutive nights.
16
+
17
+ Sir William Herschel observed Uranus on 13 March 1781 from the garden of his house at 19 New King Street in Bath, Somerset, England (now the Herschel Museum of Astronomy),[26] and initially reported it (on 26 April 1781) as a comet.[27] With a telescope, Herschel "engaged in a series of observations on the parallax of the fixed stars."[28]
18
+
19
+ Herschel recorded in his journal: "In the quartile near ζ Tauri ... either [a] Nebulous star or perhaps a comet."[29] On 17 March he noted: "I looked for the Comet or Nebulous Star and found that it is a Comet, for it has changed its place."[30] When he presented his discovery to the Royal Society, he continued to assert that he had found a comet, but also implicitly compared it to a planet:[28]
20
+
21
+ The power I had on when I first saw the comet was 227. From experience I know that the diameters of the fixed stars are not proportionally magnified with higher powers, as planets are; therefore I now put the powers at 460 and 932, and found that the diameter of the comet increased in proportion to the power, as it ought to be, on the supposition of its not being a fixed star, while the diameters of the stars to which I compared it were not increased in the same ratio. Moreover, the comet being magnified much beyond what its light would admit of, appeared hazy and ill-defined with these great powers, while the stars preserved that lustre and distinctness which from many thousand observations I knew they would retain. The sequel has shown that my surmises were well-founded, this proving to be the Comet we have lately observed.[28]
22
+
23
+ Herschel notified the Astronomer Royal Nevil Maskelyne of his discovery and received this flummoxed reply from him on 23 April 1781: "I don't know what to call it. It is as likely to be a regular planet moving in an orbit nearly circular to the sun as a Comet moving in a very eccentric ellipsis. I have not yet seen any coma or tail to it."[31]
24
+
25
+ Although Herschel continued to describe his new object as a comet, other astronomers had already begun to suspect otherwise. Finnish-Swedish astronomer Anders Johan Lexell, working in Russia, was the first to compute the orbit of the new object.[32] Its nearly circular orbit led him to the conclusion that it was a planet rather than a comet. Berlin astronomer Johann Elert Bode described Herschel's discovery as "a moving star that can be deemed a hitherto unknown planet-like object circulating beyond the orbit of Saturn".[33] Bode concluded that its near-circular orbit was more like a planet's than a comet's.[34]
26
+
27
+ The object was soon universally accepted as a new planet. By 1783, Herschel acknowledged this to Royal Society president Joseph Banks: "By the observation of the most eminent Astronomers in Europe it appears that the new star, which I had the honour of pointing out to them in March 1781, is a Primary Planet of our Solar System."[35] In recognition of his achievement, King George III gave Herschel an annual stipend of £200 (equivalent to £24,000 in 2019) on condition that he move to Windsor so that the Royal Family could look through his telescopes.[36][37]
28
+
29
+ The name of Uranus references the ancient Greek deity of the sky Uranus (Ancient Greek: Οὐρανός), the father of Cronus (Saturn) and grandfather of Zeus (Jupiter), which in Latin became Ūranus (IPA: [ˈuːranʊs]).[1] It is the only planet whose English name is derived directly from a figure of Greek mythology. The adjectival form of Uranus is "Uranian".[38] The pronunciation of the name Uranus preferred among astronomers is /ˈjʊərənəs/,[2] with stress on the first syllable as in Latin Ūranus, in contrast to /jʊˈreɪnəs/, with stress on the second syllable and a long a, though both are considered acceptable.[f]
30
+
31
+ Consensus on the name was not reached until almost 70 years after the planet's discovery. During the original discussions following discovery, Maskelyne asked Herschel to "do the astronomical world the faver [sic] to give a name to your planet, which is entirely your own, [and] which we are so much obliged to you for the discovery of".[40] In response to Maskelyne's request, Herschel decided to name the object Georgium Sidus (George's Star), or the "Georgian Planet" in honour of his new patron, King George III.[41] He explained this decision in a letter to Joseph Banks:[35]
32
+
33
+ In the fabulous ages of ancient times the appellations of Mercury, Venus, Mars, Jupiter and Saturn were given to the Planets, as being the names of their principal heroes and divinities. In the present more philosophical era it would hardly be allowable to have recourse to the same method and call it Juno, Pallas, Apollo or Minerva, for a name to our new heavenly body. The first consideration of any particular event, or remarkable incident, seems to be its chronology: if in any future age it should be asked, when this last-found Planet was discovered? It would be a very satisfactory answer to say, 'In the reign of King George the Third'.
34
+
35
+ Herschel's proposed name was not popular outside Britain, and alternatives were soon proposed. Astronomer Jérôme Lalande proposed that it be named Herschel in honour of its discoverer.[42] Swedish astronomer Erik Prosperin proposed the name Neptune, which was supported by other astronomers who liked the idea of commemorating the victories of the British Royal Naval fleet in the course of the American Revolutionary War by calling the new planet Neptune George III or even Neptune Great Britain.[32]
36
+
37
+ In a March 1782 treatise, Bode proposed Uranus, the Latinised version of the Greek god of the sky, Ouranos.[43] Bode argued that the name should follow the mythology so as not to stand out as different from the other planets, and that Uranus was an appropriate name as the father of the first generation of the Titans.[43] He also noted the elegance of the name: just as Saturn was the father of Jupiter, the new planet should be named after the father of Saturn.[37][43][44][45] In 1789, Bode's Royal Academy colleague Martin Klaproth named his newly discovered element uranium in support of Bode's choice.[46] Ultimately, Bode's suggestion became the most widely used, and became universal in 1850 when HM Nautical Almanac Office, the final holdout, switched from using Georgium Sidus to Uranus.[44]
38
+
39
+ Uranus has two astronomical symbols. The first to be proposed, ♅,[g] was suggested by Lalande in 1784. In a letter to Herschel, Lalande described it as "un globe surmonté par la première lettre de votre nom" ("a globe surmounted by the first letter of your surname").[42] A later proposal, ⛢,[h] is a hybrid of the symbols for Mars and the Sun because Uranus was the Sky in Greek mythology, which was thought to be dominated by the combined powers of the Sun and Mars.[47]
40
+
41
+ Uranus is called by a variety of translations in other languages. In Chinese, Japanese, Korean, and Vietnamese, its name is literally translated as the "sky king star" (天王星).[48][49][50][51] In Thai, its official name is Dao Yurenat (ดาวยูเรนัส), as in English. Its other name in Thai is Dao Maritayu (ดาวมฤตยู, Star of Mṛtyu), after the Sanskrit word for 'death', Mrtyu (मृत्यु). In Mongolian, its name is Tengeriin Van (Тэнгэрийн ван), translated as 'King of the Sky', reflecting its namesake god's role as the ruler of the heavens. In Hawaiian, its name is Heleʻekala, a loanword for the discoverer Herschel.[52] In Māori, its name is Whērangi.[53][54]
42
+
43
+ Uranus orbits the Sun once every 84 years, taking an average of seven years to pass through each constellation of the zodiac. In 2033, the planet will have made its third complete orbit around the Sun since being discovered in 1781. The planet has returned to the point of its discovery northeast of Zeta Tauri twice since then, in 1862 and 1943, one day later each time as the precession of the equinoxes has shifted it 1° west every 72 years. Uranus will return to this location again in 2030-31. Its average distance from the Sun is roughly 20 AU (3 billion km; 2 billion mi). The difference between its minimum and maximum distance from the Sun is 1.8 AU, larger than that of any other planet, though not as large as that of dwarf planet Pluto.[55] The intensity of sunlight varies inversely with the square of distance, and so on Uranus (at about 20 times the distance from the Sun compared to Earth) it is about 1/400 the intensity of light on Earth.[56] Its orbital elements were first calculated in 1783 by Pierre-Simon Laplace.[57] With time, discrepancies began to appear between the predicted and observed orbits, and in 1841, John Couch Adams first proposed that the differences might be due to the gravitational tug of an unseen planet. In 1845, Urbain Le Verrier began his own independent research into Uranus' orbit. On 23 September 1846, Johann Gottfried Galle located a new planet, later named Neptune, at nearly the position predicted by Le Verrier.[58]
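The inverse-square relation quoted above can be checked with a one-line calculation. The following minimal Python sketch is illustrative only; the 20 AU distance is the approximate figure given in the text.

    # Sunlight intensity relative to Earth's, assuming intensity falls off as 1/d^2.
    def relative_irradiance(distance_au):
        return 1.0 / distance_au ** 2

    print(relative_irradiance(20.0))  # -> 0.0025, i.e. about 1/400 of the intensity at Earth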
44
+
45
+ The rotational period of the interior of Uranus is 17 hours, 14 minutes. As on all the giant planets, its upper atmosphere experiences strong winds in the direction of rotation. At some latitudes, such as about 60 degrees south, visible features of the atmosphere move much faster, making a full rotation in as little as 14 hours.[59]
46
+
47
+ The Uranian axis of rotation is approximately parallel with the plane of the Solar System, with an axial tilt of 97.77° (as defined by prograde rotation). This gives it seasonal changes completely unlike those of the other planets. Near the solstice, one pole faces the Sun continuously and the other faces away. Only a narrow strip around the equator experiences a rapid day–night cycle, but with the Sun low over the horizon. At the other side of Uranus' orbit the orientation of the poles towards the Sun is reversed. Each pole gets around 42 years of continuous sunlight, followed by 42 years of darkness.[60] Near the time of the equinoxes, the Sun faces the equator of Uranus giving a period of day–night cycles similar to those seen on most of the other planets.
48
+
49
+ Uranus reached its most recent equinox on 7 December 2007.[61][62]
50
+
51
+ One result of this axis orientation is that, averaged over the Uranian year, the polar regions of Uranus receive a greater energy input from the Sun than its equatorial regions. Nevertheless, Uranus is hotter at its equator than at its poles. The underlying mechanism that causes this is unknown. The reason for Uranus' unusual axial tilt is also not known with certainty, but the usual speculation is that during the formation of the Solar System, an Earth-sized protoplanet collided with Uranus, causing the skewed orientation.[63] Research by Jacob Kegerreis of Durham University suggests that the tilt resulted from a rock larger than the Earth crashing into the planet 3 to 4 billion years ago.[64]
52
+ Uranus' south pole was pointed almost directly at the Sun at the time of Voyager 2's flyby in 1986. The labelling of this pole as "south" uses the definition currently endorsed by the International Astronomical Union, namely that the north pole of a planet or satellite is the pole that points above the invariable plane of the Solar System, regardless of the direction the planet is spinning.[65][66] A different convention is sometimes used, in which a body's north and south poles are defined according to the right-hand rule in relation to the direction of rotation.[67]
53
+
54
+ The mean apparent magnitude of Uranus is 5.68 with a standard deviation of 0.17, while the extremes are +5.38 and +6.03.[16] This range of brightness is near the limit of naked eye visibility. Much of the variability depends upon which planetary latitudes are illuminated by the Sun and viewed from the Earth.[68] Its angular diameter is between 3.4 and 3.7 arcseconds, compared with 16 to 20 arcseconds for Saturn and 32 to 45 arcseconds for Jupiter.[69] At opposition, Uranus is visible to the naked eye in dark skies, and becomes an easy target even in urban conditions with binoculars.[6] In larger amateur telescopes with an objective diameter of between 15 and 23 cm, Uranus appears as a pale cyan disk with distinct limb darkening. With a large telescope of 25 cm or wider, cloud patterns, as well as some of the larger satellites, such as Titania and Oberon, may be visible.[70]
55
+
56
+ Uranus' mass is roughly 14.5 times that of Earth, making it the least massive of the giant planets. Its diameter is slightly larger than Neptune's at roughly four times that of Earth. A resulting density of 1.27 g/cm3 makes Uranus the second least dense planet, after Saturn.[9][10] This value indicates that it is made primarily of various ices, such as water, ammonia, and methane.[14] The total mass of ice in Uranus' interior is not precisely known, because different figures emerge depending on the model chosen; it must be between 9.3 and 13.5 Earth masses.[14][71] Hydrogen and helium constitute only a small part of the total, with between 0.5 and 1.5 Earth masses.[14] The remainder of the non-ice mass (0.5 to 3.7 Earth masses) is accounted for by rocky material.[14]
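As a rough consistency check, the quoted bulk density can be recomputed from the planet's mass and mean radius. The sketch below assumes approximate values (an Earth mass of 5.972 × 10^24 kg and a Uranian mean radius of about 25,362 km) and is illustrative rather than authoritative.

    import math

    EARTH_MASS_KG = 5.972e24                 # assumed value for Earth's mass
    URANUS_MASS_KG = 14.5 * EARTH_MASS_KG    # ~14.5 Earth masses, as stated above
    URANUS_MEAN_RADIUS_M = 2.5362e7          # ~25,362 km mean radius (assumed)

    volume_m3 = (4.0 / 3.0) * math.pi * URANUS_MEAN_RADIUS_M ** 3
    density_g_cm3 = URANUS_MASS_KG / volume_m3 / 1000.0  # convert kg/m^3 to g/cm^3
    print(round(density_g_cm3, 2))           # -> about 1.27, matching the quoted density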
57
+
58
+ The standard model of Uranus' structure is that it consists of three layers: a rocky (silicate/iron–nickel) core in the centre, an icy mantle in the middle and an outer gaseous hydrogen/helium envelope.[14][72] The core is relatively small, with a mass of only 0.55 Earth masses and a radius less than 20% of Uranus'; the mantle comprises its bulk, with around 13.4 Earth masses, and the upper atmosphere is relatively insubstantial, weighing about 0.5 Earth masses and extending for the last 20% of Uranus' radius.[14][72] Uranus' core density is around 9 g/cm3, with a pressure in the centre of 8 million bars (800 GPa) and a temperature of about 5000 K.[71][72] The ice mantle is not in fact composed of ice in the conventional sense, but of a hot and dense fluid consisting of water, ammonia and other volatiles.[14][72] This fluid, which has a high electrical conductivity, is sometimes called a water–ammonia ocean.[73]
59
+
60
+ The extreme pressure and temperature deep within Uranus may break up the methane molecules, with the carbon atoms condensing into crystals of diamond that rain down through the mantle like hailstones.[74][75][76] Very-high-pressure experiments at the Lawrence Livermore National Laboratory suggest that the base of the mantle may comprise an ocean of liquid diamond, with floating solid 'diamond-bergs'.[77][78] Scientists also believe that rainfalls of solid diamonds occur on Uranus, as well as on Jupiter, Saturn, and Neptune.[79][80]
61
+
62
+ The bulk compositions of Uranus and Neptune are different from those of Jupiter and Saturn, with ice dominating over gases, hence justifying their separate classification as ice giants. There may be a layer of ionic water where the water molecules break down into a soup of hydrogen and oxygen ions, and deeper down superionic water in which the oxygen crystallises but the hydrogen ions move freely within the oxygen lattice.[81]
63
+
64
+ Although the model considered above is reasonably standard, it is not unique; other models also satisfy observations. For instance, if substantial amounts of hydrogen and rocky material are mixed in the ice mantle, the total mass of ices in the interior will be lower, and, correspondingly, the total mass of rocks and hydrogen will be higher. Presently available data does not allow a scientific determination of which model is correct.[71] The fluid interior structure of Uranus means that it has no solid surface. The gaseous atmosphere gradually transitions into the internal liquid layers.[14] For the sake of convenience, a revolving oblate spheroid set at the point at which atmospheric pressure equals 1 bar (100 kPa) is conditionally designated as a "surface". It has equatorial and polar radii of 25,559 ± 4 km (15,881.6 ± 2.5 mi) and 24,973 ± 20 km (15,518 ± 12 mi), respectively.[9] This surface is used throughout this article as a zero point for altitudes.
65
+
66
+ Uranus' internal heat appears markedly lower than that of the other giant planets; in astronomical terms, it has a low thermal flux.[22][82] Why Uranus' internal temperature is so low is still not understood. Neptune, which is Uranus' near twin in size and composition, radiates 2.61 times as much energy into space as it receives from the Sun,[22] but Uranus radiates hardly any excess heat at all. The total power radiated by Uranus in the far infrared (i.e. heat) part of the spectrum is 1.06±0.08 times the solar energy absorbed in its atmosphere.[15][83] Uranus' heat flux is only 0.042±0.047 W/m2, which is lower than the internal heat flux of Earth of about 0.075 W/m2.[83] The lowest temperature recorded in Uranus' tropopause is 49 K (−224.2 °C; −371.5 °F), making Uranus the coldest planet in the Solar System.[15][83]
67
+
68
+ One of the hypotheses for this discrepancy suggests that when Uranus was hit by a supermassive impactor, which caused it to expel most of its primordial heat, it was left with a depleted core temperature.[84] This impact hypothesis is also used in some attempts to explain the planet's axial tilt. Another hypothesis is that some form of barrier exists in Uranus' upper layers that prevents the core's heat from reaching the surface.[14] For example, convection may take place in a set of compositionally different layers, which may inhibit the upward heat transport;[15][83] perhaps double diffusive convection is a limiting factor.[14]
69
+
70
+ Although there is no well-defined solid surface within Uranus' interior, the outermost part of Uranus' gaseous envelope that is accessible to remote sensing is called its atmosphere.[15] Remote-sensing capability extends down to roughly 300 km below the 1 bar (100 kPa) level, with a corresponding pressure around 100 bar (10 MPa) and temperature of 320 K (47 °C; 116 °F).[86] The tenuous thermosphere extends over two planetary radii from the nominal surface, which is defined to lie at a pressure of 1 bar.[87] The Uranian atmosphere can be divided into three layers: the troposphere, between altitudes of −300 and 50 km (−186 and 31 mi) and pressures from 100 to 0.1 bar (10 MPa to 10 kPa); the stratosphere, spanning altitudes between 50 and 4,000 km (31 and 2,485 mi) and pressures of between 0.1 and 10−10 bar (10 kPa to 10 µPa); and the thermosphere extending from 4,000 km to as high as 50,000 km from the surface.[15] There is no mesosphere.
71
+
72
+ The composition of Uranus' atmosphere is different from its bulk, consisting mainly of molecular hydrogen and helium.[15] The helium molar fraction, i.e. the number of helium atoms per molecule of gas, is 0.15±0.03[19] in the upper troposphere, which corresponds to a mass fraction 0.26±0.05.[15][83] This value is close to the protosolar helium mass fraction of 0.275±0.01,[88] indicating that helium has not settled in its centre as it has in the gas giants.[15] The third-most-abundant component of Uranus' atmosphere is methane (CH4).[15] Methane has prominent absorption bands in the visible and near-infrared (IR), making Uranus aquamarine or cyan in colour.[15] Methane molecules account for 2.3% of the atmosphere by molar fraction below the methane cloud deck at the pressure level of 1.3 bar (130 kPa); this represents about 20 to 30 times the carbon abundance found in the Sun.[15][18][89] The mixing ratio[i] is much lower in the upper atmosphere due to its extremely low temperature, which lowers the saturation level and causes excess methane to freeze out.[90] The abundances of less volatile compounds such as ammonia, water, and hydrogen sulfide in the deep atmosphere are poorly known. They are probably also higher than solar values.[15][91] Along with methane, trace amounts of various hydrocarbons are found in the stratosphere of Uranus, which are thought to be produced from methane by photolysis induced by the solar ultraviolet (UV) radiation.[92] They include ethane (C2H6), acetylene (C2H2), methylacetylene (CH3C2H), and diacetylene (C2HC2H).[90][93][94] Spectroscopy has also uncovered traces of water vapour, carbon monoxide and carbon dioxide in the upper atmosphere, which can only originate from an external source such as infalling dust and comets.[93][94][95]
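The relation between the quoted molar and mass fractions of helium can be illustrated with a simple conversion. The sketch below treats the gas as a two-component H2/He mixture and ignores methane and heavier constituents, so it is only an approximation.

    M_H2, M_HE = 2.016, 4.003   # molar masses in g/mol

    def helium_mass_fraction(x_he):
        """Mass fraction of helium for a given molar fraction, with H2 making up the rest."""
        x_h2 = 1.0 - x_he
        return x_he * M_HE / (x_he * M_HE + x_h2 * M_H2)

    print(round(helium_mass_fraction(0.15), 2))  # -> 0.26, consistent with the value above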
73
+
74
+ The troposphere is the lowest and densest part of the atmosphere and is characterised by a decrease in temperature with altitude.[15] The temperature falls from about 320 K (47 °C; 116 °F) at the base of the nominal troposphere at −300 km to 53 K (−220 °C; −364 °F) at 50 km.[86][89] The temperatures in the coldest upper region of the troposphere (the tropopause) actually vary in the range between 49 and 57 K (−224 and −216 °C; −371 and −357 °F) depending on planetary latitude.[15][82] The tropopause region is responsible for the vast majority of Uranus' thermal far infrared emissions, thus determining its effective temperature of 59.1 ± 0.3 K (−214.1 ± 0.3 °C; −353.3 ± 0.5 °F).[82][83]
75
+
76
+ The troposphere is thought to have a highly complex cloud structure; water clouds are hypothesised to lie in the pressure range of 50 to 100 bar (5 to 10 MPa), ammonium hydrosulfide clouds in the range of 20 to 40 bar (2 to 4 MPa), ammonia or hydrogen sulfide clouds at between 3 and 10 bar (0.3 and 1 MPa) and finally directly detected thin methane clouds at 1 to 2 bar (0.1 to 0.2 MPa).[15][18][86][96] The troposphere is a dynamic part of the atmosphere, exhibiting strong winds, bright clouds and seasonal changes.[22]
77
+
78
+ The middle layer of the Uranian atmosphere is the stratosphere, where temperature generally increases with altitude from 53 K (−220 °C; −364 °F) in the tropopause to between 800 and 850 K (527 and 577 °C; 980 and 1,070 °F) at the base of the thermosphere.[87] The heating of the stratosphere is caused by absorption of solar UV and IR radiation by methane and other hydrocarbons,[98] which form in this part of the atmosphere as a result of methane photolysis.[92] Heat is also conducted from the hot thermosphere.[98] The hydrocarbons occupy a relatively narrow layer at altitudes of between 100 and 300 km, corresponding to a pressure range of 1000 to 10 Pa and temperatures of between 75 and 170 K (−198 and −103 °C; −325 and −154 °F).[90][93] The most abundant hydrocarbons are methane, acetylene and ethane, with mixing ratios of around 10−7 relative to hydrogen. The mixing ratio of carbon monoxide is similar at these altitudes.[90][93][95] Heavier hydrocarbons and carbon dioxide have mixing ratios three orders of magnitude lower.[93] The abundance ratio of water is around 7×10−9.[94] Ethane and acetylene tend to condense in the colder lower part of the stratosphere and tropopause (below the 10 mbar level), forming haze layers,[92] which may be partly responsible for the bland appearance of Uranus. The concentration of hydrocarbons in the Uranian stratosphere above the haze is significantly lower than in the stratospheres of the other giant planets.[90][99]
79
+
80
+ The outermost layer of the Uranian atmosphere is the thermosphere and corona, which has a uniform temperature of around 800 to 850 K.[15][99] The heat sources necessary to sustain such high temperatures are not understood, as neither the solar UV nor the auroral activity can provide the necessary energy to maintain these temperatures. The weak cooling efficiency due to the lack of hydrocarbons in the stratosphere above the 0.1 mbar pressure level may also contribute.[87][99] In addition to molecular hydrogen, the thermosphere-corona contains many free hydrogen atoms. Their small mass and high temperatures explain why the corona extends as far as 50,000 km (31,000 mi), or two Uranian radii, from its surface.[87][99] This extended corona is a unique feature of Uranus.[99] Its effects include a drag on small particles orbiting Uranus, causing a general depletion of dust in the Uranian rings.[87] The Uranian thermosphere, together with the upper part of the stratosphere, corresponds to the ionosphere of Uranus.[89] Observations show that the ionosphere occupies altitudes from 2,000 to 10,000 km (1,200 to 6,200 mi).[89] The Uranian ionosphere is denser than that of either Saturn or Neptune, which may arise from the low concentration of hydrocarbons in the stratosphere.[99][100] The ionosphere is mainly sustained by solar UV radiation and its density depends on the solar activity.[101] Auroral activity is insignificant compared to that of Jupiter and Saturn.[99][102]
81
+
82
+ Temperature profile of the Uranian troposphere and lower stratosphere. Cloud and haze layers are also indicated.
83
+
84
+ Zonal wind speeds on Uranus. Shaded areas show the southern collar and its future northern counterpart. The red curve is a symmetrical fit to the data.
85
+
86
+ Before the arrival of Voyager 2, no measurements of the Uranian magnetosphere had been taken, so its nature remained a mystery. Before 1986, scientists had expected the magnetic field of Uranus to be in line with the solar wind, because it would then align with Uranus' poles that lie in the ecliptic.[103]
87
+
88
+ Voyager's observations revealed that Uranus' magnetic field is peculiar, both because it does not originate from its geometric centre, and because it is tilted at 59° from the axis of rotation.[103][104] In fact the magnetic dipole is shifted from Uranus' centre towards the south rotational pole by as much as one third of the planetary radius.[103] This unusual geometry results in a highly asymmetric magnetosphere, where the magnetic field strength on the surface in the southern hemisphere can be as low as 0.1 gauss (10 µT), whereas in the northern hemisphere it can be as high as 1.1 gauss (110 µT).[103] The average field at the surface is 0.23 gauss (23 µT).[103] Studies of Voyager 2 data in 2017 suggest that this asymmetry causes Uranus' magnetosphere to connect with the solar wind once a Uranian day, opening the planet to the Sun's particles.[105] In comparison, the magnetic field of Earth is roughly as strong at either pole, and its "magnetic equator" is roughly parallel with its geographical equator.[104] The dipole moment of Uranus is 50 times that of Earth.[103][104] Neptune has a similarly displaced and tilted magnetic field, suggesting that this may be a common feature of ice giants.[104] One hypothesis is that, unlike the magnetic fields of the terrestrial and gas giants, which are generated within their cores, the ice giants' magnetic fields are generated by motion at relatively shallow depths, for instance, in the water–ammonia ocean.[73][106] Another possible explanation for the magnetosphere's alignment is that there are oceans of liquid diamond in Uranus' interior that would deter the magnetic field.[77]
89
+
90
+ Despite its curious alignment, in other respects the Uranian magnetosphere is like those of other planets: it has a bow shock at about 23 Uranian radii ahead of it, a magnetopause at 18 Uranian radii, a fully developed magnetotail, and radiation belts.[103][104][107] Overall, the structure of Uranus' magnetosphere is different from Jupiter's and more similar to Saturn's.[103][104] Uranus' magnetotail trails behind it into space for millions of kilometres and is twisted by its sideways rotation into a long corkscrew.[103][108]
91
+
92
+ Uranus' magnetosphere contains charged particles: mainly protons and electrons, with a small amount of H2+ ions.[104][107] Many of these particles probably derive from the thermosphere.[107] The ion and electron energies can be as high as 4 and 1.2 megaelectronvolts, respectively.[107] The density of low-energy (below 1 kiloelectronvolt) ions in the inner magnetosphere is about 2 cm−3.[109] The particle population is strongly affected by the Uranian moons, which sweep through the magnetosphere, leaving noticeable gaps.[107] The particle flux is high enough to cause darkening or space weathering of their surfaces on an astronomically rapid timescale of 100,000 years.[107] This may be the cause of the uniformly dark colouration of the Uranian satellites and rings.[110] Uranus has relatively well developed aurorae, which are seen as bright arcs around both magnetic poles.[99] Unlike Jupiter's, Uranus' aurorae seem to be insignificant for the energy balance of the planetary thermosphere.[102]
93
+
94
+ In March 2020, NASA astronomers reported the detection of a large atmospheric magnetic bubble, also known as a plasmoid, released into outer space from the planet Uranus, after reevaluating old data recorded by the Voyager 2 space probe during a flyby of the planet in 1986.[111][112]
95
+
96
+ At ultraviolet and visible wavelengths, Uranus' atmosphere is bland in comparison to the other giant planets, even to Neptune, which it otherwise closely resembles.[22] When Voyager 2 flew by Uranus in 1986, it observed a total of ten cloud features across the entire planet.[20][113] One proposed explanation for this dearth of features is that Uranus' internal heat appears markedly lower than that of the other giant planets. The lowest temperature recorded in Uranus' tropopause is 49 K (−224 °C; −371 °F), making Uranus the coldest planet in the Solar System.[15][83]
97
+
98
+ In 1986, Voyager 2 found that the visible southern hemisphere of Uranus can be subdivided into two regions: a bright polar cap and dark equatorial bands.[20] Their boundary is located at about −45° of latitude. A narrow band straddling the latitudinal range from −45 to −50° is the brightest large feature on its visible surface.[20][114] It is called a southern "collar". The cap and collar are thought to be a dense region of methane clouds located within the pressure range of 1.3 to 2 bar (see above).[115] Besides the large-scale banded structure, Voyager 2 observed ten small bright clouds, most lying several degrees to the north from the collar.[20] In all other respects Uranus looked like a dynamically dead planet in 1986. Voyager 2 arrived during the height of Uranus' southern summer and could not observe the northern hemisphere. At the beginning of the 21st century, when the northern polar region came into view, the Hubble Space Telescope (HST) and Keck telescope initially observed neither a collar nor a polar cap in the northern hemisphere.[114] So Uranus appeared to be asymmetric: bright near the south pole and uniformly dark in the region north of the southern collar.[114] In 2007, when Uranus passed its equinox, the southern collar almost disappeared, and a faint northern collar emerged near 45° of latitude.[116]
99
+
100
+ In the 1990s, the number of observed bright cloud features grew considerably, partly because new high-resolution imaging techniques became available.[22] Most were found in the northern hemisphere as it started to become visible.[22] An early explanation – that bright clouds are easier to identify in its dark part, whereas in the southern hemisphere the bright collar masks them – was shown to be incorrect.[117][118] Nevertheless, there are differences between the clouds of each hemisphere. The northern clouds are smaller, sharper and brighter.[118] They appear to lie at a higher altitude.[118] The lifetime of clouds spans several orders of magnitude. Some small clouds live for hours; at least one southern cloud may have persisted since the Voyager 2 flyby.[22][113] Recent observations have also found that cloud features on Uranus have a lot in common with those on Neptune.[22] For example, the dark spots common on Neptune had never been observed on Uranus before 2006, when the first such feature, dubbed Uranus Dark Spot, was imaged.[119] The speculation is that Uranus is becoming more Neptune-like during its equinoctial season.[120]
101
+
102
+ The tracking of numerous cloud features allowed determination of zonal winds blowing in the upper troposphere of Uranus.[22] At the equator winds are retrograde, which means that they blow in the reverse direction to the planetary rotation. Their speeds are from −360 to −180 km/h (−220 to −110 mph).[22][114] Wind speeds increase with the distance from the equator, reaching zero values near ±20° latitude, where the troposphere's temperature minimum is located.[22][82] Closer to the poles, the winds shift to a prograde direction, flowing with Uranus' rotation. Wind speeds continue to increase reaching maxima at ±60° latitude before falling to zero at the poles.[22] Wind speeds at −40° latitude range from 540 to 720 km/h (340 to 450 mph). Because the collar obscures all clouds below that parallel, speeds between it and the southern pole are impossible to measure.[22] In contrast, in the northern hemisphere maximum speeds as high as 860 km/h (540 mph) are observed near +50° latitude.[22][114][121]
103
+
104
+ For a short period from March to May 2004, large clouds appeared in the Uranian atmosphere, giving it a Neptune-like appearance.[118][122] Observations included record-breaking wind speeds of 820 km/h (510 mph) and a persistent thunderstorm referred to as "Fourth of July fireworks".[113] On 23 August 2006, researchers at the Space Science Institute (Boulder, Colorado) and the University of Wisconsin observed a dark spot on Uranus' surface, giving scientists more insight into Uranus' atmospheric activity.[119] Why this sudden upsurge in activity occurred is not fully known, but it appears that Uranus' extreme axial tilt results in extreme seasonal variations in its weather.[62][120] Determining the nature of this seasonal variation is difficult because good data on Uranus' atmosphere have existed for less than 84 years, or one full Uranian year. Photometry over the course of half a Uranian year (beginning in the 1950s) has shown regular variation in brightness in two spectral bands, with maxima occurring at the solstices and minima occurring at the equinoxes.[123] A similar periodic variation, with maxima at the solstices, has been noted in microwave measurements of the deep troposphere begun in the 1960s.[124] Stratospheric temperature measurements beginning in the 1970s also showed maximum values near the 1986 solstice.[98] The majority of this variability is thought to occur owing to changes in the viewing geometry.[117]
105
+
106
+ There are some indications that physical seasonal changes are happening in Uranus. Although Uranus is known to have a bright south polar region, the north pole is fairly dim, which is incompatible with the model of the seasonal change outlined above.[120] During its previous northern solstice in 1944, Uranus displayed elevated levels of brightness, which suggests that the north pole was not always so dim.[123] This information implies that the visible pole brightens some time before the solstice and darkens after the equinox.[120] Detailed analysis of the visible and microwave data revealed that the periodical changes of brightness are not completely symmetrical around the solstices, which also indicates a change in the meridional albedo patterns.[120] In the 1990s, as Uranus moved away from its solstice, Hubble and ground-based telescopes revealed that the south polar cap darkened noticeably (except the southern collar, which remained bright),[115] whereas the northern hemisphere demonstrated increasing activity,[113] such as cloud formations and stronger winds, bolstering expectations that it should brighten soon.[118] This indeed happened in 2007 when it passed an equinox: a faint northern polar collar arose, and the southern collar became nearly invisible, although the zonal wind profile remained slightly asymmetric, with northern winds being somewhat slower than southern.[116]
107
+
108
+ The mechanism of these physical changes is still not clear.[120] Near the summer and winter solstices, Uranus' hemispheres lie alternately either in full glare of the Sun's rays or facing deep space. The brightening of the sunlit hemisphere is thought to result from the local thickening of the methane clouds and haze layers located in the troposphere.[115] The bright collar at −45° latitude is also connected with methane clouds.[115] Other changes in the southern polar region can be explained by changes in the lower cloud layers.[115] The variation of the microwave emission from Uranus is probably caused by changes in the deep tropospheric circulation, because thick polar clouds and haze may inhibit convection.[125] Now that the spring and autumn equinoxes are arriving on Uranus, the dynamics are changing and convection can occur again.[113][125]
109
+
110
+ Many argue that the differences between the ice giants and the gas giants extend to their formation.[126][127] The Solar System is hypothesised to have formed from a giant rotating ball of gas and dust known as the presolar nebula. Much of the nebula's gas, primarily hydrogen and helium, formed the Sun, and the dust grains collected together to form the first protoplanets. As the planets grew, some of them eventually accreted enough matter for their gravity to hold on to the nebula's leftover gas.[126][127] The more gas they held onto, the larger they became; the larger they became, the more gas they held onto until a critical point was reached, and their size began to increase exponentially. The ice giants, with only a few Earth masses of nebular gas, never reached that critical point.[126][127][128] Recent simulations of planetary migration have suggested that both ice giants formed closer to the Sun than their present positions, and moved outwards after formation (the Nice model).[126]
111
+
112
+ Uranus has 27 known natural satellites.[128] The names of these satellites are chosen from characters in the works of Shakespeare and Alexander Pope.[72][129] The five main satellites are Miranda, Ariel, Umbriel, Titania, and Oberon.[72] The Uranian satellite system is the least massive among those of the giant planets; the combined mass of the five major satellites would be less than half that of Triton (the largest moon of Neptune) alone.[10] The largest of Uranus' satellites, Titania, has a radius of only 788.9 km (490.2 mi), or less than half that of the Moon, but slightly more than that of Rhea, the second-largest satellite of Saturn, making Titania the eighth-largest moon in the Solar System. Uranus' satellites have relatively low albedos, ranging from 0.20 for Umbriel to 0.35 for Ariel (in green light).[20] They are ice–rock conglomerates composed of roughly 50% ice and 50% rock. The ice may include ammonia and carbon dioxide.[110][130]
113
+
114
+ Among the Uranian satellites, Ariel appears to have the youngest surface with the fewest impact craters and Umbriel's the oldest.[20][110] Miranda has fault canyons 20 km (12 mi) deep, terraced layers, and a chaotic variation in surface ages and features.[20] Miranda's past geologic activity is thought to have been driven by tidal heating at a time when its orbit was more eccentric than currently, probably as a result of a former 3:1 orbital resonance with Umbriel.[131] Extensional processes associated with upwelling diapirs are the likely origin of Miranda's 'racetrack'-like coronae.[132][133] Ariel is thought to have once been held in a 4:1 resonance with Titania.[134]
115
+
116
+ Uranus has at least one horseshoe orbiter, 83982 Crantor, which occupies the Sun–Uranus L3 Lagrangian point, a gravitationally unstable region at 180° in its orbit.[135][136] Crantor moves inside Uranus' co-orbital region on a complex, temporary horseshoe orbit.
117
+ 2010 EU65 is also a promising Uranus horseshoe librator candidate.[136]
118
+
119
+ The Uranian rings are composed of extremely dark particles, which vary in size from micrometres to a fraction of a metre.[20] Thirteen distinct rings are presently known, the brightest being the ε ring. All except two rings of Uranus are extremely narrow – they are usually a few kilometres wide. The rings are probably quite young; dynamical considerations indicate that they did not form with Uranus. The matter in the rings may once have been part of a moon (or moons) that was shattered by high-speed impacts. Of the numerous pieces of debris that formed as a result of those impacts, only a few particles survived, in stable zones corresponding to the locations of the present rings.[110][137]
120
+
121
+ William Herschel described a possible ring around Uranus in 1789. This sighting is generally considered doubtful, because the rings are quite faint, and in the two following centuries none were noted by other observers. Still, Herschel made an accurate description of the epsilon ring's size, its angle relative to Earth, its red colour, and its apparent changes as Uranus travelled around the Sun.[138][139] The ring system was definitively discovered on 10 March 1977 by James L. Elliot, Edward W. Dunham, and Jessica Mink using the Kuiper Airborne Observatory. The discovery was serendipitous; they planned to use the occultation of the star SAO 158687 (also known as HD 128598) by Uranus to study its atmosphere. When their observations were analysed, they found that the star had disappeared briefly from view five times both before and after it disappeared behind Uranus. They concluded that there must be a ring system around Uranus.[140] Later they detected four additional rings.[140] The rings were directly imaged when Voyager 2 passed Uranus in 1986.[20] Voyager 2 also discovered two additional faint rings, bringing the total number to eleven.[20]
122
+
123
+ In December 2005, the Hubble Space Telescope detected a pair of previously unknown rings. The largest is located twice as far from Uranus as the previously known rings. These new rings are so far from Uranus that they are called the "outer" ring system. Hubble also spotted two small satellites, one of which, Mab, shares its orbit with the outermost newly discovered ring. The new rings bring the total number of Uranian rings to 13.[141] In April 2006, images of the new rings from the Keck Observatory yielded the colours of the outer rings: the outermost is blue and the other one red.[142][143]
124
+ One hypothesis concerning the outer ring's blue colour is that it is composed of minute particles of water ice from the surface of Mab that are small enough to scatter blue light.[142][144] In contrast, Uranus' inner rings appear grey.[142]
125
+
126
+ Animation of the 1977 stellar occultation that led to the discovery of the rings.
127
+
128
+ Uranus has a complicated planetary ring system, which was the second such system to be discovered in the Solar System after Saturn's.[137]
129
+
130
+ Uranus' aurorae against its equatorial rings, imaged by the Hubble telescope. Unlike the aurorae of Earth and Jupiter, those of Uranus are not in line with its poles, due to its lopsided magnetic field.
131
+
132
+ In 1986, NASA's Voyager 2 interplanetary probe encountered Uranus. This flyby remains the only investigation of Uranus carried out from a short distance and no other visits are planned. Launched in 1977, Voyager 2 made its closest approach to Uranus on 24 January 1986, coming within 81,500 km (50,600 mi) of the cloudtops, before continuing its journey to Neptune. The spacecraft studied the structure and chemical composition of Uranus' atmosphere,[89] including its unique weather, caused by its axial tilt of 97.77°. It made the first detailed investigations of its five largest moons and discovered 10 new ones. It examined all nine of the system's known rings and discovered two more.[20][110][145] It also studied the magnetic field, its irregular structure, its tilt and its unique corkscrew magnetotail caused by Uranus' sideways orientation.[103]
133
+
134
+ Voyager 1 was unable to visit Uranus because investigation of Saturn's moon Titan was considered a priority. The trajectory required for the Titan flyby took Voyager 1 out of the plane of the ecliptic, ending its planetary science mission.[146]:118
135
+
136
+ The possibility of sending the Cassini spacecraft from Saturn to Uranus was evaluated during a mission extension planning phase in 2009, but was ultimately rejected in favour of destroying it in the Saturnian atmosphere.[147] It would have taken about twenty years to get to the Uranian system after departing Saturn.[147] A Uranus orbiter and probe was recommended by the 2013–2022 Planetary Science Decadal Survey published in 2011; the proposal envisages launch during 2020–2023 and a 13-year cruise to Uranus.[148] A Uranus entry probe could use Pioneer Venus Multiprobe heritage and descend to 1–5 atmospheres.[148] The ESA evaluated a "medium-class" mission called Uranus Pathfinder.[149] A New Frontiers Uranus Orbiter has been evaluated and recommended in the study, The Case for a Uranus Orbiter.[150] Such a mission is aided by the ease with which a relatively big mass can be sent to the system—over 1500 kg with an Atlas 521 and 12-year journey.[151] For more concepts see Proposed Uranus missions.
137
+
138
+ In astrology, the planet Uranus is the ruling planet of Aquarius. Because Uranus is cyan and is associated with electricity, the colour electric blue, which is close to cyan, is associated with the sign Aquarius[152] (see Uranus in astrology).
139
+
140
+ The chemical element uranium, discovered in 1789 by the German chemist Martin Heinrich Klaproth, was named after the newly discovered planet Uranus.[153]
141
+
142
+ "Uranus, the Magician" is a movement in Gustav Holst's orchestral suite The Planets, written between 1914 and 1916.
143
+
144
+ Operation Uranus was the successful military operation in World War II by the Red Army to take back Stalingrad and marked the turning point in the land war against the Wehrmacht.
145
+
146
+ The lines "Then felt I like some watcher of the skies/When a new planet swims into his ken", from John Keats's "On First Looking into Chapman's Homer", are a reference to Herschel's discovery of Uranus.[154]
147
+
148
+ Many references to Uranus in English language popular culture and news involve humour about one pronunciation of its name resembling that of the phrase "your anus".[155]
149
+
150
+ Bereits in der am 12ten März 1782 bei der hiesigen naturforschenden Gesellschaft vorgelesenen Abhandlung, habe ich den Namen des Vaters vom Saturn, nemlich Uranos, oder wie er mit der lateinischen Endung gewöhnlicher ist, Uranus vorgeschlagen, und habe seit dem das Vergnügen gehabt, daß verschiedene Astronomen und Mathematiker in ihren Schriften oder in Briefen an mich, diese Benennung aufgenommen oder gebilligt. Meines Erachtens muß man bei dieser Wahl die Mythologie befolgen, aus welcher die uralten Namen der übrigen Planeten entlehnen worden; denn in der Reihe der bisher bekannten, würde der von einer merkwürdigen Person oder Begebenheit der neuern Zeit wahrgenommene Name eines Planeten sehr auffallen. Diodor von Cicilien erzahlt die Geschichte der Atlanten, eines uralten Volks, welches eine der fruchtbarsten Gegenden in Africa bewohnte, und die Meeresküsten seines Landes als das Vaterland der Götter ansah. Uranus war ihr, erster König, Stifter ihres gesitteter Lebens und Erfinder vieler nützlichen Künste. Zugleich wird er auch als ein fleißiger und geschickter Himmelsforscher des Alterthums beschrieben... Noch mehr: Uranus war der Vater des Saturns und des Atlas, so wie der erstere der Vater des Jupiters.
151
+
152
+ Already in the treatise read before the local Natural History Society on 12 March 1782, I proposed the name of Saturn's father, namely Uranos, or Uranus, as it is more usual with the Latin ending, and I have since had the pleasure of seeing various astronomers and mathematicians adopt or approve this designation in their writings or in letters to me. In my view, this choice must follow the mythology from which the ancient names of the other planets were borrowed; for in the series of planets known so far, a name taken from a remarkable person or event of modern times would stand out conspicuously. Diodorus of Sicily tells the story of the Atlantes, an ancient people who inhabited one of the most fertile regions of Africa and regarded the sea coasts of their country as the homeland of the gods. Uranus was their first king, founder of their civilised life and inventor of many useful arts. At the same time he is also described as a diligent and skilful observer of the heavens in antiquity ... Moreover: Uranus was the father of Saturn and of Atlas, just as the former was the father of Jupiter.
153
+
154
en/4655.html.txt ADDED
@@ -0,0 +1,158 @@
1
+
2
+
3
+
4
+
5
+ Venus is the second planet from the Sun. It is named after the Roman goddess of love and beauty. As the second-brightest natural object in the night sky after the Moon, Venus can cast shadows and can be, on rare occasion, visible to the naked eye in broad daylight.[15][16] Venus lies within Earth's orbit, and so never appears to venture far from the Sun, either setting in the west just after dusk or rising in the east a bit before dawn. Venus orbits the Sun every 224.7 Earth days.[17] With a rotation period of 243 Earth days, it takes longer to rotate about its axis than any other planet in the Solar System and does so in the opposite direction to all but Uranus (meaning the Sun rises in the west and sets in the east).[18] Venus does not have any moons, a distinction it shares only with Mercury among planets in the Solar System.[19]
6
+
7
+ Venus is a terrestrial planet and is sometimes called Earth's "sister planet" because of their similar size, mass, proximity to the Sun, and bulk composition. It is radically different from Earth in other respects. It has the densest atmosphere of the four terrestrial planets, consisting of more than 96% carbon dioxide. The atmospheric pressure at the planet's surface is 92 times that of Earth, or roughly the pressure found 900 m (3,000 ft) underwater on Earth. Venus is by far the hottest planet in the Solar System, with a mean surface temperature of 737 K (464 °C; 867 °F), even though Mercury is closer to the Sun. Venus is shrouded by an opaque layer of highly reflective clouds of sulfuric acid, preventing its surface from being seen from space in visible light. It may have had water oceans in the past,[20][21] but these would have vaporized as the temperature rose due to a runaway greenhouse effect.[22] The water has probably photodissociated, and the free hydrogen has been swept into interplanetary space by the solar wind because of the lack of a planetary magnetic field.[23] Venus's surface is a dry desertscape interspersed with slab-like rocks and is periodically resurfaced by volcanism.
8
+
9
+ As one of the brightest objects in the sky, Venus has been a major fixture in human culture for as long as records have existed. It has been made sacred to gods of many cultures, and has been a prime inspiration for writers and poets as the morning star and evening star. Venus was the first planet to have its motions plotted across the sky, as early as the second millennium BC.[24]
10
+
11
+ As the planet with the closest approach to Earth, Venus has been a prime target for early interplanetary exploration. It was the first planet beyond Earth visited by a spacecraft (Mariner 2 in 1962), and the first to be successfully landed on (by Venera 7 in 1970). Venus's thick clouds render observation of its surface impossible in visible light, and the first detailed maps did not emerge until the arrival of the Magellan orbiter in 1991. Plans have been proposed for rovers or more complex missions, but they are hindered by Venus's hostile surface conditions.
12
+
13
+ In January 2020, astronomers reported evidence that suggests that Venus is currently volcanically active.[25][26]
14
+
15
+ Venus is one of the four terrestrial planets in the Solar System, meaning that it is a rocky body like Earth. It is similar to Earth in size and mass, and is often described as Earth's "sister" or "twin".[27] The diameter of Venus is 12,103.6 km (7,520.8 mi)—only 638.4 km (396.7 mi) less than Earth's—and its mass is 81.5% of Earth's. Conditions on the Venusian surface differ radically from those on Earth because its dense atmosphere is 96.5% carbon dioxide, with most of the remaining 3.5% being nitrogen.[28]
16
+
17
+ The Venusian surface was a subject of speculation until some of its secrets were revealed by planetary science in the 20th century. Venera landers in 1975 and 1982 returned images of a surface covered in sediment and relatively angular rocks.[29] The surface was mapped in detail by Magellan in 1990–91. The ground shows evidence of extensive volcanism, and the sulfur in the atmosphere may indicate that there have been recent eruptions.[30][31]
18
+
19
+ About 80% of the Venusian surface is covered by smooth, volcanic plains, consisting of 70% plains with wrinkle ridges and 10% smooth or lobate plains.[32] Two highland "continents" make up the rest of its surface area, one lying in the planet's northern hemisphere and the other just south of the equator. The northern continent is called Ishtar Terra after Ishtar, the Babylonian goddess of love, and is about the size of Australia. Maxwell Montes, the highest mountain on Venus, lies on Ishtar Terra. Its peak is 11 km (7 mi) above the Venusian average surface elevation.[33] The southern continent is called Aphrodite Terra, after the Greek goddess of love, and is the larger of the two highland regions at roughly the size of South America. A network of fractures and faults covers much of this area.[34]
20
+
21
+ The absence of evidence of lava flow accompanying any of the visible calderas remains an enigma. The planet has few impact craters, demonstrating that the surface is relatively young, approximately 300–600 million years old.[35][36] Venus has some unique surface features in addition to the impact craters, mountains, and valleys commonly found on rocky planets. Among these are flat-topped volcanic features called "farra", which look somewhat like pancakes and range in size from 20 to 50 km (12 to 31 mi) across, and from 100 to 1,000 m (330 to 3,280 ft) high; radial, star-like fracture systems called "novae"; features with both radial and concentric fractures resembling spider webs, known as "arachnoids"; and "coronae", circular rings of fractures sometimes surrounded by a depression. These features are volcanic in origin.[37]
22
+
23
+ Most Venusian surface features are named after historical and mythological women.[38] Exceptions are Maxwell Montes, named after James Clerk Maxwell, and highland regions Alpha Regio, Beta Regio, and Ovda Regio. The last three features were named before the current system was adopted by the International Astronomical Union, the body which oversees planetary nomenclature.[39]
24
+
25
+ The longitudes of physical features on Venus are expressed relative to its prime meridian. The original prime meridian passed through the radar-bright spot at the centre of the oval feature Eve, located south of Alpha Regio.[40] After the Venera missions were completed, the prime meridian was redefined to pass through the central peak in the crater Ariadne.[41][42]
26
+
27
+ Much of the Venusian surface appears to have been shaped by volcanic activity. Venus has several times as many volcanoes as Earth, and it has 167 large volcanoes that are over 100 km (60 mi) across. The only volcanic complex of this size on Earth is the Big Island of Hawaii.[37]:154 This is not because Venus is more volcanically active than Earth, but because its crust is older. Earth's oceanic crust is continually recycled by subduction at the boundaries of tectonic plates, and has an average age of about 100 million years,[43] whereas the Venusian surface is estimated to be 300–600 million years old.[35][37]
28
+
29
+ Several lines of evidence point to ongoing volcanic activity on Venus. Sulphur dioxide concentrations in the atmosphere dropped by a factor of 10 between 1978 and 1986, jumped in 2006, and again declined 10-fold.[44] This may mean that levels had been boosted several times by large volcanic eruptions.[45][46] It has also been suggested that Venusian lightning (discussed below) could originate from volcanic activity (i.e. volcanic lightning). In January 2020, astronomers reported evidence that suggests that Venus is currently volcanically active.[25][26]
30
+
31
+ In 2008 and 2009, the first direct evidence for ongoing volcanism was observed by Venus Express, in the form of four transient localized infrared hot spots within the rift zone Ganis Chasma,[47][n 1] near the shield volcano Maat Mons. Three of the spots were observed in more than one successive orbit. These spots are thought to represent lava freshly released by volcanic eruptions.[48][49] The actual temperatures are not known, because the size of the hot spots could not be measured, but are likely to have been in the 800–1,100 K (527–827 °C; 980–1,520 °F) range, relative to a normal temperature of 740 K (467 °C; 872 °F).[50]
32
+
33
+ Almost a thousand impact craters on Venus are evenly distributed across its surface. On other cratered bodies, such as Earth and the Moon, craters show a range of states of degradation. On the Moon, degradation is caused by subsequent impacts, whereas on Earth it is caused by wind and rain erosion. On Venus, about 85% of the craters are in pristine condition. The number of craters, together with their well-preserved condition, indicates the planet underwent a global resurfacing event about 300–600 million years ago,[35][36] followed by a decay in volcanism.[51] Whereas Earth's crust is in continuous motion, Venus is thought to be unable to sustain such a process. Without plate tectonics to dissipate heat from its mantle, Venus instead undergoes a cyclical process in which mantle temperatures rise until they reach a critical level that weakens the crust. Then, over a period of about 100 million years, subduction occurs on an enormous scale, completely recycling the crust.[37]
34
+
35
+ Venusian craters range from 3 to 280 km (2 to 174 mi) in diameter. No craters are smaller than 3 km, because of the effects of the dense atmosphere on incoming objects. Objects with less than a certain kinetic energy are slowed so much by the atmosphere that they do not create an impact crater.[52] Incoming projectiles less than 50 m (160 ft) in diameter will fragment and burn up in the atmosphere before reaching the ground.[53]
36
+
37
+ The stratigraphically oldest tessera terrains have consistently lower thermal emissivity than the surrounding basaltic plains measured by Venus Express and Magellan, indicating a different, possibly more felsic, mineral assemblage.[20][54] The mechanisms that generate large amounts of felsic crust usually require the presence of a water ocean and plate tectonics, implying that habitable conditions existed on early Venus. However, the nature of the tessera terrains is far from certain.[55]
38
+
39
+ Without seismic data or knowledge of its moment of inertia, little direct information is available about the internal structure and geochemistry of Venus.[56] The similarity in size and density between Venus and Earth suggests they share a similar internal structure: a core, mantle, and crust. Like that of Earth, the Venusian core is at least partially liquid because the two planets have been cooling at about the same rate.[57] The slightly smaller size of Venus means pressures are 24% lower in its deep interior than Earth's.[58] The principal difference between the two planets is the lack of evidence for plate tectonics on Venus, possibly because its crust is too strong to subduct without water to make it less viscous. This results in reduced heat loss from the planet, preventing it from cooling and providing a likely explanation for its lack of an internally generated magnetic field.[59]
40
+ Instead, Venus may lose its internal heat in periodic major resurfacing events.[35]
41
+
42
+ Venus has an extremely dense atmosphere composed of 96.5% carbon dioxide, 3.5% nitrogen, and traces of other gases including sulfur dioxide.[60] The mass of its atmosphere is 93 times that of Earth's, and the pressure at its surface is about 92 times that at Earth's surface—a pressure equivalent to that at a depth of nearly 1 km (5⁄8 mi) under Earth's oceans. The density at the surface is 65 kg/m3, 6.5% that of water, or 50 times as dense as Earth's atmosphere at 293 K (20 °C; 68 °F) at sea level. The CO2-rich atmosphere generates the strongest greenhouse effect in the Solar System, creating surface temperatures of at least 735 K (462 °C; 864 °F).[17][61] This makes Venus's surface hotter than Mercury's, which has a minimum surface temperature of 53 K (−220 °C; −364 °F) and a maximum surface temperature of 700 K (427 °C; 801 °F),[62][63] even though Venus is nearly twice Mercury's distance from the Sun and thus receives only about 25% of Mercury's solar irradiance. Venus's surface temperature is higher than temperatures used for sterilization.
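+
+ As a rough cross-check of these figures, the following minimal Python sketch recomputes the ocean-depth equivalence and the density comparisons; the seawater density, Earth surface gravity, and Earth air density used here are assumed illustrative values, not figures from the text:
+
+ # Sketch: sanity-checking the quoted Venusian surface pressure and density (assumed constants).
+ venus_surface_pressure = 9.2e6          # Pa, roughly 92 times Earth's ~101 kPa
+ water_density = 1.0e3                   # kg/m^3, assumed for seawater
+ g_earth = 9.81                          # m/s^2, assumed
+ equivalent_depth_m = venus_surface_pressure / (water_density * g_earth)
+ print(f"Equivalent ocean depth on Earth: {equivalent_depth_m:.0f} m")      # ~940 m, i.e. nearly 1 km
+
+ venus_surface_air_density = 65.0        # kg/m^3, from the text
+ earth_air_density = 1.2                 # kg/m^3 at ~293 K and sea level, assumed
+ print(f"Fraction of water density: {venus_surface_air_density / water_density:.1%}")        # 6.5%
+ print(f"Times denser than Earth air: {venus_surface_air_density / earth_air_density:.0f}")  # ~54, of the order of the quoted 50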
43
+
44
+ Venus's atmosphere is extremely enriched in primordial noble gases compared to that of Earth.[64] This enrichment indicates an early evolutionary divergence from Earth. An unusually large comet impact[65] or the accretion of a more massive primary atmosphere from the solar nebula[66] has been proposed to explain the enrichment. However, the atmosphere is also depleted in radiogenic argon, a proxy for mantle degassing, suggesting an early shutdown of major magmatism.[67][68]
45
+
46
+ Studies have suggested that billions of years ago, Venus's atmosphere could have been much more like the one surrounding Earth, and that there may have been substantial quantities of liquid water on the surface, but after a period of 600 million to several billion years,[69] a runaway greenhouse effect was caused by the evaporation of that original water, which generated a critical level of greenhouse gases in its atmosphere.[70] Although the surface conditions on Venus are no longer hospitable to any Earth-like life that may have formed before this event, there is speculation on the possibility that life exists in the upper cloud layers of Venus, 50 km (30 mi) up from the surface, where the temperature ranges between 303 and 353 K (30 and 80 °C; 86 and 176 °F) but the environment is acidic.[71][72][73]
47
+
48
+ Thermal inertia and the transfer of heat by winds in the lower atmosphere mean that the temperature of Venus's surface does not vary significantly between the planet's two hemispheres, those facing and not facing the Sun, despite Venus's extremely slow rotation. Winds at the surface are slow, moving at a few kilometres per hour, but because of the high density of the atmosphere at the surface, they exert a significant amount of force against obstructions, and transport dust and small stones across the surface. This alone would make it difficult for a human to walk through, even without the heat, pressure, and lack of oxygen.[74]
49
+
50
+ Above the dense CO2 layer are thick clouds consisting mainly of sulfuric acid, which is formed by sulfur dioxide and water through a chemical reaction resulting in sulfuric acid hydrate. Additionally, the atmosphere consists of approximately 1% ferric chloride.[75][76] Other possible constituents of the cloud particles are ferric sulfate, aluminium chloride and phosphoric anhydride. Clouds at different levels have different compositions and particle size distributions.[75] These clouds reflect and scatter about 90% of the sunlight that falls on them back into space, and prevent visual observation of Venus's surface. The permanent cloud cover means that although Venus is closer than Earth to the Sun, it receives less sunlight on the ground. Strong 300 km/h (185 mph) winds at the cloud tops go around Venus about every four to five Earth days.[77] Winds on Venus move at up to 60 times the speed of its rotation, whereas Earth's fastest winds are only 10–20% of its rotation speed.[78]
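+
+ The quoted cloud-top circulation time and the "60 times the rotation speed" comparison can be checked with a short sketch; the equatorial radius of Venus used below is an assumed value, not stated in this paragraph:
+
+ import math
+
+ venus_radius_km = 6051.8                    # assumed equatorial radius of Venus
+ circumference_km = 2 * math.pi * venus_radius_km
+ cloud_top_wind_kmh = 300.0                  # cloud-top wind speed from the text
+ circulation_days = circumference_km / cloud_top_wind_kmh / 24
+ print(f"Time for cloud-top winds to circle Venus: {circulation_days:.1f} days")          # ~5.3 days
+
+ equator_rotation_kmh = 6.52                 # surface rotation speed quoted later in the article
+ print(f"Wind speed / rotation speed: {cloud_top_wind_kmh / equator_rotation_kmh:.0f}")   # ~46, same order as the quoted "up to 60"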
51
+
52
+ The surface of Venus is effectively isothermal; it retains a constant temperature not only between the two hemispheres but between the equator and the poles.[5][79] Venus's minute axial tilt—less than 3°, compared to 23° on Earth—also minimises seasonal temperature variation.[80] Altitude is one of the few factors that affect Venusian temperature. The highest point on Venus, Maxwell Montes, is therefore the coolest point on Venus, with a temperature of about 655 K (380 °C; 715 °F) and an atmospheric pressure of about 4.5 MPa (45 bar).[81][82] In 1995, the Magellan spacecraft imaged a highly reflective substance at the tops of the highest mountain peaks that bore a strong resemblance to terrestrial snow. This substance likely formed from a similar process to snow, albeit at a far higher temperature. Too volatile to condense on the surface, it rose in gaseous form to higher elevations, where it is cooler and could precipitate. The identity of this substance is not known with certainty, but speculation has ranged from elemental tellurium to lead sulfide (galena).[83]
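+
+ The altitude dependence can be illustrated with a crude lapse-rate estimate; the mean surface temperature and the summit height of Maxwell Montes are taken from elsewhere in the article, and treating the temperature drop as linear is an assumption:
+
+ mean_surface_temp_K = 737.0        # mean surface temperature quoted elsewhere in the article
+ maxwell_summit_temp_K = 655.0      # summit temperature from the text
+ maxwell_height_km = 11.0           # summit elevation above the mean surface level
+ lapse_rate_K_per_km = (mean_surface_temp_K - maxwell_summit_temp_K) / maxwell_height_km
+ print(f"Implied near-surface lapse rate: {lapse_rate_K_per_km:.1f} K per km")   # ~7.5 K/km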
53
+
54
+ Although Venus has no seasons as such, in 2019 astronomers identified a cyclical variation in sunlight absorption by the atmosphere, possibly caused by opaque, absorbing particles suspended in the upper clouds. The variation causes observed changes in the speed of Venus's zonal winds, and appears to rise and fall in time with the Sun's 11-year sunspot cycle.[84]
55
+
56
+ The existence of lightning in the atmosphere of Venus has been controversial[85] since the first suspected bursts were detected by the Soviet Venera probes.[86][87][88] In 2006–07, Venus Express clearly detected whistler mode waves, the signatures of lightning. Their intermittent appearance indicates a pattern associated with weather activity. According to these measurements, the lightning rate is at least half of that on Earth;[89] however, other instruments have not detected lightning at all.[85] The origin of any lightning remains unclear, but it could originate from the clouds or from volcanoes.
57
+
58
+ In 2007, Venus Express discovered that a huge double atmospheric vortex exists at the south pole.[90][91] Venus Express also discovered, in 2011, that an ozone layer exists high in the atmosphere of Venus.[92] On 29 January 2013, ESA scientists reported that the ionosphere of Venus streams outwards in a manner similar to "the ion tail seen streaming from a comet under similar conditions."[93][94]
59
+
60
+ In December 2015, and to a lesser extent in April and May 2016, researchers working on Japan's Akatsuki mission observed bow shapes in the atmosphere of Venus. This was considered direct evidence of the existence of perhaps the largest stationary gravity waves in the solar system.[95][96][97]
61
+
62
+ In 1967, Venera 4 found Venus's magnetic field to be much weaker than that of Earth. This magnetic field is induced by an interaction between the ionosphere and the solar wind,[100][101] rather than by an internal dynamo as in the Earth's core. Venus's small induced magnetosphere provides negligible protection to the atmosphere against cosmic radiation.
63
+
64
+ The lack of an intrinsic magnetic field at Venus was surprising, given that it is similar to Earth in size and was expected also to contain a dynamo at its core. A dynamo requires three things: a conducting liquid, rotation, and convection. The core is thought to be electrically conductive and, although its rotation is often thought to be too slow, simulations show it is adequate to produce a dynamo.[102][103] This implies that the dynamo is missing because of a lack of convection in Venus's core. On Earth, convection occurs in the liquid outer layer of the core because the bottom of the liquid layer is much higher in temperature than the top. On Venus, a global resurfacing event may have shut down plate tectonics and led to a reduced heat flux through the crust. This would cause the mantle temperature to increase, thereby reducing the heat flux out of the core. As a result, no internal geodynamo is available to drive a magnetic field. Instead, the heat from the core is being used to reheat the crust.[104]
65
+
66
+ One possibility is that Venus has no solid inner core,[105] or that its core is not cooling, so that the entire liquid part of the core is at approximately the same temperature. Another possibility is that its core has already completely solidified. The state of the core is highly dependent on the concentration of sulfur, which is unknown at present.[104]
67
+
68
+ The weak magnetosphere around Venus means that the solar wind is interacting directly with its outer atmosphere. Here, ions of hydrogen and oxygen are being created by the dissociation of neutral molecules from ultraviolet radiation. The solar wind then supplies energy that gives some of these ions sufficient velocity to escape Venus's gravity field. This erosion process results in a steady loss of low-mass hydrogen, helium, and oxygen ions, whereas higher-mass molecules, such as carbon dioxide, are more likely to be retained. Atmospheric erosion by the solar wind probably led to the loss of most of Venus's water during the first billion years after it formed.[106] The erosion has increased the ratio of higher-mass deuterium to lower-mass hydrogen in the atmosphere 100 times compared to the rest of the solar system.[107]
69
+
70
+ Venus orbits the Sun at an average distance of about 0.72 AU (108 million km; 67 million mi), and completes an orbit every 224.7 days. Although all planetary orbits are elliptical, Venus's orbit is the closest to circular, with an eccentricity of less than 0.01.[5] When Venus lies between Earth and the Sun in inferior conjunction, it makes the closest approach to Earth of any planet at an average distance of 41 million km (25 million mi).[5] However, it spends a large amount of its time away from Earth, meaning that it is the closest planet to Earth for only a minority of the time. This means that Mercury is actually the planet that is closest to Earth a plurality of the time.[108] The planet reaches inferior conjunction every 584 days, on average.[5] Because of the decreasing eccentricity of Earth's orbit, the minimum distances will become greater over tens of thousands of years. From the year 1 to 5383, there are 526 approaches less than 40 million km; then there are none for about 60,158 years.[109]
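+
+ The ~584-day interval between inferior conjunctions is the synodic period of Venus as seen from Earth; a minimal sketch of the standard two-body formula, in which Earth's sidereal year is an assumed input:
+
+ venus_year_days = 224.70      # orbital period of Venus, from the text
+ earth_year_days = 365.256     # assumed sidereal year of Earth
+ # For an inferior planet, 1/S = 1/T_inner - 1/T_outer
+ synodic_period_days = 1 / (1 / venus_year_days - 1 / earth_year_days)
+ print(f"Synodic period: {synodic_period_days:.1f} days")   # ~583.9 days, matching the quoted ~584-day interval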
71
+
72
+ All the planets in the Solar System orbit the Sun in an anticlockwise direction as viewed from above Earth's north pole. Most planets also rotate on their axes in an anti-clockwise direction, but Venus rotates clockwise in retrograde rotation once every 243 Earth days—the slowest rotation of any planet. Because its rotation is so slow, Venus is very close to spherical.[110] A Venusian sidereal day thus lasts longer than a Venusian year (243 versus 224.7 Earth days). Venus's equator rotates at 6.52 km/h (4.05 mph), whereas Earth's rotates at 1,674.4 km/h (1,040.4 mph).[114][115] Venus's rotation has slowed in the 16 years between the Magellan spacecraft and Venus Express visits; each Venusian sidereal day has increased by 6.5 minutes in that time span.[116] Because of the retrograde rotation, the length of a solar day on Venus is significantly shorter than the sidereal day, at 116.75 Earth days (making the Venusian solar day shorter than Mercury's 176 Earth days).[117] One Venusian year is about 1.92 Venusian solar days.[118] To an observer on the surface of Venus, the Sun would rise in the west and set in the east,[118] although Venus's opaque clouds prevent observing the Sun from the planet's surface.[119]
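+
+ The 116.75-day solar day follows directly from the figures in this paragraph: because the rotation is retrograde, the spin rate and the orbital rate add rather than subtract. A minimal sketch:
+
+ sidereal_day_days = 243.0     # retrograde sidereal rotation period, from the text
+ venus_year_days = 224.7       # orbital period, from the text
+ # Retrograde rotation: the apparent solar-day rate is the sum of the spin and orbital rates.
+ solar_day_days = 1 / (1 / sidereal_day_days + 1 / venus_year_days)
+ print(f"Venusian solar day: {solar_day_days:.2f} Earth days")                    # ~116.75
+ print(f"Venusian year in solar days: {venus_year_days / solar_day_days:.2f}")   # ~1.92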
73
+
74
+ Venus may have formed from the solar nebula with a different rotation period and obliquity, reaching its current state because of chaotic spin changes caused by planetary perturbations and tidal effects on its dense atmosphere, a change that would have occurred over the course of billions of years. The rotation period of Venus may represent an equilibrium state between tidal locking to the Sun's gravitation, which tends to slow rotation, and an atmospheric tide created by solar heating of the thick Venusian atmosphere.[120][121]
75
+ The 584-day average interval between successive close approaches to Earth is almost exactly equal to 5 Venusian solar days (5.001444 to be precise),[122] but the hypothesis of a spin-orbit resonance with Earth has been discounted.[123]
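+
+ The near-commensurability quoted above can be checked in one line; the 583.92-day value used here is an assumed refinement of the ~584-day average synodic period:
+
+ print(f"{583.92 / 116.75:.6f}")   # ~5.0015 Venusian solar days per synodic period, close to the quoted 5.001444 (difference is input rounding)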
76
+
77
+ Venus has no natural satellites.[124] It has several trojan asteroids: the quasi-satellite 2002 VE68[125][126] and two other temporary trojans, 2001 CK32 and 2012 XE133.[127] In the 17th century, Giovanni Cassini reported a moon orbiting Venus, which was named Neith; numerous sightings were reported over the following 200 years, but most were determined to be stars in the vicinity. Alex Alemi's and David Stevenson's 2006 study of models of the early Solar System at the California Institute of Technology shows Venus likely had at least one moon created by a huge impact event billions of years ago.[128] About 10 million years later, according to the study, another impact reversed the planet's spin direction and caused the Venusian moon gradually to spiral inward until it collided with Venus.[129] If later impacts created moons, these were removed in the same way. An alternative explanation for the lack of satellites is the effect of strong solar tides, which can destabilize large satellites orbiting the inner terrestrial planets.[124]
78
+
79
+ To the naked eye, Venus appears as a white point of light brighter than any other planet or star (apart from the Sun).[130] The planet's mean apparent magnitude is −4.14 with a standard deviation of 0.31.[14] The brightest magnitude occurs during crescent phase about one month before or after inferior conjunction. Venus fades to about magnitude −3 when it is backlit by the Sun.[131] The planet is bright enough to be seen in a clear midday sky[132] and is more easily visible when the Sun is low on the horizon or setting. As an inferior planet, it always lies within about 47° of the Sun.[133]
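+
+ On the astronomical magnitude scale, a difference of Δm corresponds to a brightness ratio of 10^(0.4·Δm); a minimal sketch using the two magnitudes quoted above:
+
+ mean_magnitude = -4.14     # mean apparent magnitude, from the text
+ faint_magnitude = -3.0     # approximate magnitude when backlit by the Sun
+ brightness_ratio = 10 ** (0.4 * (faint_magnitude - mean_magnitude))
+ print(f"Brightness ratio: {brightness_ratio:.1f}")   # ~2.9x brighter at magnitude -4.14 than at -3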
80
+
81
+ Venus "overtakes" Earth every 584 days as it orbits the Sun.[5] As it does so, it changes from the "Evening Star", visible after sunset, to the "Morning Star", visible before sunrise. Although Mercury, the other inferior planet, reaches a maximum elongation of only 28° and is often difficult to discern in twilight, Venus is hard to miss when it is at its brightest. Its greater maximum elongation means it is visible in dark skies long after sunset. As the brightest point-like object in the sky, Venus is a commonly misreported "unidentified flying object".
82
+
83
+ As it orbits the Sun, Venus displays phases like those of the Moon in a telescopic view. The planet appears as a small and "full" disc when it is on the opposite side of the Sun (at superior conjunction). Venus shows a larger disc and "quarter phase" at its maximum elongations from the Sun, and appears its brightest in the night sky. The planet presents a much larger thin "crescent" in telescopic views as it passes along the near side between Earth and the Sun. Venus displays its largest size and "new phase" when it is between Earth and the Sun (at inferior conjunction). Its atmosphere is visible through telescopes by the halo of sunlight refracted around it.[133]
84
+
85
+ The Venusian orbit is slightly inclined relative to Earth's orbit; thus, when the planet passes between Earth and the Sun, it usually does not cross the face of the Sun. Transits of Venus occur when the planet's inferior conjunction coincides with its presence in the plane of Earth's orbit. Transits of Venus occur in cycles of 243 years with the current pattern of transits being pairs of transits separated by eight years, at intervals of about 105.5 years or 121.5 years—a pattern first discovered in 1639 by the English astronomer Jeremiah Horrocks.[134]
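+
+ The quoted intervals do add up to the 243-year transit cycle, which a one-line check confirms:
+
+ print(sum([8, 105.5, 8, 121.5]))   # 243.0 years: pair gap, long gap, pair gap, long gap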
86
+
87
+ The latest pair was June 8, 2004 and June 5–6, 2012. The transit could be watched live from many online outlets or observed locally with the right equipment and conditions.[135]
88
+
89
+ The preceding pair of transits occurred in December 1874 and December 1882; the following pair will occur in December 2117 and December 2125.[136] The 1874 transit is the subject of the oldest film known, the 1874 Passage de Venus. Historically, transits of Venus were important, because they allowed astronomers to determine the size of the astronomical unit, and hence the size of the Solar System as shown by Horrocks in 1639.[137] Captain Cook's exploration of the east coast of Australia came after he had sailed to Tahiti in 1768 to observe a transit of Venus.[138][139]
90
+
91
+ The pentagram of Venus is the path that Venus makes as observed from Earth. Successive inferior conjunctions of Venus repeat very near a 13:8 ratio (Earth orbits 8 times for every 13 orbits of Venus), shifting 144° upon sequential inferior conjunctions. The 13:8 ratio is approximate. 8/13 is approximately 0.61538 while Venus orbits the Sun in 0.61519 years.[140]
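+
+ A short sketch of the near-resonance behind the pentagram pattern; Venus's orbital period in Earth years is taken from the text, and Earth's period is taken as exactly 1 year for simplicity:
+
+ venus_period_years = 0.61519     # Venus's orbital period in Earth years, from the text
+ print(f"8/13 = {8 / 13:.5f} vs {venus_period_years}")    # 0.61538 vs 0.61519: a close but inexact 13:8 ratio
+
+ synodic_years = 1 / (1 / venus_period_years - 1)         # synodic period, ~1.6 Earth years (Earth period taken as 1 year)
+ excess_deg = 360 * (synodic_years % 1)                   # Earth's travel beyond whole revolutions per conjunction (~215.5 deg)
+ print(f"Shift per inferior conjunction: {360 - excess_deg:.0f} deg")   # ~144 deg, as quoted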
92
+
93
+ Naked eye observations of Venus during daylight hours exist in several anecdotes and records. Astronomer Edmund Halley calculated its maximum naked eye brightness in 1716, when many Londoners were alarmed by its appearance in the daytime. French emperor Napoleon Bonaparte once witnessed a daytime apparition of the planet while at a reception in Luxembourg.[141] Another historical daytime observation of the planet took place during the inauguration of the American president Abraham Lincoln in Washington, D.C., on 4 March 1865.[142] Although naked eye visibility of Venus's phases is disputed, records exist of observations of its crescent.[143]
94
+
95
+ A long-standing mystery of Venus observations is the so-called ashen light—an apparent weak illumination of its dark side, seen when the planet is in the crescent phase. The first claimed observation of ashen light was made in 1643, but the existence of the illumination has never been reliably confirmed. Observers have speculated it may result from electrical activity in the Venusian atmosphere, but it could be illusory, resulting from the physiological effect of observing a bright, crescent-shaped object.[144][87]
96
+
97
+ Because the movements of Venus appear to be discontinuous (it disappears due to its proximity to the Sun for many days at a time, and then reappears on the other horizon), some cultures did not recognize Venus as a single entity;[145] instead, they assumed it to be two separate stars, one on each horizon: the morning star and the evening star.[145] Nonetheless, a cylinder seal from the Jemdet Nasr period and the Venus tablet of Ammisaduqa from the First Babylonian dynasty indicate that the ancient Sumerians already knew that the morning and evening stars were the same celestial object.[146][145][147] In the Old Babylonian period, the planet Venus was known as Ninsi'anna, and later as Dilbat.[148] The name "Ninsi'anna" translates to "divine lady, illumination of heaven", which refers to Venus as the brightest visible "star". Earlier spellings of the name were written with the cuneiform sign si4 (= SU, meaning "to be red"), and the original meaning may have been "divine lady of the redness of heaven", in reference to the colour of the morning and evening sky.[149]
98
+
99
+ The Chinese historically referred to the morning Venus as "the Great White" (Tài-bái 太白) or "the Opener (Starter) of Brightness" (Qǐ-míng 啟明), and the evening Venus as "the Excellent West One" (Cháng-gēng 長庚).[150]
100
+
101
+ The ancient Greeks also initially believed Venus to be two separate stars: Phosphorus, the morning star, and Hesperus, the evening star. Pliny the Elder credited the realization that they were a single object to Pythagoras in the sixth century BCE,[151] while Diogenes Laërtius argued that Parmenides was probably responsible for this rediscovery.[152] Though they recognized Venus as a single object, the ancient Romans continued to designate the morning aspect of Venus as Lucifer, literally "Light-Bringer", and the evening aspect as Vesper, both of which are literal translations of their traditional Greek names.
102
+
103
+ In the second century, in his astronomical treatise Almagest, Ptolemy theorized that both Mercury and Venus are located between the Sun and the Earth. The 11th-century Persian astronomer Avicenna claimed to have observed the transit of Venus,[153] which later astronomers took as confirmation of Ptolemy's theory.[154] In the 12th century, the Andalusian astronomer Ibn Bajjah observed "two planets as black spots on the face of the Sun"; these were thought to be the transits of Venus and Mercury by 13th-century Maragha astronomer Qotb al-Din Shirazi, though this cannot be true as there were no Venus transits in Ibn Bajjah's lifetime.[155][n 2]
104
+
105
+ When the Italian physicist Galileo Galilei first observed the planet in the early 17th century, he found it showed phases like the Moon, varying from crescent to gibbous to full and vice versa. When Venus is furthest from the Sun in the sky, it shows a half-lit phase, and when it is closest to the Sun in the sky, it shows as a crescent or full phase. This could be possible only if Venus orbited the Sun, and this was among the first observations to clearly contradict the Ptolemaic geocentric model that the Solar System was concentric and centred on Earth.[158][159]
106
+
107
+ The 1639 transit of Venus was accurately predicted by Jeremiah Horrocks and observed by him and his friend, William Crabtree, at each of their respective homes, on 4 December 1639 (24 November under the Julian calendar in use at that time).[160]
108
+
109
+ The atmosphere of Venus was discovered in 1761 by Russian polymath Mikhail Lomonosov.[161][162] Venus's atmosphere was observed in 1790 by German astronomer Johann Schröter. Schröter found that when the planet was a thin crescent, the cusps extended through more than 180°. He correctly surmised that this was due to scattering of sunlight in a dense atmosphere. Later, American astronomer Chester Smith Lyman observed a complete ring around the dark side of the planet when it was at inferior conjunction, providing further evidence for an atmosphere.[163] The atmosphere complicated efforts to determine a rotation period for the planet, and observers such as Italian-born astronomer Giovanni Cassini and Schröter incorrectly estimated periods of about 24 h from the motions of markings on the planet's apparent surface.[164]
110
+
111
+ Little more was discovered about Venus until the 20th century. Its almost featureless disc gave no hint of what its surface might be like, and it was only with the development of spectroscopic, radar and ultraviolet observations that more of its secrets were revealed. The first ultraviolet observations were carried out in the 1920s, when Frank E. Ross found that ultraviolet photographs revealed considerable detail that was absent in visible and infrared radiation. He suggested this was due to a dense, yellow lower atmosphere with high cirrus clouds above it.[165]
112
+
113
+ Spectroscopic observations in the 1900s gave the first clues about the Venusian rotation. Vesto Slipher tried to measure the Doppler shift of light from Venus, but found he could not detect any rotation. He surmised the planet must have a much longer rotation period than had previously been thought.[166] Later work in the 1950s showed the rotation was retrograde. Radar observations of Venus were first carried out in the 1960s, and provided the first measurements of the rotation period, which were close to the modern value.[167]
114
+
115
+ Radar observations in the 1970s revealed details of the Venusian surface for the first time. Pulses of radio waves were beamed at the planet using the 300 m (1,000 ft) radio telescope at Arecibo Observatory, and the echoes revealed two highly reflective regions, designated the Alpha and Beta regions. The observations also revealed a bright region attributed to mountains, which was called Maxwell Montes.[168] These three features are now the only ones on Venus that do not have female names.[39]
116
+
117
+ The first robotic space probe mission to Venus, and the first to any planet, began with the Soviet Venera program in 1961.[169] The United States' exploration of Venus had its first success on 14 December 1962, when Mariner 2 became the world's first successful interplanetary mission, passing 34,833 km (21,644 mi) above the surface of Venus and gathering data on the planet's atmosphere.[170][171]
118
+
119
+ On 18 October 1967, the Soviet Venera 4 successfully entered the atmosphere and deployed science experiments. Venera 4 showed the surface temperature was hotter than Mariner 2 had calculated, at almost 500 °C (932 °F), determined that the atmosphere was 95% carbon dioxide (CO2), and discovered that Venus's atmosphere was considerably denser than Venera 4's designers had anticipated.[172] The joint Venera 4–Mariner 5 data were analysed by a combined Soviet–American science team in a series of colloquia over the following year,[173] in an early example of space cooperation.[174]
120
+
121
+ In 1974, Mariner 10 swung by Venus on its way to Mercury and took ultraviolet photographs of the clouds, revealing the extraordinarily high wind speeds in the Venusian atmosphere.
122
+
123
+ In 1975, the Soviet Venera 9 and 10 landers transmitted the first images from the surface of Venus, which were in black and white. In 1982 the first colour images of the surface were obtained with the Soviet Venera 13 and 14 landers.
124
+
125
+ NASA obtained additional data in 1978 with the Pioneer Venus project, which consisted of two separate missions:[175] Pioneer Venus Orbiter and Pioneer Venus Multiprobe.[176] The successful Soviet Venera program came to a close in October 1983, when Venera 15 and 16 were placed in orbit to conduct detailed mapping of 25% of Venus's terrain (from the north pole to 30°N latitude).[177]
126
+
127
+ Several other Venus flybys took place in the 1980s, 1990s, and 2000s that increased the understanding of Venus, including Vega 1 (1985), Vega 2 (1985), Galileo (1990), Magellan (1994), Cassini–Huygens (1998), and MESSENGER (2006). Then, Venus Express by the European Space Agency (ESA) entered orbit around Venus in April 2006. Equipped with seven scientific instruments, Venus Express provided unprecedented long-term observation of Venus's atmosphere. ESA concluded that mission in December 2014.
128
+
129
+ As of 2016, Japan's Akatsuki has been in a highly elliptical orbit around Venus since 7 December 2015, and several probe proposals are under study by Roscosmos, NASA, and India's ISRO.
130
+
131
+ In 2016, the NASA Innovative Advanced Concepts program studied a rover, the Automaton Rover for Extreme Environments, designed to survive for an extended time in Venus's environmental conditions. It would be controlled by a mechanical computer and driven by wind power.[178]
132
+
133
+ Venus is a primary feature of the night sky, and so has been of remarkable importance in mythology, astrology and fiction throughout history and in different cultures.
134
+
135
+ In Sumerian religion, Inanna was associated with the planet Venus.[181][182] Several hymns praise Inanna in her role as the goddess of the planet Venus.[145][182][181] Theology professor Jeffrey Cooley has argued that, in many myths, Inanna's movements may correspond with the movements of the planet Venus in the sky.[145] The discontinuous movements of Venus relate to both mythology and Inanna's dual nature.[145] In Inanna's Descent to the Underworld, unlike any other deity, Inanna is able to descend into the netherworld and return to the heavens. The planet Venus appears to make a similar descent, setting in the West and then rising again in the East.[145] An introductory hymn describes Inanna leaving the heavens and heading for Kur, which can be presumed to be the mountains, replicating the rising and setting of Inanna to the West.[145] The myths Inanna and Shukaletuda and Inanna's Descent into the Underworld both appear to parallel the motion of the planet Venus.[145] In Inanna and Shukaletuda, Shukaletuda is described as scanning the heavens in search of Inanna, possibly searching the eastern and western horizons.[183] In the same myth, while searching for her attacker, Inanna herself makes several movements that correspond with the movements of Venus in the sky.[145]
136
+
137
+ Classical poets such as Homer, Sappho, Ovid and Virgil spoke of the star and its light.[184] Poets such as William Blake, Robert Frost, Letitia Elizabeth Landon, Alfred Lord Tennyson and William Wordsworth wrote odes to it.[185]
138
+
139
+ In Chinese the planet is called Jīn-xīng (金星), the golden planet of the metal element. In India it is called Shukra Graha ("the planet Shukra"), named after the powerful saint Shukra. The word Shukra, used in Indian Vedic astrology,[186] means "clear, pure" or "brightness, clearness" in Sanskrit. One of the nine Navagraha, it is held to affect wealth, pleasure and reproduction; it was the son of Bhrgu, preceptor of the Daityas, and guru of the Asuras.[187] The word Shukra is also associated with semen, or generation. Venus is known as Kejora in Indonesian and Malay. Modern Chinese, Japanese and Korean cultures refer to the planet literally as the "metal star" (金星), based on the Five elements.[188][189][190]
140
+
141
+ The Maya considered Venus to be the most important celestial body after the Sun and Moon. They called it Chac ek,[191] or Noh Ek', "the Great Star".[192] The cycles of Venus were important to their calendar.
142
+
143
+ The Ancient Egyptians and Greeks believed Venus to be two separate bodies, a morning star and an evening star. The Egyptians knew the morning star as Tioumoutiri and the evening star as Ouaiti.[193] The Greeks used the names Phōsphoros (Φωσϕόρος), meaning "light-bringer" (whence the element phosphorus; alternately Ēōsphoros (Ἠωςϕόρος), meaning "dawn-bringer"), for the morning star, and Hesperos (Ἕσπερος), meaning "Western one", for the evening star.[194] Though by the Roman era they were recognized as one celestial object, known as "the star of Venus", the traditional two Greek names continued to be used, though usually translated to Latin as Lūcifer[194][195] and Vesper.
144
+
145
+ With the invention of the telescope, the idea that Venus was a physical world and possible destination began to take form.
146
+
147
+ The impenetrable Venusian cloud cover gave science fiction writers free rein to speculate on conditions at its surface; all the more so when early observations showed that not only was it similar in size to Earth, it possessed a substantial atmosphere. Closer to the Sun than Earth, the planet was frequently depicted as warmer, but still habitable by humans.[196] The genre reached its peak between the 1930s and 1950s, at a time when science had revealed some aspects of Venus, but not yet the harsh reality of its surface conditions. Findings from the first missions to Venus showed the reality to be quite different, and brought this particular genre to an end.[197] As scientific knowledge of Venus advanced, science fiction authors tried to keep pace, particularly by conjecturing human attempts to terraform Venus.[198]
148
+
149
+ The astronomical symbol for Venus is the same as that used in biology for the female sex: a circle with a small cross beneath.[199][200] The Venus symbol also represents femininity, and in Western alchemy stood for the metal copper.[199][200] Polished copper has been used for mirrors from antiquity, and the symbol for Venus has sometimes been understood to stand for the mirror of the goddess, although that is not its true origin.[199][200]
150
+
151
+ Speculation about the existence of life on Venus has decreased significantly since the early 1960s, when spacecraft began studying Venus and it became clear that the conditions on Venus are extreme compared to those on Earth.
152
+
153
+ Venus's closeness to the Sun, which raises surface temperatures to nearly 735 K (462 °C; 863 °F), its atmospheric pressure of about 90 times that of Earth, and the extreme impact of the greenhouse effect make water-based life as currently known unlikely. A few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the lower-temperature, acidic upper layers of the Venusian atmosphere.[201][202][203] The atmospheric pressure and temperature fifty kilometres above the surface are similar to those at Earth's surface. This has led to proposals to use aerostats (lighter-than-air balloons) for initial exploration and ultimately for permanent "floating cities" in the Venusian atmosphere.[204] Among the many engineering challenges are the dangerous amounts of sulfuric acid at these heights.[204]
154
+
155
+ Nonetheless, in August 2019, astronomers reported that a newly discovered long-term pattern of absorbance and albedo changes in the atmosphere of Venus is caused by "unknown absorbers", which may be chemicals or even large colonies of microorganisms high up in the planet's atmosphere.[205][84]
156
+
157
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
158
+
en/4656.html.txt ADDED
@@ -0,0 +1,158 @@
1
+
2
+
3
+
4
+
5
+ Venus is the second planet from the Sun. It is named after the Roman goddess of love and beauty. As the second-brightest natural object in the night sky after the Moon, Venus can cast shadows and can be, on rare occasion, visible to the naked eye in broad daylight.[15][16] Venus lies within Earth's orbit, and so never appears to venture far from the Sun, either setting in the west just after dusk or rising in the east a bit before dawn. Venus orbits the Sun every 224.7 Earth days.[17] With a rotation period of 243 Earth days, it takes longer to rotate about its axis than any other planet in the Solar System and does so in the opposite direction to all but Uranus (meaning the Sun rises in the west and sets in the east).[18] Venus does not have any moons, a distinction it shares only with Mercury among planets in the Solar System.[19]
6
+
7
+ Venus is a terrestrial planet and is sometimes called Earth's "sister planet" because of their similar size, mass, proximity to the Sun, and bulk composition. It is radically different from Earth in other respects. It has the densest atmosphere of the four terrestrial planets, consisting of more than 96% carbon dioxide. The atmospheric pressure at the planet's surface is 92 times that of Earth, or roughly the pressure found 900 m (3,000 ft) underwater on Earth. Venus is by far the hottest planet in the Solar System, with a mean surface temperature of 737 K (464 °C; 867 °F), even though Mercury is closer to the Sun. Venus is shrouded by an opaque layer of highly reflective clouds of sulfuric acid, preventing its surface from being seen from space in visible light. It may have had water oceans in the past,[20][21] but these would have vaporized as the temperature rose due to a runaway greenhouse effect.[22] The water has probably photodissociated, and the free hydrogen has been swept into interplanetary space by the solar wind because of the lack of a planetary magnetic field.[23] Venus's surface is a dry desertscape interspersed with slab-like rocks and is periodically resurfaced by volcanism.
8
+
9
+ As one of the brightest objects in the sky, Venus has been a major fixture in human culture for as long as records have existed. It has been made sacred to gods of many cultures, and has been a prime inspiration for writers and poets as the morning star and evening star. Venus was the first planet to have its motions plotted across the sky, as early as the second millennium BC.[24]
10
+
11
+ As the planet with the closest approach to Earth, Venus has been a prime target for early interplanetary exploration. It was the first planet beyond Earth visited by a spacecraft (Mariner 2 in 1962), and the first to be successfully landed on (by Venera 7 in 1970). Venus's thick clouds render observation of its surface impossible in visible light, and the first detailed maps did not emerge until the arrival of the Magellan orbiter in 1991. Plans have been proposed for rovers or more complex missions, but they are hindered by Venus's hostile surface conditions.
12
+
13
+ In January 2020, astronomers reported evidence that suggests that Venus is currently volcanically active.[25][26]
14
+
15
+ Venus is one of the four terrestrial planets in the Solar System, meaning that it is a rocky body like Earth. It is similar to Earth in size and mass, and is often described as Earth's "sister" or "twin".[27] The diameter of Venus is 12,103.6 km (7,520.8 mi)—only 638.4 km (396.7 mi) less than Earth's—and its mass is 81.5% of Earth's. Conditions on the Venusian surface differ radically from those on Earth because its dense atmosphere is 96.5% carbon dioxide, with most of the remaining 3.5% being nitrogen.[28]
16
+
17
+ The Venusian surface was a subject of speculation until some of its secrets were revealed by planetary science in the 20th century. Venera landers in 1975 and 1982 returned images of a surface covered in sediment and relatively angular rocks.[29] The surface was mapped in detail by Magellan in 1990–91. The ground shows evidence of extensive volcanism, and the sulfur in the atmosphere may indicate that there have been recent eruptions.[30][31]
18
+
19
+ About 80% of the Venusian surface is covered by smooth, volcanic plains, consisting of 70% plains with wrinkle ridges and 10% smooth or lobate plains.[32] Two highland "continents" make up the rest of its surface area, one lying in the planet's northern hemisphere and the other just south of the equator. The northern continent is called Ishtar Terra after Ishtar, the Babylonian goddess of love, and is about the size of Australia. Maxwell Montes, the highest mountain on Venus, lies on Ishtar Terra. Its peak is 11 km (7 mi) above the Venusian average surface elevation.[33] The southern continent is called Aphrodite Terra, after the Greek goddess of love, and is the larger of the two highland regions at roughly the size of South America. A network of fractures and faults covers much of this area.[34]
20
+
21
+ The absence of evidence of lava flow accompanying any of the visible calderas remains an enigma. The planet has few impact craters, demonstrating that the surface is relatively young, approximately 300–600 million years old.[35][36] Venus has some unique surface features in addition to the impact craters, mountains, and valleys commonly found on rocky planets. Among these are flat-topped volcanic features called "farra", which look somewhat like pancakes and range in size from 20 to 50 km (12 to 31 mi) across, and from 100 to 1,000 m (330 to 3,280 ft) high; radial, star-like fracture systems called "novae"; features with both radial and concentric fractures resembling spider webs, known as "arachnoids"; and "coronae", circular rings of fractures sometimes surrounded by a depression. These features are volcanic in origin.[37]
22
+
23
+ Most Venusian surface features are named after historical and mythological women.[38] Exceptions are Maxwell Montes, named after James Clerk Maxwell, and highland regions Alpha Regio, Beta Regio, and Ovda Regio. The last three features were named before the current system was adopted by the International Astronomical Union, the body which oversees planetary nomenclature.[39]
24
+
25
+ The longitudes of physical features on Venus are expressed relative to its prime meridian. The original prime meridian passed through the radar-bright spot at the centre of the oval feature Eve, located south of Alpha Regio.[40] After the Venera missions were completed, the prime meridian was redefined to pass through the central peak in the crater Ariadne.[41][42]
26
+
27
+ Much of the Venusian surface appears to have been shaped by volcanic activity. Venus has several times as many volcanoes as Earth, and it has 167 large volcanoes that are over 100 km (60 mi) across. The only volcanic complex of this size on Earth is the Big Island of Hawaii.[37]:154 This is not because Venus is more volcanically active than Earth, but because its crust is older. Earth's oceanic crust is continually recycled by subduction at the boundaries of tectonic plates, and has an average age of about 100 million years,[43] whereas the Venusian surface is estimated to be 300–600 million years old.[35][37]
28
+
29
+ Several lines of evidence point to ongoing volcanic activity on Venus. Sulphur dioxide concentrations in the atmosphere dropped by a factor of 10 between 1978 and 1986, jumped in 2006, and again declined 10-fold.[44] This may mean that levels had been boosted several times by large volcanic eruptions.[45][46] It has also been suggested that Venusian lightning (discussed below) could originate from volcanic activity (i.e. volcanic lightning). In January 2020, astronomers reported evidence that suggests that Venus is currently volcanically active.[25][26]
30
+
31
+ In 2008 and 2009, the first direct evidence for ongoing volcanism was observed by Venus Express, in the form of four transient localized infrared hot spots within the rift zone Ganis Chasma,[47][n 1] near the shield volcano Maat Mons. Three of the spots were observed in more than one successive orbit. These spots are thought to represent lava freshly released by volcanic eruptions.[48][49] The actual temperatures are not known, because the size of the hot spots could not be measured, but are likely to have been in the 800–1,100 K (527–827 °C; 980–1,520 °F) range, relative to a normal temperature of 740 K (467 °C; 872 °F).[50]
32
+
33
+ Almost a thousand impact craters on Venus are evenly distributed across its surface. On other cratered bodies, such as Earth and the Moon, craters show a range of states of degradation. On the Moon, degradation is caused by subsequent impacts, whereas on Earth it is caused by wind and rain erosion. On Venus, about 85% of the craters are in pristine condition. The number of craters, together with their well-preserved condition, indicates the planet underwent a global resurfacing event about 300–600 million years ago,[35][36] followed by a decay in volcanism.[51] Whereas Earth's crust is in continuous motion, Venus is thought to be unable to sustain such a process. Without plate tectonics to dissipate heat from its mantle, Venus instead undergoes a cyclical process in which mantle temperatures rise until they reach a critical level that weakens the crust. Then, over a period of about 100 million years, subduction occurs on an enormous scale, completely recycling the crust.[37]
34
+
35
+ Venusian craters range from 3 to 280 km (2 to 174 mi) in diameter. No craters are smaller than 3 km, because of the effects of the dense atmosphere on incoming objects. Objects with less than a certain kinetic energy are slowed so much by the atmosphere that they do not create an impact crater.[52] Incoming projectiles less than 50 m (160 ft) in diameter will fragment and burn up in the atmosphere before reaching the ground.[53]
36
+
37
+ The stratigraphically oldest tessera terrains have consistently lower thermal emissivity than the surrounding basaltic plains measured by Venus Express and Magellan, indicating a different, possibly more felsic, mineral assemblage.[20][54] The mechanisms that generate large amounts of felsic crust usually require the presence of a water ocean and plate tectonics, implying that habitable conditions existed on early Venus. However, the nature of the tessera terrains is far from certain.[55]
38
+
39
+ Without seismic data or knowledge of its moment of inertia, little direct information is available about the internal structure and geochemistry of Venus.[56] The similarity in size and density between Venus and Earth suggests they share a similar internal structure: a core, mantle, and crust. Like that of Earth, the Venusian core is at least partially liquid because the two planets have been cooling at about the same rate.[57] The slightly smaller size of Venus means pressures are 24% lower in its deep interior than Earth's.[58] The principal difference between the two planets is the lack of evidence for plate tectonics on Venus, possibly because its crust is too strong to subduct without water to make it less viscous. This results in reduced heat loss from the planet, preventing it from cooling and providing a likely explanation for its lack of an internally generated magnetic field.[59]
40
+ Instead, Venus may lose its internal heat in periodic major resurfacing events.[35]
41
+
42
+ Venus has an extremely dense atmosphere composed of 96.5% carbon dioxide, 3.5% nitrogen, and traces of other gases including sulfur dioxide.[60] The mass of its atmosphere is 93 times that of Earth's, and the pressure at its surface is about 92 times that at Earth's surface—a pressure equivalent to that at a depth of nearly 1 km (5⁄8 mi) under Earth's oceans. The density at the surface is 65 kg/m3, 6.5% that of water, or 50 times as dense as Earth's atmosphere at 293 K (20 °C; 68 °F) at sea level. The CO2-rich atmosphere generates the strongest greenhouse effect in the Solar System, creating surface temperatures of at least 735 K (462 °C; 864 °F).[17][61] This makes Venus's surface hotter than Mercury's, which has a minimum surface temperature of 53 K (−220 °C; −364 °F) and a maximum surface temperature of 700 K (427 °C; 801 °F),[62][63] even though Venus is nearly twice Mercury's distance from the Sun and thus receives only about 25% of Mercury's solar irradiance. Venus's surface temperature is higher than temperatures used for sterilization.
43
+
44
+ Venus's atmosphere is extremely enriched in primordial noble gases compared to that of Earth.[64] This enrichment indicates an early evolutionary divergence from Earth. An unusually large comet impact[65] or the accretion of a more massive primary atmosphere from the solar nebula[66] has been proposed to explain the enrichment. However, the atmosphere is also depleted in radiogenic argon, a proxy for mantle degassing, suggesting an early shutdown of major magmatism.[67][68]
45
+
46
+ Studies have suggested that billions of years ago, Venus's atmosphere could have been much more like the one surrounding Earth, and that there may have been substantial quantities of liquid water on the surface, but after a period of 600 million to several billion years,[69] a runaway greenhouse effect was caused by the evaporation of that original water, which generated a critical level of greenhouse gases in its atmosphere.[70] Although the surface conditions on Venus are no longer hospitable to any Earth-like life that may have formed before this event, there is speculation on the possibility that life exists in the upper cloud layers of Venus, 50 km (30 mi) up from the surface, where the temperature ranges between 303 and 353 K (30 and 80 °C; 86 and 176 °F) but the environment is acidic.[71][72][73]
47
+
48
+ Thermal inertia and the transfer of heat by winds in the lower atmosphere mean that the temperature of Venus's surface does not vary significantly between the planet's two hemispheres, those facing and not facing the Sun, despite Venus's extremely slow rotation. Winds at the surface are slow, moving at a few kilometres per hour, but because of the high density of the atmosphere at the surface, they exert a significant amount of force against obstructions, and transport dust and small stones across the surface. This alone would make it difficult for a human to walk through, even without the heat, pressure, and lack of oxygen.[74]
49
+
50
+ Above the dense CO2 layer are thick clouds consisting mainly of sulfuric acid, which is formed by sulfur dioxide and water through a chemical reaction resulting in sulfuric acid hydrate. Additionally, the atmosphere consists of approximately 1% ferric chloride.[75][76] Other possible constituents of the cloud particles are ferric sulfate, aluminium chloride and phosphoric anhydride. Clouds at different levels have different compositions and particle size distributions.[75] These clouds reflect and scatter about 90% of the sunlight that falls on them back into space, and prevent visual observation of Venus's surface. The permanent cloud cover means that although Venus is closer than Earth to the Sun, it receives less sunlight on the ground. Strong 300 km/h (185 mph) winds at the cloud tops go around Venus about every four to five Earth days.[77] Winds on Venus move at up to 60 times the speed of its rotation, whereas Earth's fastest winds are only 10–20% of its rotation speed.[78]
51
+
52
+ The surface of Venus is effectively isothermal; it retains a constant temperature not only between the two hemispheres but between the equator and the poles.[5][79] Venus's minute axial tilt—less than 3°, compared to 23° on Earth—also minimises seasonal temperature variation.[80] Altitude is one of the few factors that affect Venusian temperature. The highest point on Venus, Maxwell Montes, is therefore the coolest point on Venus, with a temperature of about 655 K (380 °C; 715 °F) and an atmospheric pressure of about 4.5 MPa (45 bar).[81][82] In 1995, the Magellan spacecraft imaged a highly reflective substance at the tops of the highest mountain peaks that bore a strong resemblance to terrestrial snow. This substance likely formed from a similar process to snow, albeit at a far higher temperature. Too volatile to condense on the surface, it rose in gaseous form to higher elevations, where it is cooler and could precipitate. The identity of this substance is not known with certainty, but speculation has ranged from elemental tellurium to lead sulfide (galena).[83]
53
+
54
+ Although Venus has no seasons as such, in 2019 astronomers identified a cyclical variation in sunlight absorption by the atmosphere, possibly caused by opaque, absorbing particles suspended in the upper clouds. The variation causes observed changes in the speed of Venus's zonal winds, and appears to rise and fall in time with the Sun's 11-year sunspot cycle.[84]
55
+
56
+ The existence of lightning in the atmosphere of Venus has been controversial[85] since the first suspected bursts were detected by the Soviet Venera probes.[86][87][88] In 2006–07, Venus Express clearly detected whistler mode waves, the signatures of lightning. Their intermittent appearance indicates a pattern associated with weather activity. According to these measurements, the lightning rate is at least half of that on Earth;[89] however, other instruments have not detected lightning at all.[85] The origin of any lightning remains unclear, but it could originate from the clouds or from volcanoes.
57
+
58
+ In 2007, Venus Express discovered that a huge double atmospheric vortex exists at the south pole.[90][91] Venus Express also discovered, in 2011, that an ozone layer exists high in the atmosphere of Venus.[92] On 29 January 2013, ESA scientists reported that the ionosphere of Venus streams outwards in a manner similar to "the ion tail seen streaming from a comet under similar conditions."[93][94]
59
+
60
+ In December 2015, and to a lesser extent in April and May 2016, researchers working on Japan's Akatsuki mission observed bow shapes in the atmosphere of Venus. This was considered direct evidence of the existence of perhaps the largest stationary gravity waves in the solar system.[95][96][97]
61
+
62
+ In 1967, Venera 4 found Venus's magnetic field to be much weaker than that of Earth. This magnetic field is induced by an interaction between the ionosphere and the solar wind,[100][101] rather than by an internal dynamo as in the Earth's core. Venus's small induced magnetosphere provides negligible protection to the atmosphere against cosmic radiation.
63
+
64
+ The lack of an intrinsic magnetic field at Venus was surprising, given that it is similar to Earth in size and was expected also to contain a dynamo at its core. A dynamo requires three things: a conducting liquid, rotation, and convection. The core is thought to be electrically conductive and, although its rotation is often thought to be too slow, simulations show it is adequate to produce a dynamo.[102][103] This implies that the dynamo is missing because of a lack of convection in Venus's core. On Earth, convection occurs in the liquid outer layer of the core because the bottom of the liquid layer is much higher in temperature than the top. On Venus, a global resurfacing event may have shut down plate tectonics and led to a reduced heat flux through the crust. This would cause the mantle temperature to increase, thereby reducing the heat flux out of the core. As a result, no internal geodynamo is available to drive a magnetic field. Instead, the heat from the core is being used to reheat the crust.[104]
65
+
66
+ One possibility is that Venus has no solid inner core,[105] or that its core is not cooling, so that the entire liquid part of the core is at approximately the same temperature. Another possibility is that its core has already completely solidified. The state of the core is highly dependent on the concentration of sulfur, which is unknown at present.[104]
67
+
68
+ The weak magnetosphere around Venus means that the solar wind is interacting directly with its outer atmosphere. Here, ions of hydrogen and oxygen are being created by the dissociation of neutral molecules from ultraviolet radiation. The solar wind then supplies energy that gives some of these ions sufficient velocity to escape Venus's gravity field. This erosion process results in a steady loss of low-mass hydrogen, helium, and oxygen ions, whereas higher-mass molecules, such as carbon dioxide, are more likely to be retained. Atmospheric erosion by the solar wind probably led to the loss of most of Venus's water during the first billion years after it formed.[106] The erosion has increased the ratio of higher-mass deuterium to lower-mass hydrogen in the atmosphere 100 times compared to the rest of the solar system.[107]
69
+
70
+ Venus orbits the Sun at an average distance of about 0.72 AU (108 million km; 67 million mi), and completes an orbit every 224.7 days. Although all planetary orbits are elliptical, Venus's orbit is the closest to circular, with an eccentricity of less than 0.01.[5] When Venus lies between Earth and the Sun in inferior conjunction, it makes the closest approach to Earth of any planet at an average distance of 41 million km (25 million mi).[5] However, it spends a large amount of its time away from Earth, meaning that it is the closest planet to Earth for only a minority of the time. This means that Mercury is actually the planet that is closest to Earth a plurality of the time.[108] The planet reaches inferior conjunction every 584 days, on average.[5] Because of the decreasing eccentricity of Earth's orbit, the minimum distances will become greater over tens of thousands of years. From the year 1 to 5383, there are 526 approaches less than 40 million km; then there are none for about 60,158 years.[109]
71
+
72
+ All the planets in the Solar System orbit the Sun in an anticlockwise direction as viewed from above Earth's north pole. Most planets also rotate on their axes in an anti-clockwise direction, but Venus rotates clockwise in retrograde rotation once every 243 Earth days—the slowest rotation of any planet. Because its rotation is so slow, Venus is very close to spherical.[110] A Venusian sidereal day thus lasts longer than a Venusian year (243 versus 224.7 Earth days). Venus's equator rotates at 6.52 km/h (4.05 mph), whereas Earth's rotates at 1,674.4 km/h (1,040.4 mph).[114][115] Venus's rotation has slowed in the 16 years between the Magellan spacecraft and Venus Express visits; each Venusian sidereal day has increased by 6.5 minutes in that time span.[116] Because of the retrograde rotation, the length of a solar day on Venus is significantly shorter than the sidereal day, at 116.75 Earth days (making the Venusian solar day shorter than Mercury's 176 Earth days).[117] One Venusian year is about 1.92 Venusian solar days.[118] To an observer on the surface of Venus, the Sun would rise in the west and set in the east,[118] although Venus's opaque clouds prevent observing the Sun from the planet's surface.[119]
73
+
74
+ Venus may have formed from the solar nebula with a different rotation period and obliquity, reaching its current state because of chaotic spin changes caused by planetary perturbations and tidal effects on its dense atmosphere, a change that would have occurred over the course of billions of years. The rotation period of Venus may represent an equilibrium state between tidal locking to the Sun's gravitation, which tends to slow rotation, and an atmospheric tide created by solar heating of the thick Venusian atmosphere.[120][121]
75
+ The 584-day average interval between successive close approaches to Earth is almost exactly equal to 5 Venusian solar days (5.001444 to be precise),[122] but the hypothesis of a spin-orbit resonance with Earth has been discounted.[123]
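+ The solar-day and near-five-solar-day figures above follow from the same orbital numbers once the retrograde sidereal rotation period of about 243.025 days is included; a minimal sketch under those assumed values:
+
+ sidereal_rotation = 243.025   # Venus sidereal rotation period in days, retrograde (assumed value)
+ venus_year = 224.701
+ earth_year = 365.256
+
+ # For a retrograde rotator the spin and orbital angular rates add, shortening the solar day.
+ solar_day = 1.0 / (1.0 / sidereal_rotation + 1.0 / venus_year)
+ synodic = 1.0 / (1.0 / venus_year - 1.0 / earth_year)
+ print(round(solar_day, 2))             # ~116.75 Earth days
+ print(round(synodic / solar_day, 4))   # ~5.0015, the near-five-solar-day interval noted above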
76
+
77
+ Venus has no natural satellites.[124] It has several trojan asteroids: the quasi-satellite 2002 VE68[125][126] and two other temporary trojans, 2001 CK32 and 2012 XE133.[127] In the 17th century, Giovanni Cassini reported a moon orbiting Venus, which was named Neith; numerous sightings were reported over the following 200 years, but most were determined to be stars in the vicinity. Alex Alemi's and David Stevenson's 2006 study of models of the early Solar System at the California Institute of Technology showed that Venus likely had at least one moon created by a huge impact event billions of years ago.[128] About 10 million years later, according to the study, another impact reversed the planet's spin direction and caused the Venusian moon gradually to spiral inward until it collided with Venus.[129] If later impacts created moons, these were removed in the same way. An alternative explanation for the lack of satellites is the effect of strong solar tides, which can destabilize large satellites orbiting the inner terrestrial planets.[124]
78
+
79
+ To the naked eye, Venus appears as a white point of light brighter than any other planet or star (apart from the Sun).[130] The planet's mean apparent magnitude is −4.14 with a standard deviation of 0.31.[14] The brightest magnitude occurs during crescent phase about one month before or after inferior conjunction. Venus fades to about magnitude −3 when it is backlit by the Sun.[131] The planet is bright enough to be seen in a clear midday sky[132] and is more easily visible when the Sun is low on the horizon or setting. As an inferior planet, it always lies within about 47° of the Sun.[133]
80
+
81
+ Venus "overtakes" Earth every 584 days as it orbits the Sun.[5] As it does so, it changes from the "Evening Star", visible after sunset, to the "Morning Star", visible before sunrise. Although Mercury, the other inferior planet, reaches a maximum elongation of only 28° and is often difficult to discern in twilight, Venus is hard to miss when it is at its brightest. Its greater maximum elongation means it is visible in dark skies long after sunset. As the brightest point-like object in the sky, Venus is a commonly misreported "unidentified flying object".
82
+
83
+ As it orbits the Sun, Venus displays phases like those of the Moon in a telescopic view. The planet appears as a small and "full" disc when it is on the opposite side of the Sun (at superior conjunction). Venus shows a larger disc and "quarter phase" at its maximum elongations from the Sun, and appears its brightest in the night sky. The planet presents a much larger thin "crescent" in telescopic views as it passes along the near side between Earth and the Sun. Venus displays its largest size and "new phase" when it is between Earth and the Sun (at inferior conjunction). Its atmosphere is visible through telescopes by the halo of sunlight refracted around it.[133]
84
+
85
+ The Venusian orbit is slightly inclined relative to Earth's orbit; thus, when the planet passes between Earth and the Sun, it usually does not cross the face of the Sun. Transits of Venus occur when the planet's inferior conjunction coincides with its presence in the plane of Earth's orbit. Transits of Venus occur in cycles of 243 years with the current pattern of transits being pairs of transits separated by eight years, at intervals of about 105.5 years or 121.5 years—a pattern first discovered in 1639 by the English astronomer Jeremiah Horrocks.[134]
86
+
87
+ The latest pair was June 8, 2004 and June 5–6, 2012. The transit could be watched live from many online outlets or observed locally with the right equipment and conditions.[135]
88
+
89
+ The preceding pair of transits occurred in December 1874 and December 1882; the following pair will occur in December 2117 and December 2125.[136] The 1874 transit is the subject of the oldest film known, the 1874 Passage de Venus. Historically, transits of Venus were important, because they allowed astronomers to determine the size of the astronomical unit, and hence the size of the Solar System as shown by Horrocks in 1639.[137] Captain Cook's exploration of the east coast of Australia came after he had sailed to Tahiti in 1768 to observe a transit of Venus.[138][139]
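+ The dates above are consistent with the 8, 121.5, 8, 105.5-year pattern of gaps described earlier (the half years reflect the alternation between June and December transits). A minimal sketch, using the December 1874 transit as an assumed starting point:
+
+ year = 1874.9                     # December 1874 transit as a fractional year (assumed anchor)
+ gaps = [8, 121.5, 8, 105.5, 8]    # the first four gaps sum to the 243-year cycle
+ for gap in gaps:
+     year += gap
+     print(round(year, 1))   # 1882.9, 2004.4, 2012.4, 2117.9, 2125.9 -> Dec 1882, Jun 2004, Jun 2012, Dec 2117, Dec 2125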
90
+
91
+ The pentagram of Venus is the path that Venus makes as observed from Earth. Successive inferior conjunctions of Venus repeat very near a 13:8 ratio (Earth orbits 8 times for every 13 orbits of Venus), shifting 144° upon sequential inferior conjunctions. The 13:8 ratio is only approximate: 8/13 is approximately 0.61538, whereas Venus orbits the Sun in 0.61519 years.[140]
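+ Both numbers can be checked directly; a minimal sketch, assuming the orbital periods used above:
+
+ venus_year = 224.701 / 365.256                  # Venus orbital period in Earth years
+ print(round(venus_year, 5), round(8 / 13, 5))   # 0.61519 vs 0.61538
+
+ synodic_years = 1.0 / (1.0 / venus_year - 1.0)  # ~1.599 Earth years between inferior conjunctions
+ shift = 360.0 * (1.0 - synodic_years % 1.0)     # ~144.5 degrees; exactly 144 for a perfect 13:8 ratio
+ print(round(synodic_years, 4), round(shift, 1))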
92
+
93
+ Naked eye observations of Venus during daylight hours exist in several anecdotes and records. Astronomer Edmund Halley calculated its maximum naked eye brightness in 1716, when many Londoners were alarmed by its appearance in the daytime. French emperor Napoleon Bonaparte once witnessed a daytime apparition of the planet while at a reception in Luxembourg.[141] Another historical daytime observation of the planet took place during the inauguration of the American president Abraham Lincoln in Washington, D.C., on 4 March 1865.[142] Although naked eye visibility of Venus's phases is disputed, records exist of observations of its crescent.[143]
94
+
95
+ A long-standing mystery of Venus observations is the so-called ashen light—an apparent weak illumination of its dark side, seen when the planet is in the crescent phase. The first claimed observation of ashen light was made in 1643, but the existence of the illumination has never been reliably confirmed. Observers have speculated it may result from electrical activity in the Venusian atmosphere, but it could be illusory, resulting from the physiological effect of observing a bright, crescent-shaped object.[144][87]
96
+
97
+ Because the movements of Venus appear to be discontinuous (it disappears due to its proximity to the Sun, for many days at a time, and then reappears on the other horizon), some cultures did not recognize Venus as a single entity;[145] instead, they assumed it to be two separate stars on each horizon: the morning and evening star.[145] Nonetheless, a cylinder seal from the Jemdet Nasr period and the Venus tablet of Ammisaduqa from the First Babylonian dynasty indicate that the ancient Sumerians already knew that the morning and evening stars were the same celestial object.[146][145][147] In the Old Babylonian period, the planet Venus was known as Ninsi'anna, and later as Dilbat.[148] The name "Ninsi'anna" translates to "divine lady, illumination of heaven", which refers to Venus as the brightest visible "star". Earlier spellings of the name were written with the cuneiform sign si4 (= SU, meaning "to be red"), and the original meaning may have been "divine lady of the redness of heaven", in reference to the colour of the morning and evening sky.[149]
98
+
99
+ The Chinese historically referred to the morning Venus as "the Great White" (Tài-bái 太白) or "the Opener (Starter) of Brightness" (Qǐ-míng 啟明), and the evening Venus as "the Excellent West One" (Cháng-gēng 長庚).[150]
100
+
101
+ The ancient Greeks also initially believed Venus to be two separate stars: Phosphorus, the morning star, and Hesperus, the evening star. Pliny the Elder credited the realization that they were a single object to Pythagoras in the sixth century BCE,[151] while Diogenes Laërtius argued that Parmenides was probably responsible for this rediscovery.[152] Though they recognized Venus as a single object, the ancient Romans continued to designate the morning aspect of Venus as Lucifer, literally "Light-Bringer", and the evening aspect as Vesper, both of which are literal translations of their traditional Greek names.
102
+
103
+ In the second century, in his astronomical treatise Almagest, Ptolemy theorized that both Mercury and Venus are located between the Sun and the Earth. The 11th-century Persian astronomer Avicenna claimed to have observed the transit of Venus,[153] which later astronomers took as confirmation of Ptolemy's theory.[154] In the 12th century, the Andalusian astronomer Ibn Bajjah observed "two planets as black spots on the face of the Sun"; these were thought to be the transits of Venus and Mercury by 13th-century Maragha astronomer Qotb al-Din Shirazi, though this cannot be true as there were no Venus transits in Ibn Bajjah's lifetime.[155][n 2]
104
+
105
+ When the Italian physicist Galileo Galilei first observed the planet in the early 17th century, he found it showed phases like the Moon, varying from crescent to gibbous to full and vice versa. When Venus is furthest from the Sun in the sky, it shows a half-lit phase, and when it is closest to the Sun in the sky, it shows as a crescent or full phase. This could be possible only if Venus orbited the Sun, and this was among the first observations to clearly contradict the Ptolemaic geocentric model that the Solar System was concentric and centred on Earth.[158][159]
106
+
107
+ The 1639 transit of Venus was accurately predicted by Jeremiah Horrocks and observed by him and his friend, William Crabtree, at each of their respective homes, on 4 December 1639 (24 November under the Julian calendar in use at that time).[160]
108
+
109
+ The atmosphere of Venus was discovered in 1761 by Russian polymath Mikhail Lomonosov.[161][162] Venus's atmosphere was observed in 1790 by German astronomer Johann Schröter. Schröter found when the planet was a thin crescent, the cusps extended through more than 180°. He correctly surmised this was due to scattering of sunlight in a dense atmosphere. Later, American astronomer Chester Smith Lyman observed a complete ring around the dark side of the planet when it was at inferior conjunction, providing further evidence for an atmosphere.[163] The atmosphere complicated efforts to determine a rotation period for the planet, and observers such as Italian-born astronomer Giovanni Cassini and Schröter incorrectly estimated periods of about 24 h from the motions of markings on the planet's apparent surface.[164]
110
+
111
+ Little more was discovered about Venus until the 20th century. Its almost featureless disc gave no hint what its surface might be like, and it was only with the development of spectroscopic, radar and ultraviolet observations that more of its secrets were revealed. The first ultraviolet observations were carried out in the 1920s, when Frank E. Ross found that ultraviolet photographs revealed considerable detail that was absent in visible and infrared radiation. He suggested this was due to a dense, yellow lower atmosphere with high cirrus clouds above it.[165]
112
+
113
+ Spectroscopic observations in the 1900s gave the first clues about the Venusian rotation. Vesto Slipher tried to measure the Doppler shift of light from Venus, but found he could not detect any rotation. He surmised the planet must have a much longer rotation period than had previously been thought.[166] Later work in the 1950s showed the rotation was retrograde. Radar observations of Venus were first carried out in the 1960s, and provided the first measurements of the rotation period, which were close to the modern value.[167]
114
+
115
+ Radar observations in the 1970s revealed details of the Venusian surface for the first time. Pulses of radio waves were beamed at the planet using the 300 m (1,000 ft) radio telescope at Arecibo Observatory, and the echoes revealed two highly reflective regions, designated the Alpha and Beta regions. The observations also revealed a bright region attributed to mountains, which was called Maxwell Montes.[168] These three features are now the only ones on Venus that do not have female names.[39]
116
+
117
+ The first robotic space probe mission to Venus, and the first to any planet, began with the Soviet Venera program in 1961.[169] The United States' exploration of Venus had its first success with the Mariner 2 mission on 14 December 1962, becoming the world's first successful interplanetary mission, passing 34,833 km (21,644 mi) above the surface of Venus, and gathering data on the planet's atmosphere.[170][171]
118
+
119
+ On 18 October 1967, the Soviet Venera 4 successfully entered the atmosphere and deployed science experiments. Venera 4 showed the surface temperature was hotter than Mariner 2 had calculated, at almost 500 °C (932 °F), determined that the atmosphere was 95% carbon dioxide (CO2), and discovered that Venus's atmosphere was considerably denser than Venera 4's designers had anticipated.[172] The joint Venera 4–Mariner 5 data were analysed by a combined Soviet–American science team in a series of colloquia over the following year,[173] in an early example of space cooperation.[174]
120
+
121
+ In 1974, Mariner 10 swung by Venus on its way to Mercury and took ultraviolet photographs of the clouds, revealing the extraordinarily high wind speeds in the Venusian atmosphere.
122
+
123
+ In 1975, the Soviet Venera 9 and 10 landers transmitted the first images from the surface of Venus, which were in black and white. In 1982 the first colour images of the surface were obtained with the Soviet Venera 13 and 14 landers.
124
+
125
+ NASA obtained additional data in 1978 with the Pioneer Venus project that consisted of two separate missions:[175] Pioneer Venus Orbiter and Pioneer Venus Multiprobe.[176] The successful Soviet Venera program came to a close in October 1983, when Venera 15 and 16 were placed in orbit to conduct detailed mapping of 25% of Venus's terrain (from the north pole to 30°N latitude).[177]
126
+
127
+ Several other Venus flybys took place in the 1980s, 1990s, and 2000s that increased the understanding of Venus, including Vega 1 (1985), Vega 2 (1985), Galileo (1990), Magellan (1994), Cassini–Huygens (1998), and MESSENGER (2006). Then, Venus Express by the European Space Agency (ESA) entered orbit around Venus in April 2006. Equipped with seven scientific instruments, Venus Express provided unprecedented long-term observation of Venus's atmosphere. ESA concluded that mission in December 2014.
128
+
129
+ As of 2016, Japan's Akatsuki has been in a highly elliptical orbit around Venus since 7 December 2015, and several probe proposals are under study by Roscosmos, NASA, and India's ISRO.
130
+
131
+ In 2016, the NASA Innovative Advanced Concepts program studied a rover, the Automaton Rover for Extreme Environments, designed to survive for an extended time in Venus's environmental conditions. It would be controlled by a mechanical computer and driven by wind power.[178]
132
+
133
+ Venus is a primary feature of the night sky, and so has been of remarkable importance in mythology, astrology and fiction throughout history and in different cultures.
134
+
135
+ In Sumerian religion, Inanna was associated with the planet Venus.[181][182] Several hymns praise Inanna in her role as the goddess of the planet Venus.[145][182][181] Theology professor Jeffrey Cooley has argued that, in many myths, Inanna's movements may correspond with the movements of the planet Venus in the sky.[145] The discontinuous movements of Venus relate both to mythology and to Inanna's dual nature.[145] In Inanna's Descent to the Underworld, unlike any other deity, Inanna is able to descend into the netherworld and return to the heavens. The planet Venus appears to make a similar descent, setting in the West and then rising again in the East.[145] An introductory hymn describes Inanna leaving the heavens and heading for Kur, which could be presumed to be the mountains, replicating the rising and setting of Inanna to the West.[145] The myths Inanna and Shukaletuda and Inanna's Descent into the Underworld appear to parallel the motion of the planet Venus.[145] In Inanna and Shukaletuda, Shukaletuda is described as scanning the heavens in search of Inanna, possibly searching the eastern and western horizons.[183] In the same myth, while searching for her attacker, Inanna herself makes several movements that correspond with the movements of Venus in the sky.[145]
136
+
137
+ Classical poets such as Homer, Sappho, Ovid and Virgil spoke of the star and its light.[184] Poets such as William Blake, Robert Frost, Letitia Elizabeth Landon, Alfred Lord Tennyson and William Wordsworth wrote odes to it.[185]
138
+
139
+ In Chinese the planet is called Jīn-xīng (金星), the golden planet of the metal element. In India it is called Shukra Graha ("the planet Shukra"), named after the powerful sage Shukra. The word Shukra, used in Indian Vedic astrology,[186] means "clear, pure" or "brightness, clearness" in Sanskrit. One of the nine Navagraha, it is held to affect wealth, pleasure and reproduction; it was the son of Bhrgu, preceptor of the Daityas, and guru of the Asuras.[187] The word Shukra is also associated with semen, or generation. Venus is known as Kejora in Indonesian and Malay. Modern Chinese, Japanese and Korean cultures refer to the planet literally as the "metal star" (金星), based on the Five elements.[188][189][190]
140
+
141
+ The Maya considered Venus to be the most important celestial body after the Sun and Moon. They called it Chac ek,[191] or Noh Ek', "the Great Star".[192] The cycles of Venus were important to their calendar.
142
+
143
+ The Ancient Egyptians and Greeks believed Venus to be two separate bodies, a morning star and an evening star. The Egyptians knew the morning star as Tioumoutiri and the evening star as Ouaiti.[193] The Greeks used the names Phōsphoros (Φωσϕόρος), meaning "light-bringer" (whence the element phosphorus; alternately Ēōsphoros (Ἠωςϕόρος), meaning "dawn-bringer"), for the morning star, and Hesperos (Ἕσπερος), meaning "Western one", for the evening star.[194] Though by the Roman era they were recognized as one celestial object, known as "the star of Venus", the traditional two Greek names continued to be used, though usually translated to Latin as Lūcifer[194][195] and Vesper.
144
+
145
+ With the invention of the telescope, the idea that Venus was a physical world and possible destination began to take form.
146
+
147
+ The impenetrable Venusian cloud cover gave science fiction writers free rein to speculate on conditions at its surface; all the more so when early observations showed that not only was it similar in size to Earth, it possessed a substantial atmosphere. Closer to the Sun than Earth, the planet was frequently depicted as warmer, but still habitable by humans.[196] The genre reached its peak between the 1930s and 1950s, at a time when science had revealed some aspects of Venus, but not yet the harsh reality of its surface conditions. Findings from the first missions to Venus showed the reality to be quite different, and brought this particular genre to an end.[197] As scientific knowledge of Venus advanced, science fiction authors tried to keep pace, particularly by conjecturing human attempts to terraform Venus.[198]
148
+
149
+ The astronomical symbol for Venus is the same as that used in biology for the female sex: a circle with a small cross beneath.[199][200] The Venus symbol also represents femininity, and in Western alchemy stood for the metal copper.[199][200] Polished copper has been used for mirrors from antiquity, and the symbol for Venus has sometimes been understood to stand for the mirror of the goddess, although that is not its true origin.[199][200]
150
+
151
+ The speculation of the existence of life on Venus has decreased significantly since the early 1960s, when spacecraft began studying Venus and it became clear that the conditions on Venus are extreme compared to those on Earth.
152
+
153
+ Venus's proximity to the Sun, which raises surface temperatures to nearly 735 K (462 °C; 863 °F), its atmospheric pressure of 90 times that of Earth, and the extreme impact of the greenhouse effect make water-based life as currently known unlikely. A few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the lower-temperature, acidic upper layers of the Venusian atmosphere.[201][202][203] The atmospheric pressure and temperature fifty kilometres above the surface are similar to those at Earth's surface. This has led to proposals to use aerostats (lighter-than-air balloons) for initial exploration and ultimately for permanent "floating cities" in the Venusian atmosphere.[204] Among the many engineering challenges are the dangerous amounts of sulfuric acid at these heights.[204]
154
+
155
+ Nonetheless, in August 2019, astronomers reported that a newly discovered long-term pattern of absorbance and albedo changes in the atmosphere of the planet Venus is caused by "unknown absorbers", which may be chemicals or even large colonies of microorganisms high up in the atmosphere of the planet.[205][84]
156
+
157
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
158
+
en/4657.html.txt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ A planet, in astronomy, is one of a class of celestial bodies that orbit stars. (A dwarf planet is a similar, but officially mutually exclusive, class of body.)
2
+
3
+ Planet or Planets may also refer to:
en/4658.html.txt ADDED
@@ -0,0 +1,174 @@
1
+
2
+
3
+ Carnivorous plants are plants that derive some or most of their nutrients (but not energy, which they derive from photosynthesis) from trapping and consuming animals or protozoans, typically insects and other arthropods. Carnivorous plants have adapted to grow in places where the soil is thin or poor in nutrients, especially nitrogen, such as acidic bogs. Charles Darwin wrote Insectivorous Plants, the first well-known treatise on carnivorous plants, in 1875.[4] Carnivorous plants can be found on all continents except Antarctica, as well as many Pacific islands.[5]
4
+
5
+ True carnivory is thought to have evolved independently nine times in five different orders of flowering plants,[6][7][8][9] and is represented by more than a dozen genera. This classification includes at least 583 species that attract, trap, and kill prey, absorbing the resulting available nutrients.[6][10] This number has increased by approximately 3 species per year since the year 2000.[11] Additionally, over 300 protocarnivorous plant species in several genera show some but not all of these characteristics.
6
+
7
+ Five basic trapping mechanisms are found in carnivorous plants.[12]
8
+
9
+ These traps may be active or passive, depending on whether movement aids the capture of prey. For example, Triphyophyllum is a passive flypaper that secretes mucilage, but whose leaves do not grow or move in response to prey capture. Meanwhile, sundews are active flypaper traps whose leaves undergo rapid acid growth, which is an expansion of individual cells as opposed to cell division. The rapid acid growth allows the sundew tentacles to bend, aiding in the retention and digestion of prey.[13]
10
+
11
+ Characterised by an internal chamber, pitfall traps are thought to have evolved independently at least six times.[6] This particular adaptation is found within the families Sarraceniaceae (Darlingtonia, Heliamphora, Sarracenia), Nepenthaceae (Nepenthes), Cephalotaceae (Cephalotus), and Eriocaulaceae (Paepalanthus). Within the family Bromeliaceae, pitcher morphology and carnivory evolved twice (Brocchinia and Catopsis).[6] Because these families do not share a common ancestor who also had pitfall trap morphology, carnivorous pitchers are an example of convergent evolution.
12
+
13
+ A passive trap, pitfall traps attract prey with nectar bribes secreted by the peristome and bright flower-like anthocyanin patterning within the pitcher. The linings of most pitcher plants are covered in a loose coating of waxy flakes which are slippery for insects, causing them to fall into the pitcher. Once within the pitcher structure, digestive enzymes or mutualistic species break down the prey into an absorbable form for the plant.[7][14] Water can become trapped within the pitcher, making a habitat for other flora and fauna. This type of 'water body' is called a Phytotelma.
14
+
15
+ The simplest pitcher plants are probably those of Heliamphora, the marsh pitcher plant. In this genus, the traps are clearly derived from a simple rolled leaf whose margins have sealed together. These plants live in areas of high rainfall in South America such as Mount Roraima and consequently have a problem ensuring their pitchers do not overflow. To counteract this problem, natural selection has favoured the evolution of an overflow similar to that of a bathroom sink—a small gap in the zipped-up leaf margins allows excess water to flow out of the pitcher.[citation needed][15]
16
+
17
+ Heliamphora is a member of the Sarraceniaceae, a New World family in the order Ericales (heathers and allies). Heliamphora is limited to South America, but the family contains two other genera, Sarracenia and Darlingtonia, which are endemic to the Southeastern United States (with the exception of one species) and California respectively. Sarracenia purpurea subsp. purpurea (the northern pitcher plant) can be found as far north as Canada. Sarracenia is the pitcher plant genus most commonly encountered in cultivation, because it is relatively hardy and easy to grow.
18
+
19
+ In the genus Sarracenia, the problem of pitcher overflow is solved by an operculum, which is essentially a flared leaflet that covers the opening of the rolled-leaf tube and protects it from rain. Possibly because of this improved waterproofing, Sarracenia species secrete enzymes such as proteases and phosphatases into the digestive fluid at the bottom of the pitcher; Heliamphora relies on bacterial digestion alone. The enzymes digest the proteins and nucleic acids in the prey, releasing amino acids and phosphate ions, which the plant absorbs. In at least one species, Sarracenia flava, the nectar bribe is laced with coniine, a toxic alkaloid also found in hemlock, which probably increases the efficiency of the traps by intoxicating prey.[16]
20
+
21
+ Darlingtonia californica, the cobra plant, possesses an adaptation also found in Sarracenia psittacina and, to a lesser extent, in Sarracenia minor: the operculum is balloon-like and almost seals the opening to the tube. This balloon-like chamber is pitted with areolae, chlorophyll-free patches through which light can penetrate. Insects, mostly ants, enter the chamber via the opening underneath the balloon. Once inside, they tire themselves trying to escape from these false exits, until they eventually fall into the tube. Prey access is increased by the "fish tails", outgrowths of the operculum that give the plant its name. Some seedling Sarracenia species also have long, overhanging opercular outgrowths; Darlingtonia may therefore represent an example of neoteny.
22
+
23
+ The second major group of pitcher plants are the monkey cups or tropical pitcher plants of the genus Nepenthes. In the hundred or so species of this genus, the pitcher is borne at the end of a tendril, which grows as an extension to the midrib of the leaf. Most species catch insects, although the larger ones, such as Nepenthes rajah, also occasionally take small mammals and reptiles. Nepenthes bicalcarata possesses two sharp thorns that project from the base of the operculum over the entrance to the pitcher. These likely serve to lure insects into a precarious position over the pitcher mouth, where they may lose their footing and fall into the fluid within.[17]
24
+
25
+ The pitfall trap has evolved independently in at least two other groups. The Albany pitcher plant Cephalotus follicularis is a small pitcher plant from Western Australia, with moccasin-like pitchers. The rim of its pitcher's opening (the peristome) is particularly pronounced, secretes nectar, and provides a thorny overhang to the opening, preventing trapped insects from climbing out.
26
+
27
+ The final carnivore with a pitfall-like trap is the bromeliad Brocchinia reducta. Like most relatives of the pineapple, the tightly packed, waxy leaf bases of the strap-like leaves of this species form an urn. In most bromeliads, water collects readily in this urn and may provide habitats for frogs, insects and, more useful for the plant, diazotrophic (nitrogen-fixing) bacteria. In Brocchinia, the urn is a specialised insect trap, with a loose, waxy lining and a population of digestive bacteria.[citation needed]
28
+
29
+ The flypaper trap utilises sticky mucilage or glue. The leaf of flypaper traps is studded with mucilage-secreting glands, which may be short (like those of the butterworts), or long and mobile (like those of many sundews). Flypapers have evolved independently at least five times. There is evidence that some clades of flypaper traps have evolved from morphologically more complex traps such as pitchers.[9]
30
+
31
+ In the genus Pinguicula, the mucilage glands are quite short (sessile), and the leaf, while shiny (giving the genus its common name of 'butterwort'), does not appear carnivorous. However, this belies the fact that the leaf is an extremely effective trap of small flying insects (such as fungus gnats), and its surface responds to prey by relatively rapid growth. This thigmotropic growth may involve rolling of the leaf blade (to prevent rain from splashing the prey off the leaf surface) or dishing of the surface under the prey to form a shallow digestive pit.
32
+
33
+ The sundew genus (Drosera) consists of over 100 species of active flypapers whose mucilage glands are borne at the end of long tentacles, which frequently grow fast enough in response to prey (thigmotropism) to aid the trapping process. The tentacles of D. burmanii can bend 180° in a minute or so. Sundews are extremely cosmopolitan and are found on all the continents except the Antarctic mainland. They are most diverse in Australia, the home to the large subgroup of pygmy sundews such as D. pygmaea and to a number of tuberous sundews such as D. peltata, which form tubers that aestivate during the dry summer months. These species are so dependent on insect sources of nitrogen that they generally lack the enzyme nitrate reductase, which most plants require to assimilate soil-borne nitrate into organic forms.[citation needed]
34
+
35
+ Closely related to Drosera is the Portuguese dewy pine, Drosophyllum, which differs from the sundews in being passive. Its leaves are incapable of rapid movement or growth. Unrelated, but similar in habit, are the Australian rainbow plants (Byblis). Drosophyllum is unusual in that it grows under near-desert conditions; almost all other carnivores are either bog plants or grow in moist tropical areas.
36
+ Recent molecular data (particularly the production of plumbagin) indicate that the remaining flypaper, Triphyophyllum peltatum, a member of the Dioncophyllaceae, is closely related to Drosophyllum and forms part of a larger clade of carnivorous and non-carnivorous plants with the Droseraceae, Nepenthaceae, Ancistrocladaceae and Plumbaginaceae. This plant is usually encountered as a liana, but in its juvenile phase, the plant is carnivorous. This may be related to a requirement for specific nutrients for flowering.
37
+
38
+ The only two active snap traps—the Venus flytrap (Dionaea muscipula) and the waterwheel plant (Aldrovanda vesiculosa)—had a common ancestor with the snap trap adaptation, which had evolved from an ancestral lineage that utilised flypaper traps.[18] Their trapping mechanism has also been described as a "mouse trap", "bear trap" or "man trap", based on their shape and rapid movement. However, the term snap trap is preferred as other designations are misleading, particularly with respect to the intended prey. Aldrovanda is aquatic and specialised in catching small invertebrates; Dionaea is terrestrial and catches a variety of arthropods, including spiders.[19]
39
+
40
+ The traps are very similar, with leaves whose terminal section is divided into two lobes, hinged along the midrib. Trigger hairs (three on each lobe in Dionaea muscipula, many more in the case of Aldrovanda) inside the trap lobes are sensitive to touch. When a trigger hair is bent, stretch-gated ion channels in the membranes of cells at the base of the trigger hair open, generating an action potential that propagates to cells in the midrib.[20] These cells respond by pumping out ions, which may either cause water to follow by osmosis (collapsing the cells in the midrib) or cause rapid acid growth.[21] The mechanism is still debated, but in any case, changes in the shape of cells in the midrib allow the lobes, held under tension, to snap shut,[20] flipping rapidly from convex to concave[22] and interring the prey. This whole process takes less than a second. In the Venus flytrap, closure in response to raindrops and blown-in debris is prevented by the leaves having a simple memory: for the lobes to shut, two stimuli are required, 0.5 to 30 seconds apart.[23][24]
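+ The two-touch timing rule lends itself to a compact statement; a minimal Python sketch of the "memory" as described (an illustration of the rule only, not a physiological model):
+
+ def trap_closes(stimulus_times, min_gap=0.5, max_gap=30.0):
+     """Return True if two successive trigger-hair stimuli arrive between min_gap and max_gap seconds apart."""
+     times = sorted(stimulus_times)
+     return any(min_gap <= later - earlier <= max_gap
+                for earlier, later in zip(times, times[1:]))
+
+ print(trap_closes([0.0, 12.0]))   # True: the second touch falls inside the window
+ print(trap_closes([0.0, 45.0]))   # False: the memory of the first touch has faded
+ print(trap_closes([0.0]))         # False: a single touch is not enough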
41
+
42
+ The snapping of the leaves is a case of thigmonasty (undirected movement in response to touch). Further stimulation of the lobe's internal surfaces by the struggling insects causes the lobes to close even tighter (thigmotropism), sealing the lobes hermetically and forming a stomach in which digestion occurs over a period of one to two weeks. Leaves can be reused three or four times before they become unresponsive to stimulation, depending on the growing conditions.
43
+
44
+ Bladder traps are exclusive to the genus Utricularia, or bladderworts. The bladders (vesiculae) pump ions out of their interiors. Water follows by osmosis, generating a partial vacuum inside the bladder. The bladder has a small opening, sealed by a hinged door. In aquatic species, the door has a pair of long trigger hairs. Aquatic invertebrates such as Daphnia touch these hairs and deform the door by lever action, releasing the vacuum. The invertebrate is sucked into the bladder, where it is digested. Many species of Utricularia (such as U. sandersonii) are terrestrial, growing in waterlogged soil, and their trapping mechanism is triggered in a slightly different manner. Bladderworts lack roots, but terrestrial species have anchoring stems that resemble roots. Temperate aquatic bladderworts generally die back to a resting turion during the winter months, and U. macrorhiza appears to regulate the number of bladders it bears in response to the prevailing nutrient content of its habitat.[15]
45
+
46
+ A lobster-pot trap is a chamber that is easy to enter, and whose exit is either difficult to find or obstructed by inward-pointing bristles. Lobster pots are the trapping mechanism in Genlisea, the corkscrew plants. These plants appear to specialise in aquatic protozoa. A Y-shaped modified leaf allows prey to enter but not exit. Inward-pointing hairs force the prey to move in a particular direction. Prey entering the spiral entrance that coils around the upper two arms of the Y are forced to move inexorably towards a stomach in the lower arm of the Y, where they are digested. Prey movement is also thought to be encouraged by water movement through the trap, produced in a similar way to the vacuum in bladder traps, and probably evolutionarily related to it.
47
+
48
+ Outside of Genlisea, features reminiscent of lobster-pot traps can be seen in Sarracenia psittacina, Darlingtonia californica, and, some horticulturalists argue, Nepenthes aristolochioides.
49
+
50
+ The trapping mechanism of the sundew Drosera glanduligera combines features of both flypaper and snap traps; it has been termed a catapult-flypaper trap.[25] However, this is not the only combination trap: Nepenthes jamban combines pitfall and flypaper traps because it has a sticky pitcher fluid.
51
+
52
+ Most Sumatran Nepenthes, like N. inermis, also use this method; N. dubia and N. flava, for example, do so as well.
53
+
54
+ To be defined as carnivorous, a plant must first exhibit an adaptation of some trait specifically for the attraction, capture, or digestion of prey. Only one trait needs to have evolved that fits this adaptive requirement, as many current carnivorous plant genera lack some of the above-mentioned attributes. The second requirement is the ability to absorb nutrients from dead prey and gain a fitness advantage from the integration of these derived nutrients (mostly amino acids and ammonium ions)[26] either through increased growth or pollen and/or seed production. However, plants that may opportunistically utilise nutrients from dead animals without specifically seeking and capturing fauna are excluded from the carnivorous definition. The second requirement also differentiates carnivory from defensive plant characteristics that may kill or incapacitate insects without the advantage of nutrient absorption. Due to the observation that many currently classified carnivores lack digestive enzymes for breaking down nutrients and instead rely upon mutualistic and symbiotic relationships with bacteria, ants, or insects, this adaptation has been added to the carnivorous definition.[8][27] Despite this, there are cases where plants appear carnivorous, in that they fulfill some of the above definition, but are not truly carnivorous. Some botanists argue that there is a spectrum of carnivory found in plants: from completely non-carnivorous plants like cabbages, to borderline carnivores, to unspecialised and simple traps, like Heliamphora, to extremely specialised and complex traps, like that of the Venus flytrap.[7]
55
+
56
+ A possible carnivore is the genus Roridula; the plants in this genus produce sticky leaves with resin-tipped glands and look extremely similar to some of the larger sundews. However, they do not directly benefit from the insects they catch. Instead, they form a mutualistic symbiosis with species of assassin bug (genus Pameridea), which eat the trapped insects. The plant benefits from the nutrients in the bugs' feces.[28] By some definitions this would still constitute botanical carnivory.[7]
57
+
58
+ A number of species in the Martyniaceae (previously Pedaliaceae), such as Ibicella lutea, have sticky leaves that trap insects. However, these plants have not been shown conclusively to be carnivorous.[29] Likewise, the seeds of Shepherd's Purse,[29] urns of Paepalanthus bromelioides,[30] bracts of Passiflora foetida,[31] and flower stalks and sepals of triggerplants (Stylidium)[32] appear to trap and kill insects, but their classification as carnivores is contentious.
59
+
60
+ Charles Darwin concluded that carnivory in plants was convergent, writing in 1875 that carnivorous genera Utricularia and Nepenthes were not "at all related to the [carnivorous family] Droseraceae".[4]  This remained a subject of debate for over a century. In 1960, Leon Croizat concluded that carnivory was monophyletic, and placed all the carnivorous plants together at the base of the angiosperms.[9]  Molecular studies over the past 30 years have led to a wide consensus that Darwin was correct, with studies showing that carnivory evolved at least six times in the angiosperms, and that trap designs such as pitcher traps and flypaper traps are analogous rather than homologous.[8]
61
+
62
+ Researchers using molecular data have concluded that carnivory evolved independently in the Poales (Brocchinia and Catopsis in the Bromeliaceae), the Caryophyllales (Droseraceae, Nepenthaceae, Drosophyllaceae, Dioncophyllaceae), the Oxalidales (Cephalotus), the Ericales (Sarraceniaceae and Roridulaceae), and twice in the Lamiales (Lentibulariaceae and independently in Byblidaceae).[9] The oldest evolution of an existing carnivory lineage has been dated to 85.6 million years ago, with the most recent being Brocchinia reducta in the Bromeliaceae estimated at only 1.9 mya.[33]
63
+
64
+ The evolution of carnivorous plants is obscured by the paucity of their fossil record. Very few fossils have been found, and then usually only as seed or pollen. Carnivorous plants are generally herbs, and their traps are produced by primary growth. They generally do not form readily fossilisable structures such as thick bark or wood.
65
+
66
+ Still, much can be deduced from the structure of current traps and their ecological interactions. It is widely believed that carnivory evolved as a method to increase nutrients in extremely nutrient poor conditions, leading to a cost-benefit model for botanical carnivory. Cost-benefit models are given under the assumption that there is a set amount of energy potentially available for an organism, which leads to trade-offs when energy is allocated to certain functions to maximise competitive ability and fitness. For carnivory, the trait could only evolve if the increase in nutrients from prey capture exceeded the cost of investment in carnivorous adaptations.[27]
67
+
68
+ Most carnivorous plants live in habitats with high light, waterlogged soils, and extremely low soil nitrogen and phosphorus, producing the ecological impetus to derive nitrogen from an alternate source. High light environments allowed for the trade-off between photosynthetic leaves and prey-capturing traps that are photosynthetically inefficient. To compensate for photosynthetically inefficient material, the nutrients obtained through carnivory would need to increase photosynthesis by investing in more leaf mass, i.e. growing. This means when there is a shortage of nutrients and enough light and water, prey capture and digestion have the greatest impact on photosynthetic gains, favoring the evolution of plant adaptations which allowed for more effective and efficient carnivory.[7][26] Due to the large amount of energy and resources allocated to carnivorous adaptations, i.e. the production of lures, digestive enzymes, and modified leaf structures, and the decreased rate of photosynthesis over total leaf area, some authors argue that carnivory is an evolutionary last resort when nitrogen and phosphorus are limited in an ecosystem.[34]
69
+
70
+ Pitfall traps are derived from rolled leaves, which evolved several independent times through convergent evolution. The vascular tissues of Sarracenia are a case in point. The keel along the front of the trap contains a mixture of leftward- and rightward-facing vascular bundles, as would be predicted from the fusion of the edges of an adaxial (stem-facing) leaf surface. Flypapers also show a simple evolutionary gradient from sticky, non-carnivorous leaves, through passive flypapers to active forms. Molecular data show the Dionaea–Aldrovanda clade is closely related to Drosera,[35] and evolved from active flypaper traps into snap traps.[18]
71
+
72
+ It has been suggested that all trap types are modifications of a similar basic structure—the hairy leaf.[36] Hairy (or more specifically, stalked-glandular) leaves can catch and retain drops of rainwater, especially if shield-shaped or peltate, thus promoting bacteria growth. Insects land on the leaf, become mired by the surface tension of the water, and suffocate. Bacteria jumpstart decay, releasing from the corpse nutrients that the plant can absorb through its leaves. This foliar feeding can be observed in most non-carnivorous plants. Plants that were better at retaining insects or water therefore had a selective advantage. Rainwater can be retained by cupping the leaf, and pitfall traps may have evolved simply by selection pressure for the production of more deeply cupped leaves, followed by "zipping up" of the margins and subsequent loss of most of the hairs. Alternatively, insects can be retained by making the leaf stickier by the production of mucilage, leading to flypaper traps.
73
+
74
+ The lobster-pot traps of Genlisea are difficult to interpret. They may have developed from bifurcated pitchers that later specialised on ground-dwelling prey; or, perhaps, the prey-guiding protrusions of bladder traps became more substantial than the net-like funnel found in most aquatic bladderworts. Whatever their origin, the helical shape of the lobster pot is an adaptation that displays as much trapping surface as possible in all directions when buried in moss.
75
+
76
+ The traps of the bladderworts may have derived from pitchers that specialised in aquatic prey when flooded, like Sarracenia psittacina does today. Escaping prey in terrestrial pitchers have to climb or fly out of a trap, and both of these can be prevented by wax, gravity and narrow tubes. However, a flooded trap can be swum out of, so in Utricularia, a one-way lid may have developed to form the door of a proto-bladder. Later, this may have become active by the evolution of a partial vacuum inside the bladder, tripped by prey brushing against trigger hairs on the door of the bladder.
77
+
78
+ The active glue traps use rapid plant movements to trap their prey. Rapid plant movement can result from actual growth, or from rapid changes in cell turgor, which allow cells to expand or contract by quickly altering their water content. Slow-moving flypapers like Pinguicula exploit growth, while the Venus flytrap uses turgor changes so rapid that glue is unnecessary. The stalked glands that once made glue became teeth and trigger hairs in species with active snap traps, an example of natural selection hijacking preexisting structures for new functions.[18]
79
+
80
+ Recent taxonomic analysis[37] of the relationships within the Caryophyllales indicate that the Droseraceae, Triphyophyllum, Nepenthaceae and Drosophyllum, while closely related, are embedded within a larger clade that includes non-carnivorous groups such as the tamarisks, Ancistrocladaceae, Polygonaceae and Plumbaginaceae. The tamarisks possess specialised salt-excreting glands on their leaves, as do several of the Plumbaginaceae (such as the sea lavender, Limonium), which may have been co-opted for the excretion of other chemicals, such as proteases and mucilage. Some of the Plumbaginaceae (e.g. Ceratostigma) also have stalked, vascularised glands that secrete mucilage on their calyces and aid in seed dispersal and possibly in protecting the flowers from crawling parasitic insects. The balsams (such as Impatiens), which are closely related to the Sarraceniaceae and Roridula, similarly possess stalked glands.
81
+
82
+ Philcoxia is unique in the Plantaginaceae as a result of its subterranean stems and leaves, which have been shown to be used in the capture of nematodes. These plants grow in sand in Brazil, where they are unlikely to receive many other nutrients. As in many other types of carnivorous plant, stalked glands are seen on the leaves. Enzymes on the leaves are used to digest the worms and release their nutrients.[38]
83
+
84
+ The only traps that are unlikely to have descended from a hairy leaf or sepal are the carnivorous bromeliads (Brocchinia and Catopsis). These plants use the urn—a fundamental part of a bromeliad—for a new purpose and build on it by the production of wax and the other paraphernalia of carnivory.
85
+
86
+ Botanical carnivory has evolved in several independent families peppered throughout the angiosperm phylogeny, showing that carnivorous traits underwent convergent evolution multiple times to create similar morphologies across disparate families. Results of genetic testing published in 2017 found an example of convergent evolution: a digestive enzyme with the same functional mutations across unrelated lineages.[39]
88
+
89
+ Carnivorous plants are widespread but rather rare. They are almost entirely restricted to habitats such as bogs, where soil nutrients are extremely limiting, but where sunlight and water are readily available. Only under such extreme conditions is carnivory favored to an extent that makes the adaptations advantageous.
90
+
91
+ The archetypal carnivore, the Venus flytrap, grows in soils with almost immeasurably low nitrate and calcium levels. Plants need nitrogen for protein synthesis, calcium for cell wall stiffening, phosphate for nucleic acid synthesis, and iron and magnesium for chlorophyll synthesis. The soil is often waterlogged, which favours the production of toxic ions such as ammonium, and its pH is an acidic 4 to 5. Ammonium can be used as a source of nitrogen by plants, but its high toxicity means that concentrations high enough to fertilise are also high enough to cause damage.
92
+
93
+ However, the habitat is warm, sunny, constantly moist, and the plant experiences relatively little competition from low growing Sphagnum moss. Still, carnivores are also found in very atypical habitats. Drosophyllum lusitanicum is found around desert edges and Pinguicula valisneriifolia on limestone (calcium-rich) cliffs.[40]
94
+
95
+ In all the studied cases, carnivory allows plants to grow and reproduce using animals as a source of nitrogen, phosphorus and possibly potassium.[41][42][43] However, there is a spectrum of dependency on animal prey. Pygmy sundews are unable to use nitrate from soil because they lack the necessary enzymes (nitrate reductase in particular).[44] Common butterworts (Pinguicula vulgaris) can use inorganic sources of nitrogen better than organic sources, but a mixture of both is preferred.[41] European bladderworts seem to use both sources equally well. Animal prey makes up for differing deficiencies in soil nutrients.
96
+
97
+ Plants use their leaves to intercept sunlight. The energy is used to reduce carbon dioxide from the air with electrons from water to make sugars (and other biomass) and a waste product, oxygen, in the process of photosynthesis. Leaves also respire, in a similar way to animals, by burning their biomass to generate chemical energy. This energy is temporarily stored in the form of ATP (adenosine triphosphate), which acts as an energy currency for metabolism in all living things. As a waste product, respiration produces carbon dioxide.
98
+
99
+ For a plant to grow, it must photosynthesise more than it respires. Otherwise, it will eventually exhaust its biomass and die. The potential for plant growth is net photosynthesis, the total gross gain of biomass by photosynthesis, minus the biomass lost by respiration. Understanding carnivory requires a cost-benefit analysis of these factors.[26]
100
+
101
+ In carnivorous plants, the leaf is not just used to photosynthesise, but also as a trap. Changing the leaf shape to make it a better trap generally makes it less efficient at photosynthesis. For example, pitchers have to be held upright, so that only their opercula directly intercept light. The plant also has to expend extra energy on non-photosynthetic structures like glands, hairs, glue and digestive enzymes.[45] To produce such structures, the plant requires ATP and respires more of its biomass. Hence, a carnivorous plant will have both decreased photosynthesis and increased respiration, making the potential for growth small and the cost of carnivory high.
102
+
103
+ Being carnivorous allows the plant to grow better when the soil contains little nitrate or phosphate. In particular, an increased supply of nitrogen and phosphorus makes photosynthesis more efficient, because photosynthesis depends on the plant being able to synthesise very large amounts of the nitrogen-rich enzyme RuBisCO (ribulose-1,5-bis-phosphate carboxylase/oxygenase), the most abundant protein on Earth.
104
+
105
+ It is intuitively clear that the Venus flytrap is more carnivorous than Triphyophyllum peltatum. The former is a full-time moving snap-trap; the latter is a part-time, non-moving flypaper. The energy "wasted" by the plant in building and fuelling its trap is a suitable measure of the carnivory of the trap.
106
+
107
+ Using this measure of investment in carnivory, a model can be proposed.[26] Consider a graph of carbon dioxide uptake (potential for growth) against trap respiration (investment in carnivory) for a leaf in a sunny habitat containing no soil nutrients at all. Respiration is a straight line sloping down under the horizontal axis (respiration produces carbon dioxide). Gross photosynthesis is a curved line above the horizontal axis: as investment increases, so too does the photosynthesis of the trap, as the leaf receives a better supply of nitrogen and phosphorus. Eventually another factor (such as light intensity or carbon dioxide concentration) will become more limiting to photosynthesis than nitrogen or phosphorus supply. As a result, increasing the investment will not make the plant grow better. The net uptake of carbon dioxide, and therefore the plant's potential for growth, must be positive for the plant to survive. There is a broad span of investment where this is the case, and there is also a non-zero optimum. Plants investing more or less than this optimum will take up less carbon dioxide than an optimal plant, and hence grow less well. These plants will be at a selective disadvantage. At zero investment the growth is zero, because a non-carnivorous plant cannot survive in a habitat with absolutely no soil-borne nutrients. Such habitats do not exist, so for example, Sphagnum absorbs the tiny amounts of nitrates and phosphates in rain very efficiently and also forms symbioses with diazotrophic cyanobacteria.
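+ The argument can be made concrete with a toy numerical sketch (the functional forms and numbers below are illustrative assumptions, not measurements): gross photosynthesis saturates as investment rises, respiration grows linearly, and net uptake therefore peaks at an intermediate, non-zero investment.
+
+ def gross_photosynthesis(i):      # saturating benefit of investment i (arbitrary units)
+     return 10.0 * i / (1.0 + i)
+
+ def trap_respiration(i):          # linear cost of building and running traps
+     return 2.0 * i
+
+ investments = [k / 100.0 for k in range(0, 301)]
+ net = [gross_photosynthesis(i) - trap_respiration(i) for i in investments]
+ best = max(range(len(net)), key=lambda k: net[k])
+ print(investments[best], round(net[best], 2))   # optimum near i ~ 1.24 with net uptake ~ 3.06; net is 0 at i = 0
+
+ Lowering or flattening the gross curve, as when light is scarce or soil nutrients are freely available, moves that optimum toward zero investment, which is the situation described in the next paragraph.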
108
+
109
+ In a habitat with abundant soil nutrients but little light, by contrast, the gross photosynthesis curve will be lower and flatter, because light will be more limiting than nutrients. A plant can grow at zero investment in carnivory; this is also the optimum investment for a plant, as any investment in traps reduces net photosynthesis (growth) to less than the net photosynthesis of a plant that obtains its nutrients from soil alone.
110
+
111
+ Carnivorous plants exist between these two extremes: the less limiting light and water are, and the more limiting soil nutrients are, the higher the optimum investment in carnivory, and hence the more obvious the adaptations will be to the casual observer.
112
+
113
+ The most obvious evidence for this model is that carnivorous plants tend to grow in habitats where water and light are abundant and where competition is relatively low: the typical bog. Those that do not tend to be even more fastidious in some other way. Drosophyllum lusitanicum grows where there is little water, but it is even more extreme in its requirement for bright light and low disturbance than most other carnivores. Pinguicula valisneriifolia grows in soils with high levels of calcium but requires strong illumination and lower competition than many butterworts.[46]
114
+
115
+ In general, carnivorous plants are poor competitors, because they invest too heavily in structures that have no selective advantage in nutrient-rich habitats. They succeed only where other plants fail. Carnivores are to nutrients what cacti are to water. Carnivory pays off only when nutrient stress is high and light is abundant.[47] When these conditions are not met, some plants give up carnivory temporarily. Sarracenia species produce flat, non-carnivorous leaves (phyllodes) in winter: light levels are then lower than in summer, so light is more limiting than nutrients and carnivory does not pay, and the scarcity of insects in winter exacerbates the problem. Damage to a growing pitcher leaf also prevents it from forming a proper pitcher, and again the plant produces a phyllode instead.
116
+
117
+ Many other carnivores shut down in some seasons. Tuberous sundews die back to tubers in the dry season, bladderworts die back to turions in winter, and most butterworts and Cephalotus produce non-carnivorous leaves in the less favourable seasons. Utricularia macrorhiza varies the number of bladders it produces according to the expected density of prey.[48] Part-time carnivory in Triphyophyllum peltatum may be due to an unusually high need for potassium at a certain point in the life cycle, just before flowering.
118
+
119
+ The more carnivorous a plant is, the less conventional its habitat is likely to be. Venus flytraps live in a very specialised habitat, whereas less carnivorous plants (Byblis, Pinguicula) are found in less unusual habitats (i.e., those typical for non-carnivores). Byblis and Drosophyllum both come from relatively arid regions and are both passive flypapers, arguably the lowest maintenance form of trap. Venus flytraps filter their prey using the teeth around the trap's edge, so as not to waste energy on hard-to-digest prey. In evolution, laziness pays, because energy can be used for reproduction, and short-term benefits in reproduction will outweigh long-term benefits in anything else.
120
+
121
+ Carnivory rarely pays, so even carnivorous plants avoid it when there is too little light or an easier source of nutrients, and they use as few carnivorous features as are required at a given time or for a given prey item. There are very few habitats stressful enough to make investing biomass and energy in trigger hairs and enzymes worthwhile. Many plants occasionally benefit from animal protein rotting on their leaves, but carnivory that is obvious enough for the casual observer to notice is rare.[49]
122
+
123
+ Bromeliads seem very well preadapted to carnivory, but only one or two species can be classified as truly carnivorous. By their very shape, bromeliads will benefit from increased prey-derived nutrient input. In this sense, bromeliads are probably carnivorous, but their habitats are too dark for more extreme, recognisable carnivory to evolve. Most bromeliads are epiphytes, and most epiphytes grow in partial shade on tree branches. Brocchinia reducta, on the other hand, is a ground dweller.
124
+
125
+ Many carnivorous plants are not strongly competitive and rely on circumstances to suppress dominating vegetation. Accordingly, some of them rely on fire ecology for their continued survival.
126
+
127
+ For the most part, carnivorous plant populations are not dominant enough to be dramatically significant in ecological terms, but an impressive variety of organisms interact with various carnivorous plants in relationships of kleptoparasitism, commensalism, and mutualism. For example, small insectivores such as tree frogs often exploit the supply of prey found in pitcher plants, and the frog Microhyla nepenthicola actually specialises in such habitats. Certain crab spiders, such as Misumenops nepenthicola, live largely on the prey of Nepenthes, and other, less specialised spiders may build webs where they trap insects attracted by the smell or appearance of the traps. Some scavengers and detritivores, and the organisms that harvest or exploit those in turn, such as the mosquito Wyeomyia smithii, are largely or totally dependent on particular carnivorous plants. Plants such as Roridula species combine with specialised bugs (Pameridea roridulae) to benefit from insects trapped on their leaves.
128
+
129
+ Associations with species of pitcher plants are so many and varied that the study of Nepenthes infauna is something of a discipline in its own right. Camponotus schmitzi, the diving ant, has an intimate mutualism with the pitcher plant Nepenthes bicalcarata; it not only retrieves prey and detritus from beneath the surface of the liquid in the pitchers, but also repels herbivores and cleans the pitcher peristome, maintaining its slippery nature. The ants have been reported to attack struggling prey, hindering their escape, so there might be an element of myrmecotrophy to the relationship. Numerous species of mosquitoes lay their eggs in the pitcher liquid, where their larvae play various roles depending on species: some eat microbes and detritus, as is common among mosquito larvae, whereas the larvae of some Toxorhynchites species, which also breed in pitchers, prey on other mosquito larvae. Apart from the crab spiders on pitchers, a small red crab, Geosesarma malayanum, will enter the fluid to rob and scavenge, though reputedly at some risk of being captured and digested itself.[49]
130
+
131
+ Nepenthes rajah has a remarkable mutualism with two unrelated small mammals, the mountain treeshrew (Tupaia montana) and the summit rat (Rattus baluensis). The tree shrews and the rats defecate into the plant's traps while visiting them to feed on sweet, fruity secretions from glands on the pitcher lids.[50] The tree shrew also has a similar relationship with at least two other giant species of Nepenthes. More subtly, Hardwicke's woolly bat (Kerivoula hardwickii), a small species, roosts beneath the operculum (lid) of Nepenthes hemsleyana.[51] The bat's excretions that land in the pitcher pay for the shelter, as it were. To the plant the excreta are more readily assimilable than intact insects would be.
132
+
133
+ There is also a considerable list of Nepenthes endophytes: microbes other than pathogens that live in the tissues of pitcher plants, often apparently harmlessly.
134
+
135
+ Another important area of symbiosis between carnivorous plants and insects is pollination. While many species of carnivorous plant can reproduce via self-pollination or vegetative propagation, many are insect-pollinated.[52] Outcross pollination is beneficial because it increases genetic diversity, but the same insects that pollinate the flowers can also be caught as prey, so carnivorous plants face an evolutionary and ecological conflict often called the pollinator-prey conflict.[52] There are several ways by which carnivorous plants reduce the strain of this conflict. For long-lived plants, the short-term loss of reproduction may be offset by the future growth made possible by resources obtained from prey.[52] Other plants may "target" different species of insect for pollination and for prey, using different olfactory and visual cues.[52]
136
+
137
+ Approximately half of the carnivorous plant species assessed by the IUCN are considered threatened (vulnerable, endangered or critically endangered). Common threats include habitat loss to agriculture, collection of wild plants, pollution, invasive species, residential and commercial development, energy production, mining, transportation, geologic events, climate change, severe weather, and many other anthropogenic pressures.[53] Species in the same genus tend to face similar threats, and the threats vary considerably by continent: they have been identified for 19 species in North America, 15 in Asia, seven in Europe, six in South America, two in Africa, and one in Australia. Indicator species such as Sarracenia show positive associations with these threats, and certain threats are themselves positively correlated with one another, notably residential and commercial development, modification of natural systems, invasive species, and pollution. Conservation research aims to quantify the effects of such threats on carnivorous plants more precisely, and to estimate their extinction risks; only 17% of species had been assessed as of 2011, according to the IUCN.[54] Conserving carnivorous plants helps maintain important ecosystems and prevents the secondary extinction of specialist species that rely on them,[11] including species that seek refuge in, or otherwise depend on, particular plants for their existence. Research suggests that a holistic approach, targeted at the habitat level, may be required for successful conservation.[55]
138
+
139
+ The classification of all flowering plants is currently in a state of flux. In the Cronquist system, the Droseraceae and Nepenthaceae were placed in the order Nepenthales, based on the radial symmetry of their flowers and their possession of insect traps. The Sarraceniaceae was placed either in the Nepenthales, or in its own order, the Sarraceniales. The Byblidaceae, Cephalotaceae, and Roridulaceae were placed in the Saxifragales; and the Lentibulariaceae in the Scrophulariales (now subsumed into the Lamiales[56]).
140
+
141
+ In more modern classifications, such as that of the Angiosperm Phylogeny Group, the families have been retained, but they have been redistributed amongst several disparate orders. It is also recommended that Drosophyllum be placed in a monotypic family outside the rest of the Droseraceae, probably more closely allied to the Dioncophyllaceae. The current recommendations are shown below (only carnivorous genera are listed):
142
+
143
+ In horticulture, carnivorous plants are considered a curiosity or a rarity, but are becoming more common in cultivation with the advent of mass-production tissue-culture propagation techniques. Venus flytraps are still the most commonly grown, usually available at garden centers and hardware stores, sometimes offered alongside other easy-to-grow varieties. Nurseries that specialise exclusively in carnivorous plants also exist, and more uncommon or demanding varieties can be obtained from them. California Carnivores, owned and operated by horticulturalist Peter D'Amato, is a notable example of such a nursery in the US.[57] Rob Cantley's Borneo Exotics in Sri Lanka is a large nursery that sells worldwide.[58]
144
+
145
+ Although different species of carnivorous plants have different cultivation requirements in terms of sunlight, humidity, soil moisture, etc., there are commonalities. Most carnivorous plants require rainwater, or water that has been distilled, deionised by reverse osmosis, or acidified to around pH 6.5 using sulfuric acid.[59][60] Common tap or drinking water contains minerals (particularly calcium salts) that will quickly build up and kill the plant.[61] This is because most carnivorous plants have evolved in nutrient-poor, acidic soils and are consequently extreme calcifuges. They are therefore very sensitive to excessive soil-borne nutrients. Since most of these plants are found in bogs, almost all are very intolerant of drying. There are exceptions:
146
+ tuberous sundews require a dry (summer) dormancy period, and Drosophyllum requires much drier conditions than most.
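+ As a rough illustration of the watering advice above, the sketch below flags whether a water sample is likely to be safe for most carnivorous plants. The pH figure of about 6.5 comes from the text; the dissolved-solids cut-off of roughly 50 ppm is an assumed grower rule of thumb rather than a value from this article, and the function itself is purely hypothetical.

```python
def water_suitable(ph: float, tds_ppm: float) -> bool:
    """Rough check of irrigation water for carnivorous plants.

    ph      -- around 6.5 or lower is preferred (figure from the text above)
    tds_ppm -- total dissolved solids; the ~50 ppm threshold is an assumed
               rule of thumb, not a figure given in the article
    """
    low_mineral = tds_ppm <= 50      # hard tap water is usually far above this
    not_alkaline = ph <= 6.5         # these calcifuges dislike hard, alkaline water
    return low_mineral and not_alkaline

# Rainwater or reverse-osmosis water typically passes; mineral-rich tap water does not.
print(water_suitable(ph=6.0, tds_ppm=10))    # True
print(water_suitable(ph=7.8, tds_ppm=300))   # False
```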
147
+
148
+ Outdoor-grown carnivorous plants generally catch more than enough insects to keep themselves properly fed. Insects may be fed to the plants by hand to supplement their diet; however, carnivorous plants are generally unable to digest large non-insect food items; bits of hamburger, for example, will simply rot, and this may cause the trap, or even the whole plant, to die.
149
+
150
+ A carnivorous plant that catches no insects at all will rarely die, although its growth may be impaired. In general, these plants are best left to their own devices: aside from underwatering and the use of tap water, the most common causes of Venus flytrap death are prodding the traps to watch them close and feeding them inappropriate items.
151
+
152
+ Most carnivorous plants require bright light, and most will look better under such conditions, as this encourages them to synthesise red and purple anthocyanin pigments. Nepenthes and Pinguicula will do better out of full sun, but most other species are happy in direct sunlight.
153
+
154
+ Carnivores mostly live in bogs, and those that do not are generally tropical. Hence, most require high humidity. On a small scale, this can be achieved by placing the plant in a wide saucer containing pebbles that are kept permanently wet. Small Nepenthes species grow well in large terraria.
155
+
156
+ Many carnivores are native to cold temperate regions and can be grown outside in a bog garden year-round. Most Sarracenia can tolerate temperatures well below freezing, despite most species being native to the southeastern United States. Species of Drosera and Pinguicula also tolerate subfreezing temperatures. Nepenthes species, which are tropical, require temperatures from 20 to 30 °C to thrive.
157
+
158
+ Carnivorous plants require appropriate nutrient-poor soil. Most appreciate a 3:1 mixture of Sphagnum peat to sharp horticultural sand (coir is an acceptable, and more eco-friendly, substitute for peat). Nepenthes will grow in orchid compost or in pure Sphagnum moss.
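+ The growing conditions described in the preceding paragraphs can be gathered into a simple lookup, sketched below. Each entry merely restates requirements mentioned in this section; the field names and the choice of genera are illustrative, and the table is far from a complete horticultural guide.

```python
# Illustrative summary of cultivation conditions mentioned in this section.
CULTIVATION = {
    "Sarracenia": {
        "light": "full sun",
        "temperature": "hardy; tolerates temperatures well below freezing",
        "medium": "3:1 Sphagnum peat (or coir) to sharp horticultural sand",
        "water": "rainwater or other mineral-free water, kept wet in trays",
    },
    "Nepenthes (lowland)": {
        "light": "bright, but out of full sun",
        "temperature": "constant warmth, roughly 20-30 °C, with high humidity",
        "medium": "orchid compost or pure Sphagnum moss",
        "water": "rainwater or other mineral-free water",
    },
    "Drosera (temperate)": {
        "light": "full sun",
        "temperature": "tolerates subfreezing temperatures",
        "medium": "3:1 Sphagnum peat (or coir) to sharp horticultural sand",
        "water": "rainwater or other mineral-free water, kept wet in trays",
    },
}

for genus, needs in CULTIVATION.items():
    print(f"{genus}: {needs['temperature']}")
```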
159
+
160
+ Carnivorous plants are themselves susceptible to infestation by parasites such as aphids or mealybugs. Although small infestations can be removed by hand, larger infestations necessitate use of an insecticide.
161
+
162
+ Isopropyl alcohol (rubbing alcohol) is effective as a topical insecticide, particularly on scale insects. Diazinon is an excellent systemic insecticide that is tolerated by most carnivorous plants. Malathion and Acephate (Orthene) have also been reported as tolerable by carnivorous plants.
163
+
164
+ Although insects can be a problem, by far the biggest killer of carnivorous plants (besides human maltreatment) is grey mold (Botrytis cinerea). This thrives under warm, humid conditions and can be a real problem in winter. To some extent, temperate carnivorous plants can be protected from this pathogen by ensuring that they are kept cool and well ventilated in winter and that any dead leaves are removed promptly. If this fails, a fungicide is in order.
165
+
166
+ The easiest carnivorous plants for beginners are those from the cool temperate zone. These plants will do well under cool greenhouse conditions (minimum 5 °C in winter, maximum 25 °C in summer) if kept in wide trays of acidified or rain water during summer and kept moist during winter:
167
+
168
+ Venus flytraps will do well under these conditions but are actually rather difficult to grow: even if treated well, they will often succumb to grey mold in winter unless well ventilated. Some of the lowland Nepenthes are very easy to grow as long as they are provided with relatively constant, hot and humid conditions.
169
+
170
+ A study published in 2009 by researchers from Tel Aviv University indicates that secretions produced by carnivorous plants contain compounds that have anti-fungal properties and may lead to the development of a new class of anti-fungal drugs that will be effective against infections that are resistant to current anti-fungal drugs.[62][63]
171
+
172
+ Carnivorous plants have long been the subject of popular interest and exposition, much of it highly inaccurate. Fictional carnivorous plants have been featured in a number of books, movies, television series, and video games. Typically, these fictional depictions include exaggerated characteristics, such as enormous size or abilities beyond the realm of reality, and can be viewed as a kind of artistic license. Two of the most famous examples of fictional carnivorous plants in popular culture are the man-eating plant of the 1960s black comedy The Little Shop of Horrors and the triffids of John Wyndham's The Day of the Triffids. Other productions, such as the film The Hellstrom Chronicle (1971) and various television series, use accurate depictions of carnivorous plants for cinematic purposes.
173
+
174
+ The earliest known depiction of carnivorous plants in popular culture was an account of a large man-eating tree reported to have consumed a young woman in Madagascar in 1878. The South Australian Register carried the story in 1881, accompanied by an illustration of the tree consuming the woman, who was said to be a member of the "little known but cruel tribe" called the Mkodos. The story was attributed to a Dr. Carl Liche, who supposedly witnessed the event. The account has been debunked as pure myth, as it appears that Dr. Liche, the Mkodos, and the tree were all fabrications.[64]
en/4659.html.txt ADDED
@@ -0,0 +1,294 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ Plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom Plantae. Historically, plants were treated as one of two kingdoms including all living things that were not animals, and all algae and fungi were treated as plants. However, all current definitions of Plantae exclude the fungi and some algae, as well as the prokaryotes (the archaea and bacteria). By one definition, plants form the clade Viridiplantae (Latin name for "green plants"), a group that includes the flowering plants, conifers and other gymnosperms, ferns and their allies, hornworts, liverworts, mosses and the green algae, but excludes the red and brown algae.
6
+
7
+ Green plants obtain most of their energy from sunlight via photosynthesis by primary chloroplasts that are derived from endosymbiosis with cyanobacteria. Their chloroplasts contain chlorophylls a and b, which give them their green color. Some plants are parasitic or mycotrophic and have lost the ability to produce normal amounts of chlorophyll or to photosynthesize. Plants are characterized by sexual reproduction and alternation of generations, although asexual reproduction is also common.
8
+
9
+ There are about 320,000 species of plants, of which the great majority, some 260–290 thousand, produce seeds.[5] Green plants provide a substantial proportion of the world's molecular oxygen,[6] and are the basis of most of Earth's ecosystems. Plants that produce grain, fruit and vegetables also form basic human foods and have been domesticated for millennia. Plants have many cultural and other uses, as ornaments, building materials, writing material and, in great variety, they have been the source of medicines and psychoactive drugs. The scientific study of plants is known as botany, a branch of biology.
10
+
11
+ All living things were traditionally placed into one of two groups, plants and animals. This classification may date from Aristotle (384 BC – 322 BC), who made the distinction between plants, which generally do not move, and animals, which often are mobile to catch their food. Much later, when Linnaeus (1707–1778) created the basis of the modern system of scientific classification, these two groups became the kingdoms Vegetabilia (later Metaphyta or Plantae) and Animalia (also called Metazoa). Since then, it has become clear that the plant kingdom as originally defined included several unrelated groups, and the fungi and several groups of algae were removed to new kingdoms. However, these organisms are still often considered plants, particularly in popular contexts.[citation needed]
12
+
13
+ The term "plant" generally implies the possession of the following traits: multicellularity, possession of cell walls containing cellulose, and the ability to carry out photosynthesis with primary chloroplasts.[7][8]
14
+
15
+ When the name Plantae or plant is applied to a specific group of organisms or taxon, it usually refers to one of four concepts. From least to most inclusive, these four groupings are:
16
+
17
+ Another way of looking at the relationships between the different groups that have been called "plants" is through a cladogram, which shows their evolutionary relationships. These are not yet completely settled, but one accepted relationship between the three groups described above is shown below[clarification needed].[16][17][18][19][20][21][22] Those which have been called "plants" are in bold (some minor groups have been omitted).
18
+
19
+ Rhodophyta (red algae)
20
+
21
+ Rhodelphidia (predatorial)
22
+
23
+ Picozoa
24
+
25
+ Glaucophyta (glaucophyte algae)
26
+
27
+ Mesostigmatophyceae
28
+
29
+ Chlorokybophyceae
30
+
31
+ Spirotaenia
32
+
33
+ Chlorophyta
34
+
35
+ Charales (stoneworts)
36
+
37
+ land plants or embryophytes
38
+
39
+ Cryptista
40
+
41
+ The way in which the groups of green algae are combined and named varies considerably between authors.
42
+
43
+ Algae comprise several different groups of organisms which produce food by photosynthesis and thus have traditionally been included in the plant kingdom. They range from large multicellular seaweeds to single-celled organisms and are classified into three groups: the green algae, red algae and brown algae. There is good evidence that the brown algae evolved independently from the others, from non-photosynthetic ancestors that formed endosymbiotic relationships with red algae rather than with cyanobacteria, and they are no longer classified as plants as defined here.[23][24]
44
+
45
+ The Viridiplantae, the green plants – green algae and land plants – form a clade, a group consisting of all the descendants of a common ancestor. With a few exceptions, the green plants have the following features in common: primary chloroplasts derived from cyanobacteria containing chlorophylls a and b, cell walls containing cellulose, and food stores in the form of starch contained within the plastids. They undergo closed mitosis without centrioles, and typically have mitochondria with flat cristae. The chloroplasts of green plants are surrounded by two membranes, suggesting they originated directly from endosymbiotic cyanobacteria.
46
+
47
+ Two additional groups, the Rhodophyta (red algae) and Glaucophyta (glaucophyte algae), also have primary chloroplasts that appear to be derived directly from endosymbiotic cyanobacteria, although they differ from Viridiplantae in the pigments which are used in photosynthesis and so are different in colour. These groups also differ from green plants in that the storage polysaccharide is floridean starch and is stored in the cytoplasm rather than in the plastids. They appear to have had a common origin with Viridiplantae and the three groups form the clade Archaeplastida, whose name implies that their chloroplasts were derived from a single ancient endosymbiotic event. This is the broadest modern definition of the term 'plant'.
48
+
49
+ In contrast, most other algae (e.g. brown algae/diatoms, haptophytes, dinoflagellates, and euglenids) not only have different pigments but also have chloroplasts with three or four surrounding membranes. They are not close relatives of the Archaeplastida, presumably having acquired chloroplasts separately from ingested or symbiotic green and red algae. They are thus not included in even the broadest modern definition of the plant kingdom, although they were in the past.
50
+
51
+ The green plants or Viridiplantae were traditionally divided into the green algae (including the stoneworts) and the land plants. However, it is now known that the land plants evolved from within a group of green algae, so that the green algae by themselves are a paraphyletic group, i.e. a group that excludes some of the descendants of a common ancestor. Paraphyletic groups are generally avoided in modern classifications, so that in recent treatments the Viridiplantae have been divided into two clades, the Chlorophyta and the Streptophyta (including the land plants and Charophyta).[25][26]
52
+
53
+ The Chlorophyta (a name that has also been used for all green algae) are the sister group to the Charophytes, from which the land plants evolved. There are about 4,300 species,[27] mainly unicellular or multicellular marine organisms such as the sea lettuce, Ulva.
54
+
55
+ The other group within the Viridiplantae are the mainly freshwater or terrestrial Streptophyta, which consists of the land plants together with the Charophyta, itself consisting of several groups of green algae such as the desmids and stoneworts. Streptophyte algae are either unicellular or form multicellular filaments, branched or unbranched.[26] The genus Spirogyra is a filamentous streptophyte alga familiar to many, as it is often used in teaching and is one of the organisms responsible for the algal "scum" on ponds. The freshwater stoneworts strongly resemble land plants and are believed to be their closest relatives.[citation needed] Growing immersed in fresh water, they consist of a central stalk with whorls of branchlets.
56
+
57
+ Linnaeus' original classification placed the fungi within the Plantae, since they were unquestionably neither animals nor minerals and these were the only other alternatives. With 19th century developments in microbiology, Ernst Haeckel introduced the new kingdom Protista in addition to Plantae and Animalia, but whether fungi were best placed in the Plantae or should be reclassified as protists remained controversial. In 1969, Robert Whittaker proposed the creation of the kingdom Fungi. Molecular evidence has since shown that the most recent common ancestor (concestor) of the Fungi was probably more similar to that of the Animalia than to that of Plantae or any other kingdom.[28]
58
+
59
+ Whittaker's original reclassification was based on the fundamental difference in nutrition between the Fungi and the Plantae. Unlike plants, which generally gain carbon through photosynthesis, and so are called autotrophs, fungi do not possess chloroplasts and generally obtain carbon by breaking down and absorbing surrounding materials, and so are called heterotrophic saprotrophs. In addition, the substructure of multicellular fungi is different from that of plants, taking the form of many chitinous microscopic strands called hyphae, which may be further subdivided into cells or may form a syncytium containing many eukaryotic nuclei. Fruiting bodies, of which mushrooms are the most familiar example, are the reproductive structures of fungi, and are unlike any structures produced by plants.[citation needed]
60
+
61
+ The table below shows some species count estimates of different green plant (Viridiplantae) divisions. It suggests there are about 300,000 species of living Viridiplantae, of which 85–90% are flowering plants. (Note: as these are from different sources and different dates, they are not necessarily comparable, and like all species counts, are subject to a degree of uncertainty in some cases.)
62
+
63
+ Green algae (6,600–10,300)
64
+
65
+ Bryophytes (18,100–20,200)
66
+
67
+ Pteridophytes (12,200)
68
+
69
+ Seed plants (259,511)
70
+
71
+ The naming of plants is governed by the International Code of Nomenclature for algae, fungi, and plants and International Code of Nomenclature for Cultivated Plants (see cultivated plant taxonomy).
72
+
73
+ The evolution of plants has resulted in increasing levels of complexity, from the earliest algal mats, through bryophytes, lycopods, ferns to the complex gymnosperms and angiosperms of today. Plants in all of these groups continue to thrive, especially in the environments in which they evolved.
74
+
75
+ An algal scum formed on the land 1,200 million years ago, but it was not until the Ordovician Period, around 450 million years ago, that land plants appeared.[39] However, new evidence from the study of carbon isotope ratios in Precambrian rocks has suggested that complex photosynthetic plants developed on the earth over 1000 m.y.a.[40] For more than a century it has been assumed that the ancestors of land plants evolved in aquatic environments and then adapted to a life on land, an idea usually credited to botanist Frederick Orpen Bower in his 1908 book The Origin of a Land Flora. A recent alternative view, supported by genetic evidence, is that they evolved from terrestrial single-celled algae,[41] and that even the common ancestor of red and green algae, and the unicellular freshwater algae glaucophytes, originated in a terrestrial environment in freshwater biofilms or microbial mats.[42] Primitive land plants began to diversify in the late Silurian Period, around 420 million years ago, and the results of their diversification are displayed in remarkable detail in an early Devonian fossil assemblage from the Rhynie chert. This chert preserved early plants in cellular detail, petrified in volcanic springs. By the middle of the Devonian Period most of the features recognised in plants today are present, including roots, leaves and secondary wood, and by late Devonian times seeds had evolved.[43] Late Devonian plants had thereby reached a degree of sophistication that allowed them to form forests of tall trees. Evolutionary innovation continued in the Carboniferous and later geological periods and is ongoing today. Most plant groups were relatively unscathed by the Permo-Triassic extinction event, although the structures of communities changed. This may have set the scene for the evolution of flowering plants in the Triassic (~200 million years ago), which exploded in the Cretaceous and Tertiary. The latest major group of plants to evolve were the grasses, which became important in the mid Tertiary, from around 40 million years ago. The grasses, as well as many other groups, evolved new mechanisms of metabolism to survive the low CO2 and warm, dry conditions of the tropics over the last 10 million years.
76
+
77
+ A 1997 proposed phylogenetic tree of Plantae, after Kenrick and Crane,[44] is as follows, with modification to the Pteridophyta from Smith et al.[45] The Prasinophyceae are a paraphyletic assemblage of early diverging green algal lineages, but are treated as a group outside the Chlorophyta:[46] later authors have not followed this suggestion.
78
+
79
+ Prasinophyceae (micromonads)
80
+
81
+ Spermatophytes (seed plants)
82
+
83
+ Progymnospermophyta †
84
+
85
+ Pteridopsida (true ferns)
86
+
87
+ Marattiopsida
88
+
89
+ Equisetopsida (horsetails)
90
+
91
+ Psilotopsida (whisk ferns & adders'-tongues)
92
+
93
+ Cladoxylopsida †
94
+
95
+ Lycopodiophyta
96
+
97
+ Zosterophyllophyta †
98
+
99
+ Rhyniophyta †
100
+
101
+ Aglaophyton †
102
+
103
+ Horneophytopsida †
104
+
105
+ Bryophyta (mosses)
106
+
107
+ Anthocerotophyta (hornworts)
108
+
109
+ Marchantiophyta (liverworts)
110
+
111
+ Charophyta
112
+
113
+ Trebouxiophyceae (Pleurastrophyceae)
114
+
115
+ Chlorophyceae
116
+
117
+ Ulvophyceae
118
+
119
+ A newer proposed classification follows Leliaert et al. 2011[47] and modified with Silar 2016[20][21][48][49] for the green algae clades and Novíkov & Barabaš-Krasni 2015[50] for the land plants clade. Notice that the Prasinophyceae are here placed inside the Chlorophyta.
120
+
121
+ Mesostigmatophyceae
122
+
123
+ Chlorokybophyceae
124
+
125
+ Spirotaenia
126
+
127
+ Chlorophyta inc. Prasinophyceae
128
+
129
+ Streptofilum
130
+
131
+ Klebsormidiophyta
132
+
133
+ Charophyta Rabenhorst 1863 emend. Lewis & McCourt 2004 (Stoneworts)
134
+
135
+ Coleochaetophyta
136
+
137
+ Zygnematophyta
138
+
139
+ Marchantiophyta (Liverworts)
140
+
141
+ Bryophyta (True mosses)
142
+
143
+ Anthocerotophyta (Non-flowering hornworts)
144
+
145
+ †Horneophyta
146
+
147
+ †Aglaophyta
148
+
149
+ Tracheophyta (Vascular Plants)
150
+
151
+ Later, a phylogeny based on genomes and transcriptomes from 1,153 plant species was proposed.[51] The placing of algal groups is supported by phylogenies based on genomes from the Mesostigmatophyceae and Chlorokybophyceae that have since been sequenced.[52][53] The classification of Bryophyta is supported both by Puttick et al. 2018,[54] and by phylogenies involving the hornwort genomes that have also since been sequenced.[55][56]
152
+
153
+ Rhodophyta
154
+
155
+ Glaucophyta
156
+
157
+ Chlorophyta
158
+
159
+ Prasinococcales
160
+
161
+
162
+
163
+ Mesostigmatophyceae
164
+
165
+ Chlorokybophyceae
166
+
167
+ Spirotaenia
168
+
169
+ Klebsormidiales
170
+
171
+ Chara
172
+
173
+ Coleochaetales
174
+
175
+ Zygnematophyceae
176
+
177
+ Hornworts
178
+
179
+ Liverworts
180
+
181
+ Mosses
182
+
183
+ Lycophytes
184
+
185
+ Ferns
186
+
187
+ Gymnosperms
188
+
189
+ Angiosperms
190
+
191
+ The plants that are likely most familiar to us are the multicellular land plants, called embryophytes. Embryophytes include the vascular plants, such as ferns, conifers and flowering plants. They also include the bryophytes, of which mosses and liverworts are the most common.
192
+
193
+ All of these plants have eukaryotic cells with cell walls composed of cellulose, and most obtain their energy through photosynthesis, using light, water and carbon dioxide to synthesize food. About three hundred plant species do not photosynthesize but are parasites on other species of photosynthetic plants. Embryophytes are distinguished from green algae, which represent a mode of photosynthetic life similar to the kind modern plants are believed to have evolved from, by having specialized reproductive organs protected by non-reproductive tissues.
194
+
195
+ Bryophytes first appeared during the early Paleozoic. They mainly live in habitats where moisture is available for significant periods, although some species, such as Targionia, are desiccation-tolerant. Most species of bryophytes remain small throughout their life-cycle. This involves an alternation between two generations: a haploid stage, called the gametophyte, and a diploid stage, called the sporophyte. In bryophytes, the sporophyte is always unbranched and remains nutritionally dependent on its parent gametophyte. The embryophytes have the ability to secrete a cuticle on their outer surface, a waxy layer that confers resistance to desiccation. In the mosses and hornworts a cuticle is usually only produced on the sporophyte. Stomata are absent from liverworts, but occur on the sporangia of mosses and hornworts, allowing gas exchange.
196
+
197
+ Vascular plants first appeared during the Silurian period, and by the Devonian had diversified and spread into many different terrestrial environments. They developed a number of adaptations that allowed them to spread into increasingly more arid places, notably the vascular tissues xylem and phloem, that transport water and food throughout the organism. Root systems capable of obtaining soil water and nutrients also evolved during the Devonian. In modern vascular plants, the sporophyte is typically large, branched, nutritionally independent and long-lived, but there is increasing evidence that Paleozoic gametophytes were just as complex as the sporophytes. The gametophytes of all vascular plant groups evolved to become reduced in size and prominence in the life cycle.
198
+
199
+ In seed plants, the microgametophyte is reduced from a multicellular free-living organism to a few cells in a pollen grain and the miniaturised megagametophyte remains inside the megasporangium, attached to and dependent on the parent plant. A megasporangium enclosed in a protective layer called an integument is known as an ovule. After fertilisation by means of sperm produced by pollen grains, an embryo sporophyte develops inside the ovule. The integument becomes a seed coat, and the ovule develops into a seed. Seed plants can survive and reproduce in extremely arid conditions, because they are not dependent on free water for the movement of sperm, or the development of free living gametophytes.
200
+
201
+ The first seed plants, pteridosperms (seed ferns), now extinct, appeared in the Devonian and diversified through the Carboniferous. They were the ancestors of modern gymnosperms, of which four surviving groups are widespread today, particularly the conifers, which are dominant trees in several biomes. The name gymnosperm comes from the Greek composite word γυμνόσπερμος (γυμνός gymnos, "naked" and σπέρμα sperma, "seed"), as the ovules and subsequent seeds are not enclosed in a protective structure (carpels or fruit), but are borne naked, typically on cone scales.
202
+
203
+ Plant fossils include roots, wood, leaves, seeds, fruit, pollen, spores, phytoliths, and amber (the fossilized resin produced by some plants). Fossil land plants are recorded in terrestrial, lacustrine, fluvial and nearshore marine sediments. Pollen, spores and algae (dinoflagellates and acritarchs) are used for dating sedimentary rock sequences. The remains of fossil plants are not as common as fossil animals, although plant fossils are locally abundant in many regions worldwide.
204
+
205
+ The earliest fossils clearly assignable to Kingdom Plantae are fossil green algae from the Cambrian. These fossils resemble calcified multicellular members of the Dasycladales. Earlier Precambrian fossils are known that resemble single-cell green algae, but definitive identity with that group of algae is uncertain.
206
+
207
+ The earliest fossils attributed to green algae date from the Precambrian (ca. 1200 mya).[57][58] The resistant outer walls of prasinophyte cysts (known as phycomata) are well preserved in fossil deposits of the Paleozoic (ca. 250–540 mya). A filamentous fossil (Proterocladus) from middle Neoproterozoic deposits (ca. 750 mya) has been attributed to the Cladophorales, while the oldest reliable records of the Bryopsidales, Dasycladales and stoneworts are from the Paleozoic.[46][59]
208
+
209
+ The oldest known fossils of embryophytes date from the Ordovician, though such fossils are fragmentary. By the Silurian, fossils of whole plants are preserved, including the simple vascular plant Cooksonia in the mid-Silurian and the much larger and more complex lycophyte Baragwanathia longifolia in the late Silurian. From the early Devonian Rhynie chert, detailed fossils of lycophytes and rhyniophytes have been found that show details of the individual cells within the plant organs and the symbiotic association of these plants with fungi of the order Glomales. The Devonian period also saw the evolution of leaves and roots, and the first modern tree, Archaeopteris. This tree with fern-like foliage and a trunk with conifer-like wood was heterosporous, producing spores of two different sizes, an early step in the evolution of seeds.[60]
210
+
211
+ The Coal measures are a major source of Paleozoic plant fossils, with many groups of plants in existence at this time. The spoil heaps of coal mines are the best places to collect; coal itself is the remains of fossilised plants, though structural detail of the plant fossils is rarely visible in coal. In the Fossil Grove at Victoria Park in Glasgow, Scotland, the stumps of Lepidodendron trees are found in their original growth positions.
212
+
213
+ The fossilized remains of conifer and angiosperm roots, stems and branches may be locally abundant in lake and inshore sedimentary rocks from the Mesozoic and Cenozoic eras. Sequoia and its allies, magnolia, oak, and palms are often found.
214
+
215
+ Petrified wood is common in some parts of the world, and is most frequently found in arid or desert areas where it is more readily exposed by erosion. Petrified wood is often heavily silicified (the organic material replaced by silicon dioxide), and the impregnated tissue is often preserved in fine detail. Such specimens may be cut and polished using lapidary equipment. Fossil forests of petrified wood have been found in all continents.
216
+
217
+ Fossils of seed ferns such as Glossopteris are widely distributed throughout several continents of the Southern Hemisphere, a fact that gave support to Alfred Wegener's early ideas regarding Continental drift theory.
218
+
219
+ Most of the solid material in a plant is taken from the atmosphere. Through the process of photosynthesis, most plants use the energy in sunlight to convert carbon dioxide from the atmosphere, plus water, into simple sugars. These sugars are then used as building blocks and form the main structural component of the plant. Chlorophyll, a green-colored, magnesium-containing pigment is essential to this process; it is generally present in plant leaves, and often in other plant parts as well. Parasitic plants, on the other hand, use the resources of their host to provide the materials needed for metabolism and growth.
220
+
221
+ Plants usually rely on soil primarily for support and water (in quantitative terms), but they also obtain compounds of nitrogen, phosphorus, potassium, magnesium and other elemental nutrients from the soil. Epiphytic and lithophytic plants depend on air and nearby debris for nutrients, and carnivorous plants supplement their nutrient requirements, particularly for nitrogen and phosphorus, with insect prey that they capture. For the majority of plants to grow successfully they also require oxygen in the atmosphere and around their roots (soil gas) for respiration. Plants use oxygen and glucose (which may be produced from stored starch) to provide energy.[61] Some plants grow as submerged aquatics, using oxygen dissolved in the surrounding water, and a few specialized vascular plants, such as mangroves and reed (Phragmites australis),[62] can grow with their roots in anoxic conditions.
222
+
223
+ The genome of a plant controls its growth. For example, selected varieties or genotypes of wheat grow rapidly, maturing within 110 days, whereas others, in the same environmental conditions, grow more slowly and mature within 155 days.[63]
224
+
225
+ Growth is also determined by environmental factors, such as temperature, available water, available light, carbon dioxide and available nutrients in the soil. Any change in the availability of these external conditions will be reflected in the plant's growth and the timing of its development.[citation needed]
226
+
227
+ Biotic factors also affect plant growth. Plants can be so crowded that no single individual produces normal growth, causing etiolation and chlorosis. Optimal plant growth can be hampered by grazing animals, suboptimal soil composition, lack of mycorrhizal fungi, and attacks by insects or plant diseases, including those caused by bacteria, fungi, viruses, and nematodes.[63]
228
+
229
+ Simple plants like algae may have short life spans as individuals, but their populations are commonly seasonal. Annual plants grow and reproduce within one growing season, biennial plants grow for two growing seasons and usually reproduce in the second year, and perennial plants live for many growing seasons and, once mature, often reproduce annually. These designations often depend on climate and other environmental factors. Plants that are annual in alpine or temperate regions can be biennial or perennial in warmer climates. Among the vascular plants, perennials include both evergreens that keep their leaves the entire year and deciduous plants that lose their leaves for some part of it. In temperate and boreal climates, they generally lose their leaves during the winter; many tropical plants lose their leaves during the dry season.[citation needed]
230
+
231
+ The growth rate of plants is extremely variable. Some mosses grow less than 0.001 millimeters per hour (mm/h), while most trees grow 0.025–0.250 mm/h. Some climbing species, such as kudzu, which do not need to produce thick supportive tissue, may grow up to 12.5 mm/h.[citation needed]
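+ To put these figures in more familiar units, the short conversion below restates them per day and per year; it is ordinary unit arithmetic applied to the rates quoted above, nothing more.

```python
HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365

def mm_per_hour_to_m_per_year(rate_mm_h: float) -> float:
    return rate_mm_h * HOURS_PER_DAY * DAYS_PER_YEAR / 1000.0

def mm_per_hour_to_cm_per_day(rate_mm_h: float) -> float:
    return rate_mm_h * HOURS_PER_DAY / 10.0

print(mm_per_hour_to_m_per_year(0.001))  # slow-growing moss: under 0.01 m per year
print(mm_per_hour_to_m_per_year(0.250))  # fast-growing tree: roughly 2 m per year
print(mm_per_hour_to_cm_per_day(12.5))   # kudzu: about 30 cm per day
```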
232
+
233
+ Plants protect themselves from frost and dehydration stress with antifreeze proteins, heat-shock proteins and sugars (sucrose is common). LEA (Late Embryogenesis Abundant) protein expression is induced by stresses and protects other proteins from aggregation as a result of desiccation and freezing.[64]
234
+
235
+ When water freezes in plants, the consequences for the plant depend very much on whether the freezing occurs within cells (intracellularly) or outside cells in intercellular spaces.[65] Intracellular freezing, which usually kills the cell[66] regardless of the hardiness of the plant and its tissues, seldom occurs in nature because rates of cooling are rarely high enough to support it. Rates of cooling of several degrees Celsius per minute are typically needed to cause intracellular formation of ice.[67] At rates of cooling of a few degrees Celsius per hour, segregation of ice occurs in intercellular spaces.[68] This may or may not be lethal, depending on the hardiness of the tissue. At freezing temperatures, water in the intercellular spaces of plant tissue freezes first, though the water may remain unfrozen until temperatures drop below −7 °C (19 °F).[65] After the initial formation of intercellular ice, the cells shrink as water is lost to the segregated ice, and the cells undergo freeze-drying. This dehydration is now considered the fundamental cause of freezing injury.
236
+
237
+ Plants are continuously exposed to a range of biotic and abiotic stresses. These stresses often cause DNA damage directly, or indirectly via the generation of reactive oxygen species.[69] Plants are capable of a DNA damage response that is a critical mechanism for maintaining genome stability.[70] The DNA damage response is particularly important during seed germination, since seed quality tends to deteriorate with age in association with DNA damage accumulation.[71] During germination repair processes are activated to deal with this accumulated DNA damage.[72] In particular, single- and double-strand breaks in DNA can be repaired.[73] The DNA checkpoint kinase ATM has a key role in integrating progression through germination with repair responses to the DNA damages accumulated by the aged seed.[74]
238
+
239
+ Plant cells are typically distinguished by their large water-filled central vacuole, chloroplasts, and rigid cell walls that are made up of cellulose, hemicellulose, and pectin. Cell division is also characterized by the development of a phragmoplast for the construction of a cell plate in the late stages of cytokinesis. Just as in animals, plant cells differentiate and develop into multiple cell types. Totipotent meristematic cells can differentiate into vascular, storage, protective (e.g. epidermal layer), or reproductive tissues, with more primitive plants lacking some tissue types.[75]
240
+
241
+ Plants are photosynthetic, which means that they manufacture their own food molecules using energy obtained from light. The primary mechanism plants have for capturing light energy is the pigment chlorophyll. All green plants contain two forms of chlorophyll, chlorophyll a and chlorophyll b. The latter of these pigments is not found in red or brown algae.
242
+ The simple equation of photosynthesis is as follows:
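+ 6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2, i.e. carbon dioxide and water, using light energy, are converted into glucose and oxygen.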
243
+
244
+ By means of cells that behave like nerves, plants receive and distribute within their systems information about incident light intensity and quality. Incident light that stimulates a chemical reaction in one leaf will cause a chain reaction of signals to the entire plant via a type of cell termed a bundle sheath cell. Researchers from the Warsaw University of Life Sciences in Poland found that plants have a specific memory for varying light conditions, which prepares their immune systems against seasonal pathogens.[76] Plants use pattern-recognition receptors to recognize conserved microbial signatures. This recognition triggers an immune response. The first plant receptors of conserved microbial signatures were identified in rice (XA21, 1995)[77] and in Arabidopsis thaliana (FLS2, 2000).[78] Plants also carry immune receptors that recognize highly variable pathogen effectors. These include the NBS-LRR class of proteins.
245
+
246
+ Vascular plants differ from other plants in that nutrients are transported between their different parts through specialized structures, called xylem and phloem. They also have roots for taking up water and minerals. The xylem moves water and minerals from the root to the rest of the plant, and the phloem provides the roots with sugars and other nutrients produced by the leaves.[75]
247
+
248
+ Plants have some of the largest genomes among all organisms.[79] The largest plant genome (in terms of gene number) is that of wheat (Triticum aestivum), predicted to encode ≈94,000 genes[80] and thus almost 5 times as many as the human genome. The first plant genome sequenced was that of Arabidopsis thaliana, which encodes about 25,500 genes.[81] In terms of sheer DNA sequence, the smallest published genome is that of the carnivorous bladderwort (Utricularia gibba) at 82 Mb (although it still encodes 28,500 genes),[82] while the largest, from the Norway spruce (Picea abies), extends over 19,600 Mb (encoding about 28,300 genes).[83]
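+ A quick calculation with the figures just quoted shows how weakly genome size and gene number are related; the snippet below simply divides the reported gene counts by the reported genome sizes.

```python
# (gene count, genome size in megabases), as reported above
genomes = {
    "Utricularia gibba (bladderwort)": (28_500, 82),
    "Picea abies (Norway spruce)":     (28_300, 19_600),
}

for species, (genes, size_mb) in genomes.items():
    print(f"{species}: {genes / size_mb:.1f} genes per Mb")
# The bladderwort packs roughly 350 genes into each megabase of DNA,
# the spruce only about 1.4, despite their similar total gene counts.
```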
249
+
250
+ The photosynthesis conducted by land plants and algae is the ultimate source of energy and organic material in nearly all ecosystems. Photosynthesis, at first by cyanobacteria and later by photosynthetic eukaryotes, radically changed the composition of the early Earth's anoxic atmosphere, which as a result is now 21% oxygen. Animals and most other organisms are aerobic, relying on oxygen; those that do not are confined to relatively rare anaerobic environments. Plants are the primary producers in most terrestrial ecosystems and form the basis of the food web in those ecosystems. Many animals rely on plants for shelter as well as oxygen and food.[citation needed]
251
+
252
+ Land plants are key components of the water cycle and several other biogeochemical cycles. Some plants have coevolved with nitrogen fixing bacteria, making plants an important part of the nitrogen cycle. Plant roots play an essential role in soil development and the prevention of soil erosion.[citation needed]
253
+
254
+ Plants are distributed almost worldwide. While they inhabit a multitude of biomes and ecoregions, few can be found beyond the tundras at the northernmost regions of continental shelves. At the southern extremes, plants of the Antarctic flora have adapted tenaciously to the prevailing conditions.[citation needed]
255
+
256
+ Plants are often the dominant physical and structural component of habitats where they occur. Many of the Earth's biomes are named for the type of vegetation because plants are the dominant organisms in those biomes, such as grasslands, taiga and tropical rainforest.[citation needed]
257
+
258
+ Numerous animals have coevolved with plants. Many animals pollinate flowers in exchange for food in the form of pollen or nectar. Many animals disperse seeds, often by eating fruit and passing the seeds in their feces. Myrmecophytes are plants that have coevolved with ants. The plant provides a home, and sometimes food, for the ants. In exchange, the ants defend the plant from herbivores and sometimes competing plants. Ant wastes provide organic fertilizer.
259
+
260
+ The majority of plant species have various kinds of fungi associated with their root systems in a kind of mutualistic symbiosis known as mycorrhiza. The fungi help the plants gain water and mineral nutrients from the soil, while the plant gives the fungi carbohydrates manufactured in photosynthesis. Some plants serve as homes for endophytic fungi that protect the plant from herbivores by producing toxins. The fungal endophyte, Neotyphodium coenophialum, in tall fescue (Festuca arundinacea) does tremendous economic damage to the cattle industry in the U.S.
261
+
262
+ Various forms of parasitism are also fairly common among plants, from the semi-parasitic mistletoe that merely takes some nutrients from its host, but still has photosynthetic leaves, to the fully parasitic broomrape and toothwort that acquire all their nutrients through connections to the roots of other plants, and so have no chlorophyll. Some plants, known as myco-heterotrophs, parasitize mycorrhizal fungi, and hence act as epiparasites on other plants.
263
+
264
+ Many plants are epiphytes, meaning they grow on other plants, usually trees, without parasitizing them. Epiphytes may indirectly harm their host plant by intercepting mineral nutrients and light that the host would otherwise receive. The weight of large numbers of epiphytes may break tree limbs. Hemiepiphytes like the strangler fig begin as epiphytes but eventually set their own roots and overpower and kill their host. Many orchids, bromeliads, ferns and mosses often grow as epiphytes. Bromeliad epiphytes accumulate water in leaf axils to form phytotelmata that may contain complex aquatic food webs.[84]
265
+
266
+ Approximately 630 species of plants are carnivorous, such as the Venus flytrap (Dionaea muscipula) and sundews (Drosera species). They trap small animals and digest them to obtain mineral nutrients, especially nitrogen and phosphorus.[85]
267
+
268
+ The study of plant uses by people is called economic botany or ethnobotany.[86] Human cultivation of plants is part of agriculture, which is the basis of human civilization.[87] Plant agriculture is subdivided into agronomy, horticulture and forestry.[88]
269
+
270
+ Humans depend on plants for food, either directly or as feed for domestic animals. Agriculture deals with the production of food crops, and has played a key role in the history of world civilizations. Agriculture includes agronomy for arable crops, horticulture for vegetables and fruit, and forestry for timber.[89] About 7,000 species of plant have been used for food, though most of today's food is derived from only 30 species. The major staples include cereals such as rice and wheat, starchy roots and tubers such as cassava and potato, and legumes such as peas and beans. Vegetable oils such as olive oil provide lipids, while fruit and vegetables contribute vitamins and minerals to the diet.[90]
271
+
272
+ Medicinal plants are a primary source of organic compounds, both for their medicinal and physiological effects, and for the industrial synthesis of a vast array of organic chemicals.[91] Many hundreds of medicines are derived from plants, both traditional medicines used in herbalism[92][93] and chemical substances purified from plants or first identified in them, sometimes by ethnobotanical search, and then synthesised for use in modern medicine. Modern medicines derived from plants include aspirin, taxol, morphine, quinine, reserpine, colchicine, digitalis and vincristine. Plants used in herbalism include ginkgo, echinacea, feverfew, and Saint John's wort. The pharmacopoeia of Dioscorides, De Materia Medica, describing some 600 medicinal plants, was written between 50 and 70 AD and remained in use in Europe and the Middle East until around 1600 AD; it was the precursor of all modern pharmacopoeias.[94][95][96]
273
+
274
+ Plants grown as industrial crops are the source of a wide range of products used in manufacturing, sometimes so intensively as to risk harm to the environment.[97] Nonfood products include essential oils, natural dyes, pigments, waxes, resins, tannins, alkaloids, amber and cork. Products derived from plants include soaps, shampoos, perfumes, cosmetics, paint, varnish, turpentine, rubber, latex, lubricants, linoleum, plastics, inks, and gums. Renewable fuels from plants include firewood, peat and other biofuels.[98][99] The fossil fuels coal, petroleum and natural gas are derived from the remains of aquatic organisms including phytoplankton in geological time.[100]
275
+
276
+ Structural resources and fibres from plants are used to construct dwellings and to manufacture clothing. Wood is used not only for buildings, boats, and furniture, but also for smaller items such as musical instruments and sports equipment. Wood is pulped to make paper and cardboard.[101] Cloth is often made from cotton, flax, ramie or synthetic fibres such as rayon and acetate derived from plant cellulose. Thread used to sew cloth likewise comes in large part from cotton.[102]
277
+
278
+ Thousands of plant species are cultivated for aesthetic purposes as well as to provide shade, modify temperatures, reduce wind, abate noise, provide privacy, and prevent soil erosion. Plants are the basis of a multibillion-dollar per year tourism industry, which includes travel to historic gardens, national parks, rainforests, forests with colorful autumn leaves, and festivals such as Japan's[103] and America's cherry blossom festivals.[104]
279
+
280
+ While some gardens are planted with food crops, many are planted for aesthetic, ornamental, or conservation purposes. Arboretums and botanical gardens are public collections of living plants. In private outdoor gardens, lawn grasses, shade trees, ornamental trees, shrubs, vines, herbaceous perennials and bedding plants are used. Gardens may cultivate the plants in a naturalistic state, or may sculpt their growth, as with topiary or espalier. Gardening is the most popular leisure activity in the U.S., and working with plants, as in horticultural therapy, is beneficial for rehabilitating people with disabilities.[citation needed]
281
+
282
+ Plants may also be grown or kept indoors as houseplants, or in specialized buildings such as greenhouses that are designed for the care and cultivation of living plants. The Venus flytrap, sensitive plant and resurrection plant are examples of plants sold as novelties. There are also art forms specializing in the arrangement of cut or living plants, such as bonsai, ikebana, and the arrangement of cut or dried flowers. Ornamental plants have sometimes changed the course of history, as in tulipomania.[105]
283
+
284
+ Architectural designs resembling plants appear in the capitals of Ancient Egyptian columns, which were carved to resemble either the Egyptian white lotus or the papyrus.[106] Images of plants are often used in painting and photography, as well as on textiles, money, stamps, flags and coats of arms.[citation needed]
285
+
286
+ Basic biological research has often been done with plants. In genetics, the breeding of pea plants allowed Gregor Mendel to derive the basic laws governing inheritance,[107] and examination of chromosomes in maize allowed Barbara McClintock to demonstrate their connection to inherited traits.[108] The plant Arabidopsis thaliana is used in laboratories as a model organism to understand how genes control the growth and development of plant structures.[109] NASA predicts that space stations or space colonies will one day rely on plants for life support.[110]
287
+
288
+ Ancient trees are revered and many are famous. Tree rings themselves are an important method of dating in archeology, and serve as a record of past climates.[citation needed]
289
+
290
+ Plants figure prominently in mythology, religion and literature. They are used as national and state emblems, including state trees and state flowers. Plants are often used as memorials, gifts and to mark special occasions such as births, deaths, weddings and holidays. The arrangement of flowers may be used to send hidden messages.[citation needed]
291
+
292
+ Weeds are unwanted plants growing in managed environments such as farms, urban areas, gardens, lawns, and parks. People have spread plants beyond their native ranges and some of these introduced plants become invasive, damaging existing ecosystems by displacing native species, and sometimes becoming serious weeds of cultivation.[citation needed]
293
+
294
+ Plants may cause harm to animals, including people. Plants that produce windblown pollen provoke allergic reactions in people who suffer from hay fever. A wide variety of plants are poisonous. Toxalbumins are plant poisons fatal to most mammals and act as a serious deterrent to consumption. Several plants cause skin irritations when touched, such as poison ivy. Certain plants contain psychotropic chemicals, which are extracted and ingested or smoked, including nicotine from tobacco, cannabinoids from Cannabis sativa, cocaine from Erythroxylon coca and opium from the opium poppy. Smoking causes damage to health or even death, while some drugs may also be harmful or fatal to people.[111][112] Both illegal and legal drugs derived from plants may have negative effects on the economy, affecting worker productivity and law enforcement costs.[113][114]
en/466.html.txt ADDED
@@ -0,0 +1,59 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Australopithecus (/ˌɒstrələˈpɪθɪkəs/, OS-trə-lə-PITH-i-kəs;[1] from Latin australis, meaning 'southern', and Greek πίθηκος (pithekos), meaning 'ape'; singular: australopith) is a genus of hominins that existed in Africa from around 4.2[2] to 1.9 million years ago and from which the genus Homo, including modern humans, is considered to be descended. Australopithecus is a member of the subtribe Australopithecina,[3][4] which includes Paranthropus, Kenyanthropus,[5] Ardipithecus[5] and Praeanthropus,[6] though the term "australopithecine" is sometimes used to refer only to members of Australopithecus. Species include: A. garhi, A. africanus, A. sediba, A. afarensis, A. anamensis, A. bahrelghazali and A. deyiremeda. Debate exists as to whether other hominid species of this time, such as Paranthropus ('robust australopithecines'), belong to a separate genus or to Australopithecus ('gracile australopiths'), or whether some Australopithecus species should be reclassified into new genera.[2]
4
+
5
+ From palaeontological and archaeological evidence, Australopithecus apparently evolved in eastern Africa around 4.2 million years ago before spreading throughout the continent and eventually becoming extinct 1.9 million years ago (or 1.2 million years ago if Paranthropus is included).[7] While none of the species normally assigned directly to this genus survived, Australopithecus does not appear to be literally extinct (in the sense of having no living descendants), as the genus Homo probably emerged from an Australopithecus species[2][8][9][10][11] at some time between 3 and 2 million years ago.[12]
6
+
7
+ Australopithecus possessed two of three duplicated genes derived from SRGAP2 roughly 3.4 and 2.4 million years ago (SRGAP2B and SRGAP2C), the second of which contributed to the increase in number and migration of neurons in the human brain.[13][14] Significant changes to the hand first appear in the fossil record of later A. afarensis about 3 million years ago (fingers shortened relative to thumb and changes to the joints between the index finger and the trapezium and capitate).[15]
8
+
9
+ The first Australopithecus specimen, the type specimen, was discovered in 1924 in a lime quarry by workers at Taung, South Africa. The specimen was studied by the Australian anatomist Raymond Dart, who was then working at the University of the Witwatersrand in Johannesburg. The fossil skull was from a three-year-old bipedal primate that he named Australopithecus africanus. The first report was published in Nature in February 1925. Dart realised that the fossil contained a number of humanoid features, and so he came to the conclusion that this was an early human ancestor.[16] Later, Scottish paleontologist Robert Broom and Dart set out to search for more early hominin specimens, recovering several more A. africanus remains from various sites. Initially, anthropologists were largely hostile to the idea that these discoveries were anything but apes, though this changed during the late 1940s.[16] In 1950, evolutionary biologist Ernst Walter Mayr said that all bipedal apes should be classified into the genus Homo, and considered renaming Australopithecus to Homo transvaalensis.[17] However, the contrary view taken by Robinson in 1954, excluding australopiths from Homo, became the prevalent view.[17] The first australopithecine fossil discovered in eastern Africa was an A. boisei skull excavated by Mary Leakey in 1959 in Olduvai Gorge, Tanzania. Since then, the Leakey family has continued to excavate the gorge, uncovering further evidence for australopithecines, as well as for Homo habilis and Homo erectus.[16] The scientific community took 20 more years to widely accept Australopithecus as a member of the human family tree.
10
+
11
+ In 1997, an almost complete Australopithecus skeleton with skull was found in the Sterkfontein caves of Gauteng, South Africa. It is now called "Little Foot" and it is around 3.7 million years old. It was named Australopithecus prometheus[18][19] which has since been placed within A. africanus. Other fossil remains found in the same cave in 2008 were named Australopithecus sediba, which lived 1.9 million years ago. A. africanus probably evolved into A. sediba, which some scientists think may have evolved into H. erectus,[20] though this is heavily disputed.
12
+
13
+ A. afarensis, A. anamensis, and A. bahrelghazali were split off into the genus Praeanthropus, but this genus has been largely dismissed.[21]
14
+
15
+ The genus Australopithecus is considered to be a wastebasket taxon, whose members are united by their similar physiology rather than close relations with each other over other hominin genera. As such, the genus is paraphyletic, not consisting of a common ancestor and all of its descendants, and is considered an ancestor to Homo, Kenyanthropus, and Paranthropus.[22][23][24][25] Resolving this problem would have major ramifications for the nomenclature of all descendant species. Suggested possibilities have been to rename Homo sapiens to Australopithecus sapiens[26] (or even Pan sapiens[27][28]), or to move some Australopithecus species into new genera.[29]
16
+
17
+ Opinions differ as to whether Paranthropus should be included within Australopithecus,[30] and Paranthropus is suggested, along with Homo, to have developed as part of a clade with A. africanus as its basal root.[17] The members of Paranthropus appear to have a distinct robustness compared to the gracile australopiths, but it is unclear whether this indicates that all members stemmed from a common ancestor or that they independently evolved similar traits from occupying a similar niche.[31]
18
+
19
+ Occasional suggestions have been made (by Cela-Conde et al. 2002 and 2007) that A. africanus should also be moved to Paranthropus.[2] On the basis of craniodental evidence, Strait and Grine (2004) suggest that A. anamensis and A. garhi should be assigned to new genera.[32] It is debated whether or not A. bahrelghazali is simply a western version of A. afarensis and not a separate species.[33][34]
20
+
21
+ In taxonomic assessments of Australopithecus within the great apes, Paranthropus and Homo are placed as emerging from among the Australopithecus.[35] The genus Australopithecus as conventionally defined is assessed to be highly paraphyletic, i.e. it is not a natural group, with the genera Kenyanthropus, Paranthropus and Homo nested within it.[36][37][38]
22
+
23
+ A. anamensis may have descended from or was closely related to Ardipithecus ramidus.[39] A. anamensis shows some similarities to both Ar. ramidus and Sahelanthropus.[39]
24
+
25
+ Australopiths shared several traits with modern apes and humans, and were widespread throughout Eastern and Northern Africa by 3.5 million years ago (mya). The earliest evidence of fundamentally bipedal hominins is a 3.6 Ma fossil trackway in Laetoli, Tanzania, which bears a remarkable similarity to those of modern humans. The footprints have generally been classified as australopith, as they are the only form of prehuman hominins known to have existed in that region at that time.[40]
26
+
27
+ Australopithecus anamensis, A. afarensis, and A. africanus are among the most famous of the extinct hominins. A. africanus was once considered to be ancestral to the genus Homo (in particular Homo erectus). However, fossils assigned to the genus Homo have been found that are older than A. africanus.[citation needed] Thus, the genus Homo either split off from the genus Australopithecus at an earlier date (the latest common ancestor being either A. afarensis[citation needed] or an even earlier form, possibly Kenyanthropus[citation needed]), or both developed from a yet possibly unknown common ancestor independently.[citation needed]
28
+
29
+ According to the Chimpanzee Genome Project, the human–chimpanzee last common ancestor existed about five to six million years ago, assuming a constant rate of mutation. However, hominin species dated earlier than this could call it into question.[41] Sahelanthropus tchadensis, commonly called "Toumai", is about seven million years old and Orrorin tugenensis lived at least six million years ago. Because little is known of them, they remain controversial among scientists, since the molecular clock in humans indicates that humans and chimpanzees had a genetic split at least a million years later.[citation needed] One theory suggests that the human and chimpanzee lineages diverged somewhat at first, then some populations interbred around one million years after diverging.[41]
30
+
31
+ The brains of most species of Australopithecus were roughly 35% of the size of a modern human brain,[42] with an average endocranial volume of 466 cc (28.4 cu in).[12] Although this is more than the average endocranial volume of chimpanzee brains at 360 cc (22 cu in),[12] the earliest australopiths (A. anamensis) appear to have been within the chimpanzee range,[39] whereas some later australopith specimens have a larger endocranial volume than that of some early Homo fossils.[12]
32
+
33
+ Most species of Australopithecus were diminutive and gracile, usually standing 1.2 to 1.4 m (3 ft 11 in to 4 ft 7 in) tall. It is possible that they exhibited a considerable degree of sexual dimorphism, males being larger than females.[43] In modern human populations, males are on average a mere 15% larger than females, while in Australopithecus, males could be up to 50% larger than females by some estimates. However, the degree of sexual dimorphism is debated due to the fragmentary nature of australopith remains.[43]
34
+
35
+ According to A. Zihlman, Australopithecus body proportions closely resemble those of bonobos (Pan paniscus),[44] leading evolutionary biologist Jeremy Griffith to suggest that bonobos may be phenotypically similar to Australopithecus.[45] Furthermore, thermoregulatory models suggest that australopiths were fully hair covered, more like chimpanzees and bonobos, and unlike humans.[46]
36
+
37
+ The fossil record seems to indicate that Australopithecus is ancestral to Homo and modern humans. It was once assumed that large brain size had been a precursor to bipedalism, but the discovery of Australopithecus with a small brain but developed bipedality upset this theory. Nonetheless, it remains a matter of controversy as to how bipedalism first emerged. The advantages of bipedalism were that it left the hands free to grasp objects (e.g., carry food and young), and allowed the eyes to look over tall grasses for possible food sources or predators, but it is also argued that these advantages were not significant enough to cause the emergence of bipedalism.[citation needed] Earlier fossils, such as Orrorin tugenensis, indicate bipedalism around six million years ago, around the time of the split between humans and chimpanzees indicated by genetic studies. This suggests that erect, straight-legged walking originated as an adaptation to tree-dwelling.[47] Major changes to the pelvis and feet had already taken place before Australopithecus.[48] It was once thought that humans descended from a knuckle-walking ancestor,[49] but this is not well-supported.[50]
38
+
39
+ Australopithecines had thirty-two teeth, like modern humans. Their molars were parallel, like those of great apes, and they had a slight pre-canine gap (diastema). Their canines were smaller, like those of modern humans, and the teeth were less interlocked than in previous hominins. In fact, in some australopithecines, the canines are shaped more like incisors.[51] The molars of Australopithecus fit together in much the same way as those of humans do, with low crowns and four low, rounded cusps used for crushing. They have cutting edges on the crests.[51] However, australopiths generally evolved a larger postcanine dentition with thicker enamel.[52] Australopiths in general had thick enamel, like Homo, while other great apes have markedly thinner enamel.[51] Robust australopiths wore their molar surfaces down flat, unlike the more gracile species, who kept their crests.[51]
40
+
41
+ In a 1979 preliminary microwear study of Australopithecus fossil teeth, anthropologist Alan Walker theorized that robust australopiths ate predominantly fruit (frugivory).[53] Australopithecus species are thought to have eaten mainly fruit, vegetables, and tubers, and perhaps easy-to-catch animals such as small lizards. Much research has focused on a comparison between the South African species A. africanus and Paranthropus robustus. Early analyses of dental microwear in these two species showed that, compared to P. robustus, A. africanus had fewer microwear features and more scratches as opposed to pits on its molar wear facets.[54] Microwear patterns on the cheek teeth of A. afarensis and A. anamensis indicate that A. afarensis predominantly ate fruits and leaves, whereas A. anamensis included grasses and seeds (in addition to fruits and leaves).[55] The thickening of enamel in australopiths may have been a response to eating more ground-bound foods such as tubers, nuts, and cereal grains with gritty dirt and other small particulates which would wear away enamel. Gracile australopiths had larger incisors, which indicates tearing food was important, perhaps eating scavenged meat. Nonetheless, the wearing patterns on the teeth support a largely herbivorous diet.[51]
42
+
43
+ In 1992, trace-element studies of the strontium/calcium ratios in robust australopith fossils suggested the possibility of animal consumption, as did stable carbon isotopic analyses in 1994.[56] In 2005, fossil animal bones with butchery marks dating to 2.6 million years old were found at the site of Gona, Ethiopia. This implies meat consumption by at least one of three species of hominins occurring around that time: A. africanus, A. garhi, and/or P. aethiopicus.[57] In 2010, fossils of butchered animal bones dated 3.4 million years old were found in Ethiopia, close to regions where australopith fossils were found.[58]
44
+
45
+ Robust australopithecines (Paranthropus) had larger cheek teeth than gracile australopiths, possibly because robust australopithecines had more tough, fibrous plant material in their diets, whereas gracile australopiths ate more hard and brittle foods.[51] However, such divergence in chewing adaptations may instead have been a response to fallback food availability. In leaner times, robust and gracile australopithecines may have turned to different low-quality foods (fibrous plants for the former, and hard food for the latter), but in more bountiful times, they had more variable and overlapping diets.[59][60]
46
+
47
+ A study in 2018 found non-carious cervical lesions, caused by acid erosion, on the teeth of A. africanus, probably caused by consumption of acidic fruit.[61]
48
+
49
+ It was once thought that Australopithecus could not produce tools like Homo, but the discovery of A. garhi associated with large mammal bones bearing evidence of processing by stone tools showed this not to have been the case.[62][63] Discovered in 1994, this was the oldest evidence of tool manufacture at the time,[64][65] until the 2010 discovery of cut marks dating to 3.4 mya attributed to A. afarensis,[66] and the 2015 discovery of the Lomekwi culture from Lake Turkana dating to 3.3 mya possibly attributed to Kenyanthropus.[67] More stone tools dating to about 2.6 mya in Ledi-Geraru in the Afar Region were found in 2019, though these may be attributed to Homo.[68]
50
+
51
+ The spot where the first Australopithecus boisei was discovered in Tanzania.
52
+
53
+ Original skull of Mrs. Ples, a female A. africanus
54
+
55
+ Taung Child by Cicero Moraes, Arc-Team, Antrocom NPO, Museum of the University of Padua.
56
+
57
+ Cast of the skeleton of Lucy, an A. afarensis
58
+
59
+ Skull of the Taung child
en/4660.html.txt ADDED
@@ -0,0 +1,201 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ In evolutionary ecology, parasitism is a symbiotic relationship between species, where one organism, the parasite, lives on or in another organism, the host, causing it some harm, and is adapted structurally to this way of life.[1] The entomologist E. O. Wilson has characterised parasites as "predators that eat prey in units of less than one".[2] Parasites include protozoans such as the agents of malaria, sleeping sickness, and amoebic dysentery; animals such as hookworms, lice, mosquitoes, and vampire bats; fungi such as honey fungus and the agents of ringworm; and plants such as mistletoe, dodder, and the broomrapes. There are six major parasitic strategies of exploitation of animal hosts, namely parasitic castration, directly transmitted parasitism (by contact), trophically transmitted parasitism (by being eaten), vector-transmitted parasitism, parasitoidism, and micropredation.
6
+
7
+ Like predation, parasitism is a type of consumer-resource interaction,[3] but unlike predators, parasites, with the exception of parasitoids, are typically much smaller than their hosts, do not kill them, and often live in or on their hosts for an extended period. Parasites of animals are highly specialised, and reproduce at a faster rate than their hosts. Classic examples include interactions between vertebrate hosts and tapeworms, flukes, the malaria-causing Plasmodium species, and fleas.
8
+
9
+ Parasites reduce host fitness by general or specialised pathology, from parasitic castration to modification of host behaviour. Parasites increase their own fitness by exploiting hosts for resources necessary for their survival, in particular by feeding on them and by using intermediate (secondary) hosts to assist in their transmission from one definitive (primary) host to another. Although parasitism is often unambiguous, it is part of a spectrum of interactions between species, grading via parasitoidism into predation, through evolution into mutualism, and in some fungi, shading into being saprophytic.
10
+
11
+ People have known about parasites such as roundworms and tapeworms since ancient Egypt, Greece, and Rome. In Early Modern times, Antonie van Leeuwenhoek observed Giardia lamblia in his microscope in 1681, while Francesco Redi described internal and external parasites including sheep liver fluke and ticks. Modern parasitology developed in the 19th century. In human culture, parasitism has negative connotations. These were exploited to satirical effect in Jonathan Swift's 1733 poem "On Poetry: A Rhapsody", comparing poets to hyperparasitical "vermin". In fiction, Bram Stoker's 1897 Gothic horror novel Dracula and its many later adaptations featured a blood-drinking parasite. Ridley Scott's 1979 film Alien was one of many works of science fiction to feature a terrifying[4] parasitic alien species.
12
+
13
+ First used in English in 1539, the word parasite comes from the Medieval French parasite, from the Latin parasitus, the latinisation of the Greek παράσιτος (parasitos), "one who eats at the table of another"[5] and that from παρά (para), "beside, by"[6] + σῖτος (sitos), "wheat", hence "food".[7] The related term parasitism appears in English from 1611.[8]
14
+
15
+
16
+
17
+
18
+
19
+ Parasitism is a kind of symbiosis, a close and persistent long-term biological interaction between a parasite and its host. Unlike saprotrophs, parasites feed on living hosts, though some parasitic fungi, for instance, may continue to feed on hosts they have killed. Unlike commensalism and mutualism, the parasitic relationship harms the host, either feeding on it or, as in the case of intestinal parasites, consuming some of its food. Because parasites interact with other species, they can readily act as vectors of pathogens, causing disease.[9][10] Predation is by definition not a symbiosis, as the interaction is brief, but the entomologist E. O. Wilson has characterised parasites as "predators that eat prey in units of less than one".[2]
20
+
21
+ Within that scope are many possible strategies. Taxonomists classify parasites in a variety of overlapping schemes, based on their interactions with their hosts and on their life-cycles, which are sometimes very complex. An obligate parasite depends completely on the host to complete its life cycle, while a facultative parasite does not. Parasite life-cycles involving only one host are called "direct"; those with a definitive host (where the parasite reproduces sexually) and at least one intermediate host are called "indirect".[11][12] An endoparasite lives inside the host's body; an ectoparasite lives outside, on the host's surface.[13] Mesoparasites—like some copepods, for example—enter an opening in the host's body and remain partly embedded there.[14] Some parasites can be generalists, feeding on a wide range of hosts, but many parasites, and the majority of protozoans and helminths that parasitise animals, are specialists and extremely host-specific.[13] An early basic, functional division of parasites distinguished microparasites and macroparasites. These each had a mathematical model assigned in order to analyse the population movements of the host–parasite groupings.[15] The microorganisms and viruses that can reproduce and complete their life cycle within the host are known as microparasites. Macroparasites are the multicellular organisms that reproduce and complete their life cycle outside of the host or on the host's body.[15][16]
22
+
23
+ Much of the thinking on types of parasitism has focussed on terrestrial animal parasites of animals, such as helminths. Those in other environments and with other hosts often have analogous strategies. For example, the snubnosed eel is probably a facultative endoparasite (i.e., it is semiparasitic) that opportunistically burrows into and eats sick and dying fish.[17] Plant-eating insects such as scale insects, aphids, and caterpillars closely resemble ectoparasites, attacking much larger plants; they serve as vectors of bacteria, fungi and viruses which cause plant diseases. As female scale-insects cannot move, they are obligate parasites, permanently attached to their hosts.[15]
24
+
25
+ The sensory inputs that a parasite employs to identify and approach a potential host are known as "host cues". Such cues can include, for example, vibration,[18] exhaled carbon dioxide, skin odours, visual and heat signatures, and moisture.[19] Parasitic plants can use, for example, light, host physiochemistry, and volatiles to recognize potential hosts.[20]
26
+
27
+ There are six major parasitic strategies, namely parasitic castration, directly transmitted parasitism, trophically transmitted parasitism, vector-transmitted parasitism, parasitoidism, and micropredation. These apply to parasites whose hosts are plants as well as animals.[21][15] These strategies represent adaptive peaks; intermediate strategies are possible, but organisms in many different groups have consistently converged on these six, which are evolutionarily stable.[21]
28
+ A perspective on the evolutionary options can be gained by considering four questions: the effect on the fitness of a parasite's hosts; the number of hosts they have per life stage; whether the host is prevented from reproducing; and whether the effect depends on intensity (number of parasites per host). From this analysis, the major evolutionary strategies of parasitism emerge, alongside predation.[22]
29
+
30
+ Parasitic castrators partly or completely destroy their host's ability to reproduce, diverting the energy that would have gone into reproduction into host and parasite growth, sometimes causing gigantism in the host. The host's other systems are left intact, allowing it to survive and sustain the parasite.[21][23] Parasitic crustaceans such as those in the specialised barnacle genus Sacculina specifically cause damage to the gonads of their many species[24] of host crabs. In the case of Sacculina, the testes of over two-thirds of their crab hosts degenerate sufficiently for these male crabs to have gained female secondary sex characteristics such as broader abdomens, smaller claws and egg-grasping appendages. Various species of helminth castrate their hosts (such as insects and snails). This may be directly, whether mechanically by feeding on their gonads, or by secreting a chemical that destroys reproductive cells; or indirectly, whether by secreting a hormone or by diverting nutrients. For example, the trematode Zoogonus lasius, whose sporocysts lack mouths, castrates the intertidal marine snail Tritia obsoleta chemically, developing in its gonad and killing its reproductive cells.[23][25]
31
+
32
+ Directly transmitted parasites, not requiring a vector to reach their hosts, include parasites of terrestrial vertebrates such as lice and mites; marine parasites such as copepods and cyamid amphipods; monogeneans; and many species of nematodes, fungi, protozoans, bacteria, and viruses. Whether endoparasites or ectoparasites, each has a single host species. Within that species, most individuals are free or almost free of parasites, while a minority carry a large number of parasites; this highly uneven distribution is described as aggregated.[21]
33
+
34
+
35
+
36
+ Trophically transmitted parasites are transmitted by being eaten by a host. They include trematodes (all except schistosomes), cestodes, acanthocephalans, pentastomids, many round worms, and many protozoa such as Toxoplasma.[21] They have complex life cycles involving hosts of two or more species. In their juvenile stages, they infect and often encyst in the intermediate host. When this animal is eaten by a predator, the definitive host, the parasite survives the digestion process and matures into an adult; some live as intestinal parasites. Many trophically transmitted parasites modify the behaviour of their intermediate hosts, increasing their chances of being eaten by a predator. Like directly transmitted parasites, the distribution of trophically transmitted parasites among host individuals is aggregated.[21] Coinfection by multiple parasites is common.[26] Autoinfection, where (by exception) the whole of the parasite's life cycle takes place in a single primary host, can sometimes occur in helminths such as Strongyloides stercoralis.[27]
37
+
38
+ Vector-transmitted parasites rely on a third party, an intermediate host, where the parasite does not reproduce sexually[13] to carry them from one definitive host to another.[21] These parasites are microorganisms, namely protozoa, bacteria, or viruses, often intracellular pathogens (causing disease).[21] Their vectors are mostly hematophagic arthropods such as fleas, lice, ticks, and mosquitoes.[21][28] For example, the deer tick Ixodes scapularis acts as a vector for diseases including Lyme disease, babesiosis, and anaplasmosis.[29] Protozoan endoparasites, such as the malarial parasites in the genus Plasmodium and sleeping sickness parasites in the genus Trypanosoma, have infective stages in the host's blood which are transported to new hosts by biting insects.[30]
39
+
40
+ Parasitoids are insects which sooner or later kill their hosts, placing their relationship close to predation.[31] Most parasitoids are hymenopterans, parasitoid wasps; others include dipterans such as phorid flies. They can be divided into two groups, idiobionts and koinobionts, differing in their treatment of their hosts.[32]
41
+
42
+ Idiobiont parasitoids sting their often large prey on capture, either killing them outright or paralysing them immediately. The immobilised prey is then carried to a nest, sometimes alongside other prey if it is not large enough to support a parasitoid throughout its development. An egg is laid on top of the prey, and the nest is then sealed. The parasitoid develops rapidly through its larval and pupal stages, feeding on the provisions left for it.[32]
43
+
44
+ Koinobiont parasitoids, which include flies as well as wasps, lay their eggs inside young hosts, usually larvae. These are allowed to go on growing, so the host and parasitoid develop together for an extended period, ending when the parasitoids emerge as adults, leaving the prey dead, eaten from inside. Some koinobionts regulate their host's development, for example preventing it from pupating or making it moult whenever the parasitoid is ready to moult. They may do this by producing hormones that mimic the host's moulting hormones (ecdysteroids), or by regulating the host's endocrine system.[32]
45
+
46
+ Idiobiont parasitoid wasps immediately paralyse their hosts for their larvae (Pimplinae, pictured) to eat.[21]
47
+
48
+ Koinobiont parasitoid wasps like this braconid lay their eggs inside their hosts, which continue to grow and moult.
49
+
50
+ A phorid fly (centre left) laying eggs in the abdomen of a worker honey bee, altering its behaviour.
51
+
52
+
53
+
54
+ A micropredator attacks more than one host, reducing each host's fitness at least a small amount, and is only in contact with any one host intermittently. This makes them suitable as vectors as they can pass smaller parasites from one host to another.[21][33][22] Most micropredators are hematophagic, feeding on blood. They include annelids such as leeches, crustaceans such as branchiurans and gnathiid isopods, various dipterans such as mosquitoes and tsetse flies, other arthropods such as fleas and ticks, vertebrates such as lampreys, and mammals such as vampire bats.[21]
55
+
56
+ Parasites use a variety of methods to infect animal hosts, including physical contact, the fecal–oral route, free-living infectious stages, and vectors, suiting their differing hosts, life cycles, and ecological contexts.[34] Examples to illustrate some of the many possible combinations are given in the table.
57
+
58
+
59
+
60
+ Among the many variations on parasitic strategies are hyperparasitism,[36] social parasitism,[37] brood parasitism,[38] kleptoparasitism,[39] sexual parasitism,[40] and adelphoparasitism.[41]
61
+
62
+ Hyperparasites feed on another parasite, as exemplified by protozoa living in helminth parasites,[36] or facultative or obligate parasitoids whose hosts are either conventional parasites or parasitoids.[21][32] Levels of parasitism beyond secondary also occur, especially among facultative parasitoids. In oak gall systems, there can be up to five levels of parasitism.[42]
63
+
64
+ Hyperparasites can control their hosts' populations, and are used for this purpose in agriculture and to some extent in medicine. The controlling effects can be seen in the way that the CHV1 virus helps to control the damage that chestnut blight, Cryphonectria parasitica, does to American chestnut trees, and in the way that bacteriophages can limit bacterial infections. It is likely, though little researched, that most pathogenic microparasites have hyperparasites which may prove widely useful in both agriculture and medicine.[43]
65
+
66
+ Social parasites take advantage of interspecific interactions between members of social animals such as ants, termites, and bumblebees. Examples include the large blue butterfly, Phengaris arion, its larvae employing ant mimicry to parasitise certain ants,[37] Bombus bohemicus, a bumblebee which invades the hives of other bees and takes over reproduction while their young are raised by host workers, and Melipona scutellaris, a eusocial bee whose virgin queens escape killer workers and invade another colony without a queen.[44] An extreme example of interspecific social parasitism is found in the ant Tetramorium inquilinum, an obligate parasite which lives exclusively on the backs of other Tetramorium ants.[45] A mechanism for the evolution of social parasitism was first proposed by Carlo Emery in 1909.[46] Now known as "Emery's rule", it states that social parasites tend to be closely related to their hosts, often being in the same genus.[47][48][49]
67
+
68
+ Intraspecific social parasitism occurs in parasitic nursing, where some individual young take milk from unrelated females. In wedge-capped capuchins, higher ranking females sometimes take milk from low ranking females without any reciprocation.[50]
69
+
70
+ In brood parasitism, the hosts act as parents as they raise the young as their own. Brood parasites include birds in different families such as cowbirds, whydahs, cuckoos, and black-headed ducks. These do not build nests of their own, but leave their eggs in nests of other species. The eggs of some brood parasites mimic those of their hosts, while some cowbird eggs have tough shells, making them hard for the hosts to kill by piercing, both mechanisms implying selection by the hosts against parasitic eggs.[38][51][52] The adult female European cuckoo further mimics a predator, the European sparrowhawk, giving her time to lay her eggs in the host's nest unobserved.[53]
71
+
72
+ In kleptoparasitism (from Greek κλέπτης (kleptēs), "thief"), parasites steal food gathered by the host. The parasitism is often on close relatives, whether within the same species or between species in the same genus or family. For instance, the many lineages of cuckoo bees lay their eggs in the nest cells of other bees in the same family.[39] Kleptoparasitism is uncommon generally but conspicuous in birds; some such as skuas are specialised in pirating food from other seabirds, relentlessly chasing them down until they disgorge their catch.[54]
73
+
74
+ A unique approach is seen in some species of anglerfish, such as Ceratias holboelli, where the males are reduced to tiny sexual parasites, wholly dependent on females of their own species for survival, permanently attached below the female's body, and unable to fend for themselves. The female nourishes the male and protects him from predators, while the male gives nothing back except the sperm that the female needs to produce the next generation.[40]
75
+
76
+ Adelphoparasitism (from Greek ἀδελφός (adelphós), 'brother'[55]), also known as sibling parasitism, occurs where the host species is closely related to the parasite, often in the same family or genus.[41] In the citrus blackfly parasitoid Encarsia perplexa, unmated females may lay haploid eggs in the fully developed larvae of their own species, producing male offspring,[56] while the marine worm Bonellia viridis has a similar reproductive strategy, although the larvae are planktonic.[57]
77
+
78
+ Examples of the major variant strategies are illustrated.
79
+
80
+ A hyperparasitoid chalcid wasp on the cocoons of its host, itself a parasitoid braconid wasp
81
+
82
+ The large blue butterfly is an ant mimic and social parasite.
83
+
84
+ In brood parasitism, the host raises the young of another species, here a cowbird's egg, that has been laid in its nest.
85
+
86
+ The great skua is a powerful kleptoparasite, relentlessly pursuing other seabirds until they disgorge their catches of food.
87
+
88
+ The male anglerfish Ceratias holboelli lives as a tiny sexual parasite permanently attached below the female's body.
89
+
90
+ Encarsia perplexa (centre), a parasitoid of citrus blackfly (lower left), is also an adelphoparasite, laying eggs in larvae of its own species
91
+
92
+ A wide range of organisms is parasitic, from animals, plants, and fungi to protozoans, bacteria, and viruses.[58]
93
+
94
+ Parasitism is widespread in the animal kingdom,[61] and has evolved independently from free-living forms hundreds of times.[21] Many types of helminth including flukes and cestodes have complete life cycles involving two or more hosts. By far the largest group is the parasitoid wasps in the Hymenoptera.[21] The phyla and classes with the largest numbers of parasitic species are listed in the table. Numbers are conservative minimum estimates. The columns for Endo- and Ecto-parasitism refer to the definitive host, as documented in the Vertebrate and Invertebrate columns.[59]
95
+
96
+ A hemiparasite or partial parasite, such as mistletoe, derives some of its nutrients from another living plant, whereas a holoparasite, such as dodder, derives all of its nutrients from another plant.[62] Parasitic plants make up about one per cent of angiosperms and are in almost every biome in the world.[63][64] All these plants have modified roots, haustoria, which penetrate the host plants, connecting them to the conductive system – either the xylem, the phloem, or both. This provides them with the ability to extract water and nutrients from the host. A parasitic plant is classified depending on where it latches onto the host, either the stem or the root, and the amount of nutrients it requires. Since holoparasites have no chlorophyll and therefore cannot make food for themselves by photosynthesis, they are always obligate parasites, deriving all their food from their hosts.[63] Some parasitic plants can locate their host plants by detecting chemicals in the air or soil given off by host shoots or roots, respectively. About 4,500 species of parasitic plant in approximately 20 families of flowering plants are known.[65][63]
97
+
98
+ Species within Orobanchaceae (broomrapes) are some of the most economically destructive of all plants. Species of Striga (witchweeds) are estimated to cost billions of dollars a year in crop yield loss, infesting over 50 million hectares of cultivated land within Sub-Saharan Africa alone. Striga infects both grasses and grains, including corn, rice and sorghum, undoubtedly some of the most important food crops. Orobanche also threatens a wide range of other important crops, including peas, chickpeas, tomatoes, carrots, and varieties of cabbage. Yield loss from Orobanche can be total; despite extensive research, no method of control has been entirely successful.[66]
99
+
100
+ Many plants and fungi exchange carbon and nutrients in mutualistic mycorrhizal relationships. However, some 400 species of myco-heterotrophic plants, mostly in the tropics, effectively cheat by taking carbon from a fungus rather than exchanging it for minerals. They have much reduced roots, as they do not need to absorb water from the soil; their stems are slender with few vascular bundles, and their leaves are reduced to small scales, as they do not photosynthesize. Their seeds are very small and numerous, so they appear to rely on being infected by a suitable fungus soon after germinating.[67]
101
+
102
+ Parasitic fungi derive some or all of their nutritional requirements from plants, other fungi, or animals. Unlike mycorrhizal fungi which have a mutualistic relationship with their host plants, they are pathogenic. For example, the honey fungi in the genus Armillaria grow in the roots of a wide variety of trees, and eventually kill them. They then continue to live in the dead wood, feeding saprophytically.[68]
103
+ Fungal infection (mycosis) is widespread in animals including humans; it kills some 1.6 million people each year.[69] Microsporidia are obligate intracellular parasitic fungi that can also be hyperparasites. They largely affect insects, but some affect vertebrates including humans, where they can cause the intestinal infection microsporidiosis.[70]
104
+
105
+ Protozoa such as Plasmodium, Trypanosoma, and Entamoeba,[71] are endoparasitic. They cause serious diseases in vertebrates including humans – in these examples, malaria, sleeping sickness, and amoebic dysentery – and have complex life cycles.[30]
106
+
107
+ Many bacteria are parasitic, though they are more generally thought of as pathogens causing disease.[72] Parasitic bacteria are extremely diverse, and infect their hosts by a variety of routes. To give a few examples, Bacillus anthracis, the cause of anthrax, is spread by contact with infected domestic animals; its spores, which can survive for years outside the body, can enter a host through an abrasion or may be inhaled. Borrelia, the cause of Lyme disease and relapsing fever, is transmitted by vectors, ticks of the genus Ixodes, from the diseases' reservoirs in animals such as deer. Campylobacter jejuni, a cause of gastroenteritis, is spread by the fecal–oral route from animals, or by eating insufficiently cooked poultry, or by contaminated water. Haemophilus influenzae, an agent of bacterial meningitis and respiratory tract infections such as pneumonia and bronchitis, is transmitted by droplet contact. Treponema pallidum, the cause of syphilis, is spread by sexual activity.[73]
108
+
109
+ Viruses are obligate intracellular parasites, characterised by extremely limited biological function, to the point where, while they are evidently able to infect all other organisms from bacteria and archaea to animals, plants and fungi, it is unclear whether they can themselves be described as living. Viruses can be either RNA or DNA viruses consisting of a single or double strand of genetic material (RNA or DNA respectively), covered in a protein coat and sometimes a lipid envelope. They thus lack all the usual machinery of the cell such as enzymes, relying entirely on the host cell's ability to replicate DNA and synthesise proteins. Most viruses are bacteriophages, infecting bacteria.[74][75][76][77]
110
+
111
+ Parasitism is a major aspect of evolutionary ecology; for example, almost all free-living animals are host to at least one species of parasite. Vertebrates, the best-studied group, are hosts to between 75,000 and 300,000 species of helminths and an uncounted number of parasitic microorganisms. On average, a mammal species hosts four species of nematode, two of trematodes, and two of cestodes.[78] Humans have 342 species of helminth parasites, and 70 species of protozoan parasites.[79] Some three-quarters of the links in food webs include a parasite, important in regulating host numbers. Perhaps 40 percent of described species are parasitic.[78]
112
+
113
+ Parasitism is hard to demonstrate from the fossil record, but holes in the mandibles of several specimens of Tyrannosaurus may have been caused by Trichomonas-like parasites.[81]
114
+
115
+ A louse-like ectoparasite, Mesophthirus engeli, preserved in mid-Cretaceous amber from Myanmar, has been found with dinosaur feathers, apparently damaged by the insect's "strong chewing mouthparts".[80]
116
+
117
+ As hosts and parasites evolve together, their relationships often change. When a parasite is in a sole relationship with a host, selection drives the relationship to become more benign, even mutualistic, as the parasite can reproduce for longer if its host lives longer.[82] But where parasites are competing, selection favours the parasite that reproduces fastest, leading to increased virulence. There are thus varied possibilities in host–parasite coevolution.[83]
118
+
119
+ Long-term coevolution sometimes leads to a relatively stable relationship tending to commensalism or mutualism, as, all else being equal, it is in the evolutionary interest of the parasite that its host thrives. A parasite may evolve to become less harmful for its host or a host may evolve to cope with the unavoidable presence of a parasite—to the point that the parasite's absence causes the host harm. For example, although animals parasitised by worms are often clearly harmed, such infections may also reduce the prevalence and effects of autoimmune disorders in animal hosts, including humans.[82] In a more extreme example, some nematode worms cannot reproduce, or even survive, without infection by Wolbachia bacteria.[84]
120
+
121
+ Lynn Margulis and others have argued, following Peter Kropotkin's 1902 Mutual Aid: A Factor of Evolution, that natural selection drives relationships from parasitism to mutualism when resources are limited. This process may have been involved in the symbiogenesis which formed the eukaryotes from an intracellular relationship between archaea and bacteria, though the sequence of events remains largely undefined.[85][86]
122
+
123
+ Competition between parasites can be expected to favour faster reproducing and therefore more virulent parasites, by natural selection.[83][87] Parasites whose life cycle involves the death of the host, in order to leave it and to sometimes enter the next host, evolve to be more virulent, and may alter the behavior or other properties of the host to make it more vulnerable to predators.[88] Conversely, parasites whose reproduction is largely tied to their host's reproductive success tend to become less virulent or mutualist, so that their hosts reproduce more effectively.[88]
124
+
125
+ Among competing parasitic insect-killing bacteria of the genera Photorhabdus and Xenorhabdus, virulence depended on the relative potency of the antimicrobial toxins (bacteriocins) produced by the two strains involved. When only one bacterium could kill the other, the other strain was excluded by the competition. But when caterpillars were infected with bacteria both of which had toxins able to kill the other strain, neither strain was excluded, and their virulence was less than when the insect was infected by a single strain.[83]
126
+
127
+ A parasite sometimes undergoes cospeciation with its host, resulting in the pattern described in Fahrenholz's rule, that the phylogenies of the host and parasite come to mirror each other.[89]
128
+
129
+ An example is between the simian foamy virus (SFV) and its primate hosts. The phylogenies of SFV polymerase and the mitochondrial cytochrome c oxidase subunit II from African and Asian primates were found to be closely congruent in branching order and divergence times, implying that the simian foamy viruses cospeciated with Old World primates for at least 30 million years.[90]
130
+
131
+ The presumption of a shared evolutionary history between parasites and hosts can help elucidate how host taxa are related. For instance, there has been a dispute about whether flamingos are more closely related to storks or ducks. The fact that flamingos share parasites with ducks and geese was initially taken as evidence that these groups were more closely related to each other than either is to storks. However, evolutionary events such as the duplication, or the extinction of parasite species (without similar events on the host phylogeny) often erode similarities between host and parasite phylogenies. In the case of flamingos, they have similar lice to those of grebes. Flamingos and grebes do have a common ancestor, implying cospeciation of birds and lice in these groups. Flamingo lice then switched hosts to ducks, creating the situation which had confused biologists.[91]
132
+
133
+ Parasites infect sympatric hosts (those within their same geographical area) more effectively, as has been shown with digenetic trematodes infecting lake snails.[92] This is in line with the Red Queen hypothesis, which states that interactions between species lead to constant natural selection for coadaptation. Parasites track the locally common hosts' phenotypes, so the parasites are less infective to allopatric hosts, those from different geographical regions.[92]
134
+
135
+ Some parasites modify host behaviour in order to increase their transmission between hosts, often in relation to predator and prey (parasite increased trophic transmission). For example, in the California coastal salt marsh, the fluke Euhaplorchis californiensis reduces the ability of its killifish host to avoid predators.[93] This parasite matures in egrets, which are more likely to feed on infected killifish than on uninfected fish. Another example is the protozoan Toxoplasma gondii, a parasite that matures in cats but can be carried by many other mammals. Uninfected rats avoid cat odors, but rats infected with T. gondii are drawn to this scent, which may increase transmission to feline hosts.[94] The malaria parasite modifies the skin odour of its human hosts, increasing their attractiveness to mosquitoes and hence improving the chance that the parasite will be transmitted.[35]
136
+
137
+ Parasites can exploit their hosts to carry out a number of functions that they would otherwise have to carry out for themselves. Parasites which lose those functions then have a selective advantage, as they can divert resources to reproduction. Many insect ectoparasites including bedbugs, batbugs, lice and fleas have lost their ability to fly, relying instead on their hosts for transport.[95] Trait loss more generally is widespread among parasites.[96] An extreme example is the myxosporean Henneguya zschokkei, an endoparasite of fish and the only animal known to have lost the ability to respire aerobically: its cells lack mitochondrial DNA.[97]
138
+
139
+ Hosts have evolved a variety of defensive measures against their parasites, including physical barriers like the skin of vertebrates,[98] the immune system of mammals,[99] insects actively removing parasites,[100] and defensive chemicals in plants.[101]
140
+
141
+ The evolutionary biologist W. D. Hamilton suggested that sexual reproduction could have evolved to help to defeat multiple parasites by enabling genetic recombination, the shuffling of genes to create varied combinations. Hamilton showed by mathematical modelling that sexual reproduction would be evolutionarily stable in different situations, and that the theory's predictions matched the actual ecology of sexual reproduction.[102][103] However, there may be a trade-off between immunocompetence and the secondary sex characteristics of breeding male vertebrate hosts, such as the plumage of peacocks and the manes of lions. This is because the male hormone testosterone encourages the growth of secondary sex characteristics, favouring such males in sexual selection, at the price of reducing their immune defences.[104]
142
+
143
+ The physical barrier of the tough and often dry and waterproof skin of reptiles, birds and mammals keeps invading microorganisms from entering the body. Human skin also secretes sebum, which is toxic to most microorganisms.[98] On the other hand, larger parasites such as trematodes detect chemicals produced by the skin to locate their hosts when they enter the water. Vertebrate saliva and tears contain lysozyme, an enzyme which breaks down the cell walls of invading bacteria.[98] Should the organism pass the mouth, the stomach with its hydrochloric acid, toxic to most microorganisms, is the next line of defence.[98] Some intestinal parasites have a thick, tough outer coating which is digested slowly or not at all, allowing the parasite to pass through the stomach alive, at which point they enter the intestine and begin the next stage of their life. Once inside the body, parasites must overcome the immune system's serum proteins and pattern recognition receptors, intracellular and cellular, that trigger the adaptive immune system's lymphocytes such as T cells and antibody-producing B cells. These have receptors that recognise parasites.[99]
144
+
145
+ Insects often adapt their nests to reduce parasitism. For example, one of the key reasons why the wasp Polistes canadensis nests across multiple combs, rather than building a single comb like much of the rest of its genus, is to avoid infestation by tineid moths. The tineid moth lays its eggs within the wasps' nests and then these eggs hatch into larvae that can burrow from cell to cell and prey on wasp pupae. Adult wasps attempt to remove and kill moth eggs and larvae by chewing down the edges of cells, coating the cells with an oral secretion that gives the nest a dark brownish appearance.[100]
146
+
147
+ Plants respond to parasite attack with a series of chemical defences, such as polyphenol oxidase, under the control of the jasmonic acid (JA) and salicylic acid (SA) signalling pathways.[101][105] The different biochemical pathways are activated by different attacks, and the two pathways can interact positively or negatively. In general, plants can either initiate a specific or a non-specific response.[106][105] Specific responses involve recognition of a parasite by the plant's cellular receptors, leading to a strong but localised response: defensive chemicals are produced around the area where the parasite was detected, blocking its spread, and avoiding wasting defensive production where it is not needed.[106] Nonspecific defensive responses are systemic, meaning that the responses are not confined to an area of the plant, but spread throughout the plant, making them costly in energy. These are effective against a wide range of parasites.[106] When damaged, such as by lepidopteran caterpillars, leaves of plants including maize and cotton release increased amounts of volatile chemicals such as terpenes that signal they are being attacked; one effect of this is to attract parasitoid wasps, which in turn attack the caterpillars.[107]
148
+
149
+ Parasitism and parasite evolution were until the twenty-first century studied by parasitologists, in a science dominated by medicine, rather than by ecologists or evolutionary biologists. Even though parasite–host interactions were plainly ecological and important in evolution, the history of parasitology caused what the evolutionary ecologist Robert Poulin called a "takeover of parasitism by parasitologists", leading ecologists to ignore the area. This was in his opinion "unfortunate", as parasites are "omnipresent agents of natural selection" and significant forces in evolution and ecology.[108] In his view, the long-standing split between the sciences limited the exchange of ideas, with separate conferences and separate journals. The technical languages of ecology and parasitology sometimes involved different meanings for the same words. There were philosophical differences, too: Poulin notes that, influenced by medicine, "many parasitologists accepted that evolution led to a decrease in parasite virulence, whereas modern evolutionary theory would have predicted a greater range of outcomes".[108]
150
+
151
+ Their complex relationships make parasites difficult to place in food webs: a trematode with multiple hosts for its various life cycle stages would occupy many positions in a food web simultaneously, and would set up loops of energy flow, confusing the analysis. Further, since nearly every animal has (multiple) parasites, parasites would occupy the top levels of every food web.[79]
152
+
153
+ Parasites can play a role in the proliferation of non-native species. For example, invasive green crabs are minimally affected by native trematodes on the Eastern Atlantic coast. This helps them outcompete native crabs such as the rock and Jonah crabs.[109]
154
+
155
+ Ecological parasitology can be important to attempts at control, as during the campaign to eradicate the Guinea worm. Even though the parasite was eradicated in all but four countries, the worm began using frogs as an intermediate host before infecting dogs, making control more difficult than it would have been if the relationships had been better understood.[110]
156
+
157
+ Although parasites are widely considered to be harmful, the eradication of all parasites would not be beneficial. Parasites account for at least half of life's diversity; they perform important ecological roles; and without parasites, organisms might tend to asexual reproduction, diminishing the diversity of traits brought about by sexual reproduction.[111] Parasites provide an opportunity for the transfer of genetic material between species, facilitating evolutionary change.[88] Many parasites require multiple hosts of different species to complete their life cycles and rely on predator–prey or other stable ecological interactions to get from one host to another. The presence of parasites thus indicates that an ecosystem is healthy.[112]
158
+
159
+ An ectoparasite, the California condor louse, Colpocephalum californici, became a well-known conservation issue. A major and very costly captive breeding program was run in the United States to rescue the California condor, which was host to a louse that lived only on it. Any lice found were "deliberately killed" during the program, to keep the condors in the best possible health. The result was that one species, the condor, was saved and returned to the wild, while another species, the parasite, became extinct.[113]
160
+
161
+ Although parasites are often omitted in depictions of food webs, they usually occupy the top position. Parasites can function like keystone species, reducing the dominance of superior competitors and allowing competing species to co-exist.[79][114][115]
162
+
163
+ A single parasite species usually has an aggregated distribution across host animals, which means that most hosts carry few parasites, while a few hosts carry the vast majority of parasite individuals. This poses considerable problems for students of parasite ecology, as it renders the parametric statistics commonly used by biologists invalid. Log-transformation of data before the application of parametric tests, or the use of non-parametric statistics, is recommended by several authors, but this can give rise to further problems, so quantitative parasitology is based on more advanced biostatistical methods.[116]
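+ A minimal sketch of this aggregation (using a negative binomial distribution, the conventional model for overdispersed parasite counts; the mean and dispersion values below are hypothetical):

```python
# Simulated parasite burdens drawn from a negative binomial with small dispersion k:
# the counts are strongly right-skewed, the mean is dominated by a few heavily
# infected hosts, and the variance greatly exceeds the mean (aggregation).
import numpy as np

rng = np.random.default_rng(42)
mean_burden, k = 10.0, 0.5             # hypothetical mean worms per host and dispersion
p = k / (k + mean_burden)              # numpy's negative binomial parameterisation
counts = rng.negative_binomial(k, p, size=1000)

print("mean burden:           ", counts.mean())
print("median burden:         ", np.median(counts))             # far below the mean
print("variance-to-mean ratio:", counts.var() / counts.mean())  # >> 1, i.e. aggregated
print("share of parasites in the top 10% of hosts:",
      np.sort(counts)[-100:].sum() / counts.sum())
```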
164
+
165
+ Human parasites including roundworms, the Guinea worm, threadworms and tapeworms are mentioned in Egyptian papyrus records from 3000 BC onwards; the Ebers papyrus describes hookworm. In ancient Greece, parasites including the bladder worm are described in the Hippocratic Corpus, while the comic playwright Aristophanes called tapeworms "hailstones". The Roman physicians Celsus and Galen documented the roundworms Ascaris lumbricoides and Enterobius vermicularis.[117]
166
+
167
+ In his Canon of Medicine, completed in 1025, the Persian physician Avicenna recorded human and animal parasites including roundworms, threadworms, the Guinea worm and tapeworms.[117]
168
+
169
+ In his 1397 book Traité de l'état, science et pratique de l'art de la Bergerie (Account of the state, science and practice of the art of shepherding), Jehan de Brie [fr] wrote the first description of a trematode endoparasite, the sheep liver fluke Fasciola hepatica.[118][119]
170
+
171
+ In the Early Modern period, Francesco Redi's 1668 book Esperienze Intorno alla Generazione degl'Insetti (Experiences of the Generation of Insects), explicitly described ecto- and endoparasites, illustrating ticks, the larvae of nasal flies of deer, and sheep liver fluke.[120] Redi noted that parasites develop from eggs, contradicting the theory of spontaneous generation.[121] In his 1684 book Osservazioni intorno agli animali viventi che si trovano negli animali viventi (Observations on Living Animals found in Living Animals), Redi described and illustrated over 100 parasites including the large roundworm in humans that causes ascariasis.[120] Redi was the first to name the cysts of Echinococcus granulosus seen in dogs and sheep as parasitic; a century later, in 1760, Peter Simon Pallas correctly suggested that these were the larvae of tapeworms.[117]
172
+
173
+ In 1681, Antonie van Leeuwenhoek observed and illustrated the protozoan parasite Giardia lamblia, and linked it to "his own loose stools". This was the first protozoan parasite of humans to be seen under a microscope.[117] A few years later, in 1687, the Italian biologists Giovanni Cosimo Bonomo and Diacinto Cestoni described scabies as caused by the parasitic mite Sarcoptes scabiei, marking it as the first disease of humans with a known microscopic causative agent.[122]
174
+
175
+ Modern parasitology developed in the 19th century with accurate observations and experiments by many researchers and clinicians;[118] the term was first used in 1870.[123] In 1828, James Annersley described amoebiasis, protozoal infections of the intestines and the liver, though the pathogen, Entamoeba histolytica, was not discovered until 1873 by Friedrich Lösch. James Paget discovered the intestinal nematode Trichinella spiralis in humans in 1835. James McConnell described the human liver fluke, Clonorchis sinensis, in 1875.[117] Algernon Thomas and Rudolf Leuckart independently made the first discovery of the life cycle of a trematode, the sheep liver fluke, by experiment in 1881–1883.[118] In 1877, Patrick Manson discovered the life cycle of the filarial worms that cause elephantiasis, which are transmitted by mosquitoes. Manson further predicted that the malaria parasite, Plasmodium, had a mosquito vector, and persuaded Ronald Ross to investigate. Ross confirmed that the prediction was correct in 1897–1898. At the same time, Giovanni Battista Grassi and others described the malaria parasite's life cycle stages in Anopheles mosquitoes. Ross was controversially awarded the 1902 Nobel prize for his work, while Grassi was not.[117] In 1903, David Bruce identified the protozoan parasite and the tsetse fly vector of African trypanosomiasis.[124]
176
+
177
+ Given the importance of malaria, with some 220 million people infected annually, many attempts have been made to interrupt its transmission. Various methods of malaria prophylaxis have been tried including the use of antimalarial drugs to kill off the parasites in the blood, the eradication of its mosquito vectors with organochlorine and other insecticides, and the development of a malaria vaccine. All of these have proven problematic, with drug resistance, insecticide resistance among mosquitoes, and repeated failure of vaccines as the parasite mutates.[125] The first and as of 2015 the only licensed vaccine for any parasitic disease of humans is RTS,S for Plasmodium falciparum malaria.[126]
178
+
179
+ Poulin observes that the widespread prophylactic use of anthelmintic drugs in domestic sheep and cattle constitutes a worldwide uncontrolled experiment in the life-history evolution of their parasites. The outcomes depend on whether the drugs decrease the chance of a helminth larva reaching adulthood. If so, natural selection can be expected to favour the production of eggs at an earlier age. If, on the other hand, the drugs mainly affect adult parasitic worms, selection could cause delayed maturity and increased virulence. Such changes appear to be under way: the nematode Teladorsagia circumcincta is changing its adult size and reproductive rate in response to drugs.[127]
180
+
181
+ In the classical era, the concept of the parasite was not strictly pejorative: the parasitus was an accepted role in Roman society, in which a person could live off the hospitality of others, in return for "flattery, simple services, and a willingness to endure humiliation".[128][129]
182
+
183
+ Parasitism has a derogatory sense in popular usage. According to the immunologist John Playfair,[130]
184
+
185
+ In everyday speech, the term 'parasite' is loaded with derogatory meaning. A parasite is a sponger, a lazy profiteer, a drain on society.[130]
186
+
187
+ The satirical cleric Jonathan Swift refers to hyperparasitism in his 1733 poem "On Poetry: A Rhapsody", comparing poets to "vermin" who "teaze and pinch their foes":[131]
188
+
189
+ The vermin only teaze and pinch
190
+ Their foes superior by an inch.
191
+ So nat'ralists observe, a flea
192
+ Hath smaller fleas that on him prey;
193
+
194
+ And these have smaller fleas to bite 'em.
195
+ And so proceeds ad infinitum.
196
+ Thus every poet, in his kind,
197
+ Is bit by him that comes behind:
198
+
199
+ In Bram Stoker's 1897 Gothic horror novel Dracula, and its many film adaptations, the eponymous Count Dracula is a blood-drinking parasite. The critic Laura Otis argues that as a "thief, seducer, creator, and mimic, Dracula is the ultimate parasite. The whole point of vampirism is sucking other people's blood—living at other people's expense."[132]
200
+
201
+ Disgusting and terrifying parasitic alien species are widespread in science fiction,[133][134] as for instance in Ridley Scott's 1979 film Alien.[135][136] In one scene, a Xenomorph bursts out of the chest of a dead man, with blood squirting out under high pressure assisted by explosive squibs. Animal organs were used to reinforce the shock effect. The scene was filmed in a single take, and the startled reaction of the actors was genuine.[4][137]
en/4661.html.txt ADDED
@@ -0,0 +1,294 @@
1
+
2
+
3
+
4
+
5
+ Plants are mainly multicellular organisms, predominantly photosynthetic eukaryotes of the kingdom Plantae. Historically, plants were treated as one of two kingdoms including all living things that were not animals, and all algae and fungi were treated as plants. However, all current definitions of Plantae exclude the fungi and some algae, as well as the prokaryotes (the archaea and bacteria). By one definition, plants form the clade Viridiplantae (Latin name for "green plants"), a group that includes the flowering plants, conifers and other gymnosperms, ferns and their allies, hornworts, liverworts, mosses and the green algae, but excludes the red and brown algae.
6
+
7
+ Green plants obtain most of their energy from sunlight via photosynthesis by primary chloroplasts that are derived from endosymbiosis with cyanobacteria. Their chloroplasts contain chlorophylls a and b, which give them their green color. Some plants are parasitic or mycotrophic and have lost the ability to produce normal amounts of chlorophyll or to photosynthesize. Plants are characterized by sexual reproduction and alternation of generations, although asexual reproduction is also common.
8
+
9
+ There are about 320,000 species of plants, of which the great majority, some 260–290 thousand, produce seeds.[5] Green plants provide a substantial proportion of the world's molecular oxygen,[6] and are the basis of most of Earth's ecosystems. Plants that produce grain, fruit and vegetables also form basic human foods and have been domesticated for millennia. Plants have many cultural and other uses, as ornaments, building materials, writing material and, in great variety, they have been the source of medicines and psychoactive drugs. The scientific study of plants is known as botany, a branch of biology.
10
+
11
+ All living things were traditionally placed into one of two groups, plants and animals. This classification may date from Aristotle (384 BC – 322 BC), who made the distinction between plants, which generally do not move, and animals, which often are mobile to catch their food. Much later, when Linnaeus (1707–1778) created the basis of the modern system of scientific classification, these two groups became the kingdoms Vegetabilia (later Metaphyta or Plantae) and Animalia (also called Metazoa). Since then, it has become clear that the plant kingdom as originally defined included several unrelated groups, and the fungi and several groups of algae were removed to new kingdoms. However, these organisms are still often considered plants, particularly in popular contexts.[citation needed]
12
+
13
+ The term "plant" generally implies the possession of the following traits: multicellularity, possession of cell walls containing cellulose, and the ability to carry out photosynthesis with primary chloroplasts.[7][8]
14
+
15
+ When the name Plantae or plant is applied to a specific group of organisms or taxon, it usually refers to one of four concepts. From least to most inclusive, these four groupings are:
16
+
17
+ Another way of looking at the relationships between the different groups that have been called "plants" is through a cladogram, which shows their evolutionary relationships. These are not yet completely settled, but one accepted relationship between the three groups described above is shown below[clarification needed].[16][17][18][19][20][21][22] Those which have been called "plants" are in bold (some minor groups have been omitted).
18
+
19
+ Rhodophyta (red algae)
20
+
21
+ Rhodelphidia (predatorial)
22
+
23
+ Picozoa
24
+
25
+ Glaucophyta (glaucophyte algae)
26
+
27
+ Mesostigmatophyceae
28
+
29
+ Chlorokybophyceae
30
+
31
+ Spirotaenia
32
+
33
+ Chlorophyta
34
+
35
+ Charales (stoneworts)
36
+
37
+ land plants or embryophytes
38
+
39
+ Cryptista
40
+
41
+ The way in which the groups of green algae are combined and named varies considerably between authors.
42
+
43
+ Algae comprise several different groups of organisms which produce food by photosynthesis and thus have traditionally been included in the plant kingdom. They range from large multicellular seaweeds to single-celled organisms and are classified into three groups: the green algae, red algae and brown algae. There is good evidence that the brown algae evolved independently from the others, from non-photosynthetic ancestors that formed endosymbiotic relationships with red algae rather than from cyanobacteria, and they are no longer classified as plants as defined here.[23][24]
44
+
45
+ The Viridiplantae, the green plants – green algae and land plants – form a clade, a group consisting of all the descendants of a common ancestor. With a few exceptions, the green plants have the following features in common: primary chloroplasts derived from cyanobacteria containing chlorophylls a and b, cell walls containing cellulose, and food stores in the form of starch contained within the plastids. They undergo closed mitosis without centrioles, and typically have mitochondria with flat cristae. The chloroplasts of green plants are surrounded by two membranes, suggesting they originated directly from endosymbiotic cyanobacteria.
46
+
47
+ Two additional groups, the Rhodophyta (red algae) and Glaucophyta (glaucophyte algae), also have primary chloroplasts that appear to be derived directly from endosymbiotic cyanobacteria, although they differ from Viridiplantae in the pigments which are used in photosynthesis and so are different in colour. These groups also differ from green plants in that the storage polysaccharide is floridean starch and is stored in the cytoplasm rather than in the plastids. They appear to have had a common origin with Viridiplantae and the three groups form the clade Archaeplastida, whose name implies that their chloroplasts were derived from a single ancient endosymbiotic event. This is the broadest modern definition of the term 'plant'.
48
+
49
+ In contrast, most other algae (e.g. brown algae/diatoms, haptophytes, dinoflagellates, and euglenids) not only have different pigments but also have chloroplasts with three or four surrounding membranes. They are not close relatives of the Archaeplastida, presumably having acquired chloroplasts separately from ingested or symbiotic green and red algae. They are thus not included in even the broadest modern definition of the plant kingdom, although they were in the past.
50
+
51
+ The green plants or Viridiplantae were traditionally divided into the green algae (including the stoneworts) and the land plants. However, it is now known that the land plants evolved from within a group of green algae, so that the green algae by themselves are a paraphyletic group, i.e. a group that excludes some of the descendants of a common ancestor. Paraphyletic groups are generally avoided in modern classifications, so that in recent treatments the Viridiplantae have been divided into two clades, the Chlorophyta and the Streptophyta (including the land plants and Charophyta).[25][26]
52
+
53
+ The Chlorophyta (a name that has also been used for all green algae) are the sister group to the Charophytes, from which the land plants evolved. There are about 4,300 species,[27] mainly unicellular or multicellular marine organisms such as the sea lettuce, Ulva.
54
+
55
+ The other group within the Viridiplantae are the mainly freshwater or terrestrial Streptophyta, which consists of the land plants together with the Charophyta, itself consisting of several groups of green algae such as the desmids and stoneworts. Streptophyte algae are either unicellular or form multicellular filaments, branched or unbranched.[26] The genus Spirogyra is a filamentous streptophyte alga familiar to many, as it is often used in teaching and is one of the organisms responsible for the algal "scum" on ponds. The freshwater stoneworts strongly resemble land plants and are believed to be their closest relatives.[citation needed] Growing immersed in fresh water, they consist of a central stalk with whorls of branchlets.
56
+
57
+ Linnaeus' original classification placed the fungi within the Plantae, since they were unquestionably neither animals nor minerals and these were the only other alternatives. With 19th century developments in microbiology, Ernst Haeckel introduced the new kingdom Protista in addition to Plantae and Animalia, but whether fungi were best placed in the Plantae or should be reclassified as protists remained controversial. In 1969, Robert Whittaker proposed the creation of the kingdom Fungi. Molecular evidence has since shown that the most recent common ancestor (concestor) of the Fungi was probably more similar to that of the Animalia than to that of Plantae or any other kingdom.[28]
58
+
59
+ Whittaker's original reclassification was based on the fundamental difference in nutrition between the Fungi and the Plantae. Unlike plants, which generally gain carbon through photosynthesis, and so are called autotrophs, fungi do not possess chloroplasts and generally obtain carbon by breaking down and absorbing surrounding materials, and so are called heterotrophic saprotrophs. In addition, the substructure of multicellular fungi is different from that of plants, taking the form of many chitinous microscopic strands called hyphae, which may be further subdivided into cells or may form a syncytium containing many eukaryotic nuclei. Fruiting bodies, of which mushrooms are the most familiar example, are the reproductive structures of fungi, and are unlike any structures produced by plants.[citation needed]
60
+
61
+ The table below shows some species count estimates of different green plant (Viridiplantae) divisions. It suggests there are about 300,000 species of living Viridiplantae, of which 85–90% are flowering plants. (Note: as these are from different sources and different dates, they are not necessarily comparable, and like all species counts, are subject to a degree of uncertainty in some cases.)
62
+
63
+ (6,600–10,300)
64
+
65
+ (18,100–20,200)
66
+
67
+ (12,200)
68
+
69
+ (259,511)
70
+
71
+ The naming of plants is governed by the International Code of Nomenclature for algae, fungi, and plants and International Code of Nomenclature for Cultivated Plants (see cultivated plant taxonomy).
72
+
73
+ The evolution of plants has resulted in increasing levels of complexity, from the earliest algal mats, through bryophytes, lycopods, ferns to the complex gymnosperms and angiosperms of today. Plants in all of these groups continue to thrive, especially in the environments in which they evolved.
74
+
75
+ An algal scum formed on the land 1,200 million years ago, but it was not until the Ordovician Period, around 450 million years ago, that land plants appeared.[39] However, new evidence from the study of carbon isotope ratios in Precambrian rocks has suggested that complex photosynthetic plants developed on the earth over 1000 m.y.a.[40] For more than a century it has been assumed that the ancestors of land plants evolved in aquatic environments and then adapted to a life on land, an idea usually credited to botanist Frederick Orpen Bower in his 1908 book The Origin of a Land Flora. A recent alternative view, supported by genetic evidence, is that they evolved from terrestrial single-celled algae,[41] and that even the common ancestor of red and green algae, and the unicellular freshwater algae glaucophytes, originated in a terrestrial environment in freshwater biofilms or microbial mats.[42] Primitive land plants began to diversify in the late Silurian Period, around 420 million years ago, and the results of their diversification are displayed in remarkable detail in an early Devonian fossil assemblage from the Rhynie chert. This chert preserved early plants in cellular detail, petrified in volcanic springs. By the middle of the Devonian Period most of the features recognised in plants today are present, including roots, leaves and secondary wood, and by late Devonian times seeds had evolved.[43] Late Devonian plants had thereby reached a degree of sophistication that allowed them to form forests of tall trees. Evolutionary innovation continued in the Carboniferous and later geological periods and is ongoing today. Most plant groups were relatively unscathed by the Permo-Triassic extinction event, although the structures of communities changed. This may have set the scene for the evolution of flowering plants in the Triassic (~200 million years ago), which exploded in the Cretaceous and Tertiary. The latest major group of plants to evolve were the grasses, which became important in the mid Tertiary, from around 40 million years ago. The grasses, as well as many other groups, evolved new mechanisms of metabolism to survive the low CO2 and warm, dry conditions of the tropics over the last 10 million years.
76
+
77
+ A 1997 proposed phylogenetic tree of Plantae, after Kenrick and Crane,[44] is as follows, with modification to the Pteridophyta from Smith et al.[45] The Prasinophyceae are a paraphyletic assemblage of early diverging green algal lineages, but are treated as a group outside the Chlorophyta:[46] later authors have not followed this suggestion.
78
+
79
+ Prasinophyceae (micromonads)
80
+
81
+ Spermatophytes (seed plants)
82
+
83
+ Progymnospermophyta †
84
+
85
+ Pteridopsida (true ferns)
86
+
87
+ Marattiopsida
88
+
89
+ Equisetopsida (horsetails)
90
+
91
+ Psilotopsida (whisk ferns & adders'-tongues)
92
+
93
+ Cladoxylopsida †
94
+
95
+ Lycopodiophyta
96
+
97
+ Zosterophyllophyta †
98
+
99
+ Rhyniophyta †
100
+
101
+ Aglaophyton †
102
+
103
+ Horneophytopsida †
104
+
105
+ Bryophyta (mosses)
106
+
107
+ Anthocerotophyta (hornworts)
108
+
109
+ Marchantiophyta (liverworts)
110
+
111
+ Charophyta
112
+
113
+ Trebouxiophyceae (Pleurastrophyceae)
114
+
115
+ Chlorophyceae
116
+
117
+ Ulvophyceae
118
+
119
+ A newer proposed classification follows Leliaert et al. 2011[47] and modified with Silar 2016[20][21][48][49] for the green algae clades and Novíkov & Barabaš-Krasni 2015[50] for the land plants clade. Notice that the Prasinophyceae are here placed inside the Chlorophyta.
120
+
121
+ Mesostigmatophyceae
122
+
123
+ Chlorokybophyceae
124
+
125
+ Spirotaenia
126
+
127
+ Chlorophyta inc. Prasinophyceae
128
+
129
+ Streptofilum
130
+
131
+ Klebsormidiophyta
132
+
133
+ Charophyta Rabenhorst 1863 emend. Lewis & McCourt 2004 (Stoneworts)
134
+
135
+ Coleochaetophyta
136
+
137
+ Zygnematophyta
138
+
139
+ Marchantiophyta (Liverworts)
140
+
141
+ Bryophyta (True mosses)
142
+
143
+ Anthocerotophyta (Non-flowering hornworts)
144
+
145
+ †Horneophyta
146
+
147
+ †Aglaophyta
148
+
149
+ Tracheophyta (Vascular Plants)
150
+
151
+ Later, a phylogeny based on genomes and transcriptomes from 1,153 plant species was proposed.[51] The placing of algal groups is supported by phylogenies based on genomes from the Mesostigmatophyceae and Chlorokybophyceae that have since been sequenced.[52][53] The classification of Bryophyta is supported both by Puttick et al. 2018,[54] and by phylogenies involving the hornwort genomes that have also since been sequenced.[55][56]
152
+
153
+ Rhodophyta
154
+
155
+ Glaucophyta
156
+
157
+ Chlorophyta
158
+
159
+ Prasinococcales
160
+
161
+
162
+
163
+ Mesostigmatophyceae
164
+
165
+ Chlorokybophyceae
166
+
167
+ Spirotaenia
168
+
169
+ Klebsormidiales
170
+
171
+ Chara
172
+
173
+ Coleochaetales
174
+
175
+ Zygnematophyceae
176
+
177
+ Hornworts
178
+
179
+ Liverworts
180
+
181
+ Mosses
182
+
183
+ Lycophytes
184
+
185
+ Ferns
186
+
187
+ Gymnosperms
188
+
189
+ Angiosperms
190
+
191
+ The plants that are likely most familiar to us are the multicellular land plants, called embryophytes. Embryophytes include the vascular plants, such as ferns, conifers and flowering plants. They also include the bryophytes, of which mosses and liverworts are the most common.
192
+
193
+ All of these plants have eukaryotic cells with cell walls composed of cellulose, and most obtain their energy through photosynthesis, using light, water and carbon dioxide to synthesize food. About three hundred plant species do not photosynthesize but are parasites on other species of photosynthetic plants. Embryophytes are distinguished from green algae, which represent a mode of photosynthetic life similar to the kind modern plants are believed to have evolved from, by having specialized reproductive organs protected by non-reproductive tissues.
194
+
195
+ Bryophytes first appeared during the early Paleozoic. They mainly live in habitats where moisture is available for significant periods, although some species, such as Targionia, are desiccation-tolerant. Most species of bryophytes remain small throughout their life-cycle. This involves an alternation between two generations: a haploid stage, called the gametophyte, and a diploid stage, called the sporophyte. In bryophytes, the sporophyte is always unbranched and remains nutritionally dependent on its parent gametophyte. The embryophytes have the ability to secrete a cuticle on their outer surface, a waxy layer that confers resistance to desiccation. In the mosses and hornworts a cuticle is usually only produced on the sporophyte. Stomata are absent from liverworts, but occur on the sporangia of mosses and hornworts, allowing gas exchange.
196
+
197
+ Vascular plants first appeared during the Silurian period, and by the Devonian had diversified and spread into many different terrestrial environments. They developed a number of adaptations that allowed them to spread into increasingly more arid places, notably the vascular tissues xylem and phloem, that transport water and food throughout the organism. Root systems capable of obtaining soil water and nutrients also evolved during the Devonian. In modern vascular plants, the sporophyte is typically large, branched, nutritionally independent and long-lived, but there is increasing evidence that Paleozoic gametophytes were just as complex as the sporophytes. The gametophytes of all vascular plant groups evolved to become reduced in size and prominence in the life cycle.
198
+
199
+ In seed plants, the microgametophyte is reduced from a multicellular free-living organism to a few cells in a pollen grain and the miniaturised megagametophyte remains inside the megasporangium, attached to and dependent on the parent plant. A megasporangium enclosed in a protective layer called an integument is known as an ovule. After fertilisation by means of sperm produced by pollen grains, an embryo sporophyte develops inside the ovule. The integument becomes a seed coat, and the ovule develops into a seed. Seed plants can survive and reproduce in extremely arid conditions, because they are not dependent on free water for the movement of sperm, or the development of free living gametophytes.
200
+
201
+ The first seed plants, pteridosperms (seed ferns), now extinct, appeared in the Devonian and diversified through the Carboniferous. They were the ancestors of modern gymnosperms, of which four surviving groups are widespread today, particularly the conifers, which are dominant trees in several biomes. The name gymnosperm comes from the Greek composite word γυμνόσπερμος (γυμνός gymnos, "naked" and σπέρμα sperma, "seed"), as the ovules and subsequent seeds are not enclosed in a protective structure (carpels or fruit), but are borne naked, typically on cone scales.
202
+
203
+ Plant fossils include roots, wood, leaves, seeds, fruit, pollen, spores, phytoliths, and amber (the fossilized resin produced by some plants). Fossil land plants are recorded in terrestrial, lacustrine, fluvial and nearshore marine sediments. Pollen, spores and algae (dinoflagellates and acritarchs) are used for dating sedimentary rock sequences. The remains of fossil plants are not as common as fossil animals, although plant fossils are locally abundant in many regions worldwide.
204
+
205
+ The earliest fossils clearly assignable to Kingdom Plantae are fossil green algae from the Cambrian. These fossils resemble calcified multicellular members of the Dasycladales. Earlier Precambrian fossils are known that resemble single-cell green algae, but definitive identity with that group of algae is uncertain.
206
+
207
+ The earliest fossils attributed to green algae date from the Precambrian (ca. 1200 mya).[57][58] The resistant outer walls of prasinophyte cysts (known as phycomata) are well preserved in fossil deposits of the Paleozoic (ca. 250–540 mya). A filamentous fossil (Proterocladus) from middle Neoproterozoic deposits (ca. 750 mya) has been attributed to the Cladophorales, while the oldest reliable records of the Bryopsidales, Dasycladales and stoneworts are from the Paleozoic.[46][59]
208
+
209
+ The oldest known fossils of embryophytes date from the Ordovician, though such fossils are fragmentary. By the Silurian, fossils of whole plants are preserved, including the simple vascular plant Cooksonia in mid-Silurian and the much larger and more complex lycophyte Baragwanathia longifolia in late Silurian. From the early Devonian Rhynie chert, detailed fossils of lycophytes and rhyniophytes have been found that show details of the individual cells within the plant organs and the symbiotic association of these plants with fungi of the order Glomales. The Devonian period also saw the evolution of leaves and roots, and the first modern tree, Archaeopteris. This tree with fern-like foliage and a trunk with conifer-like wood was heterosporous producing spores of two different sizes, an early step in the evolution of seeds.[60]
210
+
211
+ The Coal measures are a major source of Paleozoic plant fossils, with many groups of plants in existence at this time. The spoil heaps of coal mines are the best places to collect; coal itself is the remains of fossilised plants, though structural detail of the plant fossils is rarely visible in coal. In the Fossil Grove at Victoria Park in Glasgow, Scotland, the stumps of Lepidodendron trees are found in their original growth positions.
212
+
213
+ The fossilized remains of conifer and angiosperm roots, stems and branches may be locally abundant in lake and inshore sedimentary rocks from the Mesozoic and Cenozoic eras. Sequoia and its allies, magnolia, oak, and palms are often found.
214
+
215
+ Petrified wood is common in some parts of the world, and is most frequently found in arid or desert areas where it is more readily exposed by erosion. Petrified wood is often heavily silicified (the organic material replaced by silicon dioxide), and the impregnated tissue is often preserved in fine detail. Such specimens may be cut and polished using lapidary equipment. Fossil forests of petrified wood have been found in all continents.
216
+
217
+ Fossils of seed ferns such as Glossopteris are widely distributed throughout several continents of the Southern Hemisphere, a fact that gave support to Alfred Wegener's early ideas regarding Continental drift theory.
218
+
219
+ Most of the solid material in a plant is taken from the atmosphere. Through the process of photosynthesis, most plants use the energy in sunlight to convert carbon dioxide from the atmosphere, plus water, into simple sugars. These sugars are then used as building blocks and form the main structural component of the plant. Chlorophyll, a green-colored, magnesium-containing pigment is essential to this process; it is generally present in plant leaves, and often in other plant parts as well. Parasitic plants, on the other hand, use the resources of their host to provide the materials needed for metabolism and growth.
220
+
221
+ Plants usually rely on soil primarily for support and water (in quantitative terms), but they also obtain compounds of nitrogen, phosphorus, potassium, magnesium and other elemental nutrients from the soil. Epiphytic and lithophytic plants depend on air and nearby debris for nutrients, and carnivorous plants supplement their nutrient requirements, particularly for nitrogen and phosphorus, with insect prey that they capture. For the majority of plants to grow successfully they also require oxygen in the atmosphere and around their roots (soil gas) for respiration. Plants use oxygen and glucose (which may be produced from stored starch) to provide energy.[61] Some plants grow as submerged aquatics, using oxygen dissolved in the surrounding water, and a few specialized vascular plants, such as mangroves and reed (Phragmites australis),[62] can grow with their roots in anoxic conditions.
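+ In summary form, this aerobic respiration is the chemical reverse of photosynthesis: C6H12O6 + 6 O2 → 6 CO2 + 6 H2O, with the released energy captured as ATP.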
222
+
223
+ The genome of a plant controls its growth. For example, selected varieties or genotypes of wheat grow rapidly, maturing within 110 days, whereas others, in the same environmental conditions, grow more slowly and mature within 155 days.[63]
224
+
225
+ Growth is also determined by environmental factors, such as temperature, available water, available light, carbon dioxide and available nutrients in the soil. Any change in the availability of these external conditions will be reflected in the plant's growth and the timing of its development.[citation needed]
226
+
227
+ Biotic factors also affect plant growth. Plants can be so crowded that no single individual produces normal growth, causing etiolation and chlorosis. Optimal plant growth can be hampered by grazing animals, suboptimal soil composition, lack of mycorrhizal fungi, and attacks by insects or plant diseases, including those caused by bacteria, fungi, viruses, and nematodes.[63]
228
+
229
+ Simple plants like algae may have short life spans as individuals, but their populations are commonly seasonal. Annual plants grow and reproduce within one growing season, biennial plants grow for two growing seasons and usually reproduce in their second year, and perennial plants live for many growing seasons and once mature will often reproduce annually. These designations often depend on climate and other environmental factors. Plants that are annual in alpine or temperate regions can be biennial or perennial in warmer climates. Among the vascular plants, perennials include both evergreens that keep their leaves the entire year, and deciduous plants that lose their leaves for some part of it. In temperate and boreal climates, they generally lose their leaves during the winter; many tropical plants lose their leaves during the dry season.[citation needed]
230
+
231
+ The growth rate of plants is extremely variable. Some mosses grow less than 0.001 millimeters per hour (mm/h), while most trees grow 0.025–0.250 mm/h. Some climbing species, such as kudzu, which do not need to produce thick supportive tissue, may grow up to 12.5 mm/h.[citation needed]
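+ At the upper end of that range, 12.5 mm/h corresponds to 12.5 × 24 = 300 mm, or about 30 cm, of new growth per day.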
232
+
233
+ Plants protect themselves from frost and dehydration stress with antifreeze proteins, heat-shock proteins and sugars (sucrose is common). LEA (Late Embryogenesis Abundant) protein expression is induced by stresses and protects other proteins from aggregation as a result of desiccation and freezing.[64]
234
+
235
+ When water freezes in plants, the consequences for the plant depend very much on whether the freezing occurs within cells (intracellularly) or outside cells in intercellular spaces.[65] Intracellular freezing, which usually kills the cell[66] regardless of the hardiness of the plant and its tissues, seldom occurs in nature because rates of cooling are rarely high enough to support it. Rates of cooling of several degrees Celsius per minute are typically needed to cause intracellular formation of ice.[67] At rates of cooling of a few degrees Celsius per hour, segregation of ice occurs in intercellular spaces.[68] This may or may not be lethal, depending on the hardiness of the tissue. At freezing temperatures, water in the intercellular spaces of plant tissue freezes first, though the water may remain unfrozen until temperatures drop below −7 °C (19 °F).[65] After the initial formation of intercellular ice, the cells shrink as water is lost to the segregated ice, and the cells undergo freeze-drying. This dehydration is now considered the fundamental cause of freezing injury.
236
+
237
+ Plants are continuously exposed to a range of biotic and abiotic stresses. These stresses often cause DNA damage directly, or indirectly via the generation of reactive oxygen species.[69] Plants are capable of a DNA damage response that is a critical mechanism for maintaining genome stability.[70] The DNA damage response is particularly important during seed germination, since seed quality tends to deteriorate with age in association with DNA damage accumulation.[71] During germination repair processes are activated to deal with this accumulated DNA damage.[72] In particular, single- and double-strand breaks in DNA can be repaired.[73] The DNA checkpoint kinase ATM has a key role in integrating progression through germination with repair responses to the DNA damages accumulated by the aged seed.[74]
238
+
239
+ Plant cells are typically distinguished by their large water-filled central vacuole, chloroplasts, and rigid cell walls that are made up of cellulose, hemicellulose, and pectin. Cell division is also characterized by the development of a phragmoplast for the construction of a cell plate in the late stages of cytokinesis. Just as in animals, plant cells differentiate and develop into multiple cell types. Totipotent meristematic cells can differentiate into vascular, storage, protective (e.g. epidermal layer), or reproductive tissues, with more primitive plants lacking some tissue types.[75]
240
+
241
+ Plants are photosynthetic, which means that they manufacture their own food molecules using energy obtained from light. The primary mechanism plants have for capturing light energy is the pigment chlorophyll. All green plants contain two forms of chlorophyll, chlorophyll a and chlorophyll b. The latter of these pigments is not found in red or brown algae.
242
+ The simple equation of photosynthesis is as follows: carbon dioxide + water + light energy → glucose + oxygen, or 6 CO2 + 6 H2O → C6H12O6 + 6 O2.
243
+
244
+ By means of cells that behave like nerves, plants receive and distribute within their systems information about incident light intensity and quality. Incident light that stimulates a chemical reaction in one leaf will cause a chain reaction of signals to the entire plant via a type of cell termed a bundle sheath cell. Researchers from the Warsaw University of Life Sciences in Poland found that plants have a specific memory for varying light conditions, which prepares their immune systems against seasonal pathogens.[76] Plants use pattern-recognition receptors to recognize conserved microbial signatures. This recognition triggers an immune response. The first plant receptors of conserved microbial signatures were identified in rice (XA21, 1995)[77] and in Arabidopsis thaliana (FLS2, 2000).[78] Plants also carry immune receptors that recognize highly variable pathogen effectors. These include the NBS-LRR class of proteins.
245
+
246
+ Vascular plants differ from other plants in that nutrients are transported between their different parts through specialized structures, called xylem and phloem. They also have roots for taking up water and minerals. The xylem moves water and minerals from the root to the rest of the plant, and the phloem provides the roots with sugars and other nutrients produced by the leaves.[75]
247
+
248
+ Plants have some of the largest genomes among all organisms.[79] The largest plant genome (in terms of gene number) is that of wheat (Triticum aestivum), predicted to encode ≈94,000 genes[80] and thus almost 5 times as many as the human genome. The first plant genome sequenced was that of Arabidopsis thaliana, which encodes about 25,500 genes.[81] In terms of sheer DNA sequence, the smallest published genome is that of the carnivorous bladderwort (Utricularia gibba) at 82 Mb (although it still encodes 28,500 genes)[82] while the largest, from the Norway spruce (Picea abies), extends over 19,600 Mb (encoding about 28,300 genes).[83]
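+ Taking only the figures quoted above, the gene density of the smallest and largest genomes differs by more than two-hundredfold even though their gene counts are nearly identical; a small sketch of the arithmetic:

```python
# Gene density computed from the genome sizes and gene counts quoted in the text.
genomes = {
    "Utricularia gibba (bladderwort)": (82, 28_500),     # (genome size in Mb, genes)
    "Picea abies (Norway spruce)":     (19_600, 28_300),
}
for name, (size_mb, genes) in genomes.items():
    print(f"{name}: {genes / size_mb:.1f} genes per Mb")
# ~348 genes/Mb for the bladderwort versus ~1.4 genes/Mb for the spruce: a >200-fold
# difference in gene density despite nearly identical gene counts.
```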
249
+
250
+ The photosynthesis conducted by land plants and algae is the ultimate source of energy and organic material in nearly all ecosystems. Photosynthesis, at first by cyanobacteria and later by photosynthetic eukaryotes, radically changed the composition of the early Earth's anoxic atmosphere, which as a result is now 21% oxygen. Animals and most other organisms are aerobic, relying on oxygen; those that do not are confined to relatively rare anaerobic environments. Plants are the primary producers in most terrestrial ecosystems and form the basis of the food web in those ecosystems. Many animals rely on plants for shelter as well as oxygen and food.[citation needed]
251
+
252
+ Land plants are key components of the water cycle and several other biogeochemical cycles. Some plants have coevolved with nitrogen fixing bacteria, making plants an important part of the nitrogen cycle. Plant roots play an essential role in soil development and the prevention of soil erosion.[citation needed]
253
+
254
+ Plants are distributed almost worldwide. While they inhabit a multitude of biomes and ecoregions, few can be found beyond the tundras at the northernmost regions of continental shelves. At the southern extremes, plants of the Antarctic flora have adapted tenaciously to the prevailing conditions.[citation needed]
255
+
256
+ Plants are often the dominant physical and structural component of habitats where they occur. Many of the Earth's biomes are named for the type of vegetation because plants are the dominant organisms in those biomes, such as grasslands, taiga and tropical rainforest.[citation needed]
257
+
258
+ Numerous animals have coevolved with plants. Many animals pollinate flowers in exchange for food in the form of pollen or nectar. Many animals disperse seeds, often by eating fruit and passing the seeds in their feces. Myrmecophytes are plants that have coevolved with ants. The plant provides a home, and sometimes food, for the ants. In exchange, the ants defend the plant from herbivores and sometimes competing plants. Ant wastes provide organic fertilizer.
259
+
260
+ The majority of plant species have various kinds of fungi associated with their root systems in a kind of mutualistic symbiosis known as mycorrhiza. The fungi help the plants gain water and mineral nutrients from the soil, while the plant gives the fungi carbohydrates manufactured in photosynthesis. Some plants serve as homes for endophytic fungi that protect the plant from herbivores by producing toxins. The fungal endophyte, Neotyphodium coenophialum, in tall fescue (Festuca arundinacea) does tremendous economic damage to the cattle industry in the U.S.
261
+
262
+ Various forms of parasitism are also fairly common among plants, from the semi-parasitic mistletoe that merely takes some nutrients from its host, but still has photosynthetic leaves, to the fully parasitic broomrape and toothwort that acquire all their nutrients through connections to the roots of other plants, and so have no chlorophyll. Some plants, known as myco-heterotrophs, parasitize mycorrhizal fungi, and hence act as epiparasites on other plants.
263
+
264
+ Many plants are epiphytes, meaning they grow on other plants, usually trees, without parasitizing them. Epiphytes may indirectly harm their host plant by intercepting mineral nutrients and light that the host would otherwise receive. The weight of large numbers of epiphytes may break tree limbs. Hemiepiphytes like the strangler fig begin as epiphytes but eventually set their own roots and overpower and kill their host. Many orchids, bromeliads, ferns and mosses often grow as epiphytes. Bromeliad epiphytes accumulate water in leaf axils to form phytotelmata that may contain complex aquatic food webs.[84]
265
+
266
+ Approximately 630 plants are carnivorous, such as the Venus Flytrap (Dionaea muscipula) and sundew (Drosera species). They trap small animals and digest them to obtain mineral nutrients, especially nitrogen and phosphorus.[85]
267
+
268
+ The study of plant uses by people is called economic botany or ethnobotany.[86] Human cultivation of plants is part of agriculture, which is the basis of human civilization.[87] Plant agriculture is subdivided into agronomy, horticulture and forestry.[88]
269
+
270
+ Humans depend on plants for food, either directly or as feed for domestic animals. Agriculture deals with the production of food crops, and has played a key role in the history of world civilizations. Agriculture includes agronomy for arable crops, horticulture for vegetables and fruit, and forestry for timber.[89] About 7,000 species of plant have been used for food, though most of today's food is derived from only 30 species. The major staples include cereals such as rice and wheat, starchy roots and tubers such as cassava and potato, and legumes such as peas and beans. Vegetable oils such as olive oil provide lipids, while fruit and vegetables contribute vitamins and minerals to the diet.[90]
271
+
272
+ Medicinal plants are a primary source of organic compounds, both for their medicinal and physiological effects, and for the industrial synthesis of a vast array of organic chemicals.[91] Many hundreds of medicines are derived from plants, both traditional medicines used in herbalism[92][93] and chemical substances purified from plants or first identified in them, sometimes by ethnobotanical search, and then synthesised for use in modern medicine. Modern medicines derived from plants include aspirin, taxol, morphine, quinine, reserpine, colchicine, digitalis and vincristine. Plants used in herbalism include ginkgo, echinacea, feverfew, and Saint John's wort. The pharmacopoeia of Dioscorides, De Materia Medica, describing some 600 medicinal plants, was written between 50 and 70 AD and remained in use in Europe and the Middle East until around 1600 AD; it was the precursor of all modern pharmacopoeias.[94][95][96]
273
+
274
+ Plants grown as industrial crops are the source of a wide range of products used in manufacturing, sometimes so intensively as to risk harm to the environment.[97] Nonfood products include essential oils, natural dyes, pigments, waxes, resins, tannins, alkaloids, amber and cork. Products derived from plants include soaps, shampoos, perfumes, cosmetics, paint, varnish, turpentine, rubber, latex, lubricants, linoleum, plastics, inks, and gums. Renewable fuels from plants include firewood, peat and other biofuels.[98][99] The fossil fuels coal, petroleum and natural gas are derived from the remains of aquatic organisms including phytoplankton in geological time.[100]
275
+
276
+ Structural resources and fibres from plants are used to construct dwellings and to manufacture clothing. Wood is used not only for buildings, boats, and furniture, but also for smaller items such as musical instruments and sports equipment. Wood is pulped to make paper and cardboard.[101] Cloth is often made from cotton, flax, ramie or synthetic fibres such as rayon and acetate derived from plant cellulose. Thread used to sew cloth likewise comes in large part from cotton.[102]
277
+
278
+ Thousands of plant species are cultivated for aesthetic purposes as well as to provide shade, modify temperatures, reduce wind, abate noise, provide privacy, and prevent soil erosion. Plants are the basis of a multibillion-dollar per year tourism industry, which includes travel to historic gardens, national parks, rainforests, forests with colorful autumn leaves, and festivals such as Japan's[103] and America's cherry blossom festivals.[104]
279
+
280
+ While some gardens are planted with food crops, many are planted for aesthetic, ornamental, or conservation purposes. Arboretums and botanical gardens are public collections of living plants. In private outdoor gardens, lawn grasses, shade trees, ornamental trees, shrubs, vines, herbaceous perennials and bedding plants are used. Gardens may cultivate the plants in a naturalistic state, or may sculpture their growth, as with topiary or espalier. Gardening is the most popular leisure activity in the U.S., and working with plants or horticulture therapy is beneficial for rehabilitating people with disabilities.[citation needed]
281
+
282
+ Plants may also be grown or kept indoors as houseplants, or in specialized buildings such as greenhouses that are designed for the care and cultivation of living plants. Venus Flytrap, sensitive plant and resurrection plant are examples of plants sold as novelties. There are also art forms specializing in the arrangement of cut or living plants, such as bonsai, ikebana, and the arrangement of cut or dried flowers. Ornamental plants have sometimes changed the course of history, as in tulipomania.[105]
283
+
284
+ Architectural designs resembling plants appear in the capitals of Ancient Egyptian columns, which were carved to resemble either the Egyptian white lotus or the papyrus.[106] Images of plants are often used in painting and photography, as well as on textiles, money, stamps, flags and coats of arms.[citation needed]
285
+
286
+ Basic biological research has often been done with plants. In genetics, the breeding of pea plants allowed Gregor Mendel to derive the basic laws governing inheritance,[107] and examination of chromosomes in maize allowed Barbara McClintock to demonstrate their connection to inherited traits.[108] The plant Arabidopsis thaliana is used in laboratories as a model organism to understand how genes control the growth and development of plant structures.[109] NASA predicts that space stations or space colonies will one day rely on plants for life support.[110]
287
+
288
+ Ancient trees are revered and many are famous. Tree rings themselves are an important method of dating in archeology, and serve as a record of past climates.[citation needed]
289
+
290
+ Plants figure prominently in mythology, religion and literature. They are used as national and state emblems, including state trees and state flowers. Plants are often used as memorials, gifts and to mark special occasions such as births, deaths, weddings and holidays. The arrangement of flowers may be used to send hidden messages.[citation needed]
291
+
292
+ Weeds are unwanted plants growing in managed environments such as farms, urban areas, gardens, lawns, and parks. People have spread plants beyond their native ranges and some of these introduced plants become invasive, damaging existing ecosystems by displacing native species, and sometimes becoming serious weeds of cultivation.[citation needed]
293
+
294
+ Plants may cause harm to animals, including people. Plants that produce windblown pollen invoke allergic reactions in people who suffer from hay fever. A wide variety of plants are poisonous. Toxalbumins are plant poisons fatal to most mammals and act as a serious deterrent to consumption. Several plants cause skin irritations when touched, such as poison ivy. Certain plants contain psychotropic chemicals, which are extracted and ingested or smoked, including nicotine from tobacco, cannabinoids from Cannabis sativa, cocaine from Erythroxylon coca and opium from opium poppy. Smoking causes damage to health or even death, while some drugs may also be harmful or fatal to people.[111][112] Both illegal and legal drugs derived from plants may have negative effects on the economy, affecting worker productivity and law enforcement costs.[113][114]
en/4662.html.txt ADDED
@@ -0,0 +1,173 @@
1
+
2
+
3
+ Plate tectonics (from the Late Latin: tectonicus, from the Ancient Greek: τεκτονικός, lit. 'pertaining to building')[1] is a scientific theory describing the large-scale motion of seven large plates and the movements of a larger number of smaller plates of Earth's lithosphere, since tectonic processes began on Earth between 3.3[2] and 3.5 billion years ago. The model builds on the concept of continental drift, an idea developed during the first decades of the 20th century. The geoscientific community accepted plate-tectonic theory after seafloor spreading was validated in the late 1950s and early 1960s.
4
+
5
+ The lithosphere, which is the rigid outermost shell of a planet (the crust and upper mantle), is broken into tectonic plates. The Earth's lithosphere is composed of seven or eight major plates (depending on how they are defined) and many minor plates. Where the plates meet, their relative motion determines the type of boundary: convergent, divergent, or transform. Earthquakes, volcanic activity, mountain-building, and oceanic trench formation occur along these plate boundaries (or faults). The relative movement of the plates typically ranges from zero to 100 mm annually.[3]
6
+
7
+ Tectonic plates are composed of oceanic lithosphere and thicker continental lithosphere, each topped by its own kind of crust. Along convergent boundaries, subduction, or one plate moving under another, carries the lower one down into the mantle; the material lost is roughly balanced by the formation of new (oceanic) crust along divergent margins by seafloor spreading. In this way, the total surface of the lithosphere remains the same. This prediction of plate tectonics is also referred to as the conveyor belt principle. Earlier theories, since disproven, proposed gradual shrinking (contraction) or gradual expansion of the globe.[4]
8
+
9
+ Tectonic plates are able to move because the Earth's lithosphere has greater mechanical strength than the underlying asthenosphere. Lateral density variations in the mantle result in convection; that is, the slow creeping motion of Earth's solid mantle. Plate movement is thought to be driven by a combination of the motion of the seafloor away from spreading ridges due to variations in topography (the ridge is a topographic high) and density changes in the crust (density increases as newly formed crust cools and moves away from the ridge). At subduction zones the relatively cold, dense oceanic crust is "pulled" or sinks down into the mantle over the downward convecting limb of a mantle cell.[5] Another proposed explanation lies in the forces generated by the tidal pull of the Sun and the Moon. The relative importance of each of these factors and their relationship to each other is unclear, and still the subject of much debate.
10
+
11
+ The outer layers of the Earth are divided into the lithosphere and asthenosphere. The division is based on differences in mechanical properties and in the method for the transfer of heat. The lithosphere is cooler and more rigid, while the asthenosphere is hotter and flows more easily. In terms of heat transfer, the lithosphere loses heat by conduction, whereas the asthenosphere also transfers heat by convection and has a nearly adiabatic temperature gradient. This division should not be confused with the chemical subdivision of these same layers into the mantle (comprising both the asthenosphere and the mantle portion of the lithosphere) and the crust: a given piece of mantle may be part of the lithosphere or the asthenosphere at different times depending on its temperature and pressure.
12
+
13
+ The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which ride on the fluid-like (visco-elastic solid) asthenosphere. Plate motions range from a typical 10–40 mm/year (Mid-Atlantic Ridge; about as fast as fingernails grow) up to about 160 mm/year (Nazca Plate; about as fast as hair grows).[6] The driving mechanism behind this movement is described below.
14
+
15
+ Tectonic lithosphere plates consist of lithospheric mantle overlain by one or two types of crustal material: oceanic crust (in older texts called sima from silicon and magnesium) and continental crust (sial from silicon and aluminium). Average oceanic lithosphere is typically 100 km (62 mi) thick;[7] its thickness is a function of its age: as time passes, it conductively cools and subjacent cooling mantle is added to its base. Because it is formed at mid-ocean ridges and spreads outwards, its thickness is therefore a function of its distance from the mid-ocean ridge where it was formed. For a typical distance that oceanic lithosphere must travel before being subducted, the thickness varies from about 6 km (4 mi) thick at mid-ocean ridges to greater than 100 km (62 mi) at subduction zones; for shorter or longer distances, the subduction zone (and therefore also the mean) thickness becomes smaller or larger, respectively.[8] Continental lithosphere is typically about 200 km thick, though this varies considerably between basins, mountain ranges, and stable cratonic interiors of continents.
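As a rough illustration of the age-thickness relationship described above (not taken from the cited sources), the short Python sketch below assumes a simple half-space-cooling scaling in which thickness grows with the square root of age; the constant of about 10 km per square root of Myr and the function name are illustrative assumptions chosen only to reproduce the roughly 6 km to 100 km range quoted in this paragraph.

    # Illustrative sketch: oceanic lithosphere thickness versus age, assuming a
    # half-space-cooling scaling (thickness ~ sqrt(age)). The 10 km/sqrt(Myr)
    # constant is an assumed round value, not a measured parameter.
    import math

    def lithosphere_thickness_km(age_myr, scale_km_per_sqrt_myr=10.0):
        return scale_km_per_sqrt_myr * math.sqrt(age_myr)

    for age in (0.5, 10.0, 50.0, 100.0):
        print(f"{age:6.1f} Myr old seafloor -> ~{lithosphere_thickness_km(age):5.1f} km thick")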
16
+
17
+ The location where two plates meet is called a plate boundary. Plate boundaries are commonly associated with geological events such as earthquakes and the creation of topographic features such as mountains, volcanoes, mid-ocean ridges, and oceanic trenches. The majority of the world's active volcanoes occur along plate boundaries, with the Pacific Plate's Ring of Fire being the most active and widely known today. These boundaries are discussed in further detail below. Some volcanoes occur in the interiors of plates, and these have been variously attributed to internal plate deformation[9] and to mantle plumes.
18
+
19
+ As explained above, tectonic plates may include continental crust or oceanic crust, and most plates contain both. For example, the African Plate includes the continent and parts of the floor of the Atlantic and Indian Oceans. The distinction between oceanic crust and continental crust is based on their modes of formation. Oceanic crust is formed at sea-floor spreading centers, and continental crust is formed through arc volcanism and accretion of terranes through tectonic processes, though some of these terranes may contain ophiolite sequences, which are pieces of oceanic crust considered to be part of the continent when they exit the standard cycle of formation and spreading centers and subduction beneath continents. Oceanic crust is also denser than continental crust owing to their different compositions. Oceanic crust is denser because it has less silicon and a higher proportion of heavier elements ("mafic") than continental crust ("felsic").[10] As a result of this density stratification, oceanic crust generally lies below sea level (for example most of the Pacific Plate), while continental crust buoyantly projects above sea level (see the page isostasy for explanation of this principle).
20
+
21
+ Three types of plate boundaries exist,[11] with a fourth, mixed type, characterized by the way the plates move relative to each other. They are associated with different types of surface phenomena. The three main types of plate boundaries are divergent boundaries, where plates move apart (as at mid-ocean ridges); convergent boundaries, where plates move toward one another (as at subduction zones and collision zones); and transform boundaries, where plates slide horizontally past one another.[12][13]
22
+
23
+ It has generally been accepted that tectonic plates are able to move because of the relative density of oceanic lithosphere and the relative weakness of the asthenosphere. Dissipation of heat from the mantle is acknowledged to be the original source of the energy required to drive plate tectonics through convection or large scale upwelling and doming. The current view, though still a matter of some debate, asserts that as a consequence, a powerful source of plate motion is generated due to the excess density of the oceanic lithosphere sinking in subduction zones. When the new crust forms at mid-ocean ridges, this oceanic lithosphere is initially less dense than the underlying asthenosphere, but it becomes denser with age as it conductively cools and thickens. The greater density of old lithosphere relative to the underlying asthenosphere allows it to sink into the deep mantle at subduction zones, providing most of the driving force for plate movement. The weakness of the asthenosphere allows the tectonic plates to move easily towards a subduction zone.[15] Although subduction is thought to be the strongest force driving plate motions, it cannot be the only force since there are plates such as the North American Plate which are moving, yet are nowhere being subducted. The same is true for the enormous Eurasian Plate. The sources of plate motion are a matter of intensive research and discussion among scientists. One of the main points is that the kinematic pattern of the movement itself should be separated clearly from the possible geodynamic mechanism that is invoked as the driving force of the observed movement, as some patterns may be explained by more than one mechanism.[16] In short, the driving forces advocated at the moment can be divided into three categories based on the relationship to the movement: mantle dynamics related, gravity related (main driving force accepted nowadays), and earth rotation related.
24
+
25
+ For much of the last quarter century, the leading theory of the driving force behind tectonic plate motions envisaged large scale convection currents in the upper mantle, which can be transmitted through the asthenosphere. This theory was launched by Arthur Holmes and some forerunners in the 1930s[17] and was immediately recognized as supplying the driving mechanism needed for acceptance of continental drift as originally discussed in the papers of Alfred Wegener in the early years of the century. Despite this, the idea was long debated in the scientific community because the leading theory still envisaged a static Earth without moving continents, up until the major breakthroughs of the early sixties.
26
+
27
+ Two- and three-dimensional imaging of Earth's interior (seismic tomography) shows a varying lateral density distribution throughout the mantle. Such density variations can be material (from rock chemistry), mineral (from variations in mineral structures), or thermal (through thermal expansion and contraction from heat energy). The manifestation of this varying lateral density is mantle convection from buoyancy forces.[18]
28
+
29
+ How mantle convection directly and indirectly relates to plate motion is a matter of ongoing study and discussion in geodynamics. Somehow, this energy must be transferred to the lithosphere for tectonic plates to move. There are essentially two main types of forces that are thought to influence plate motion: friction and gravity.
30
+
31
+ Lately, the convection theory has been much debated, as modern techniques based on 3D seismic tomography still fail to recognize these predicted large scale convection cells.[citation needed] Alternative views have been proposed.
32
+
33
+ In the theory of plume tectonics followed by numerous researchers during the 1990s, a modified concept of mantle convection currents is used. It asserts that super plumes rise from the deeper mantle and are the drivers or substitutes of the major convection cells. These ideas find their roots in the early 1930s in the works of Beloussov and van Bemmelen, which were initially opposed to plate tectonics and placed the mechanism in a fixistic frame of verticalistic movements. Van Bemmelen later modified the concept in his "Undulation Models" and used it as the driving force for horizontal movements, invoking gravitational forces away from the regional crustal doming.[19][20]
34
+ The theories find resonance in the modern theories which envisage hot spots or mantle plumes which remain fixed and are overridden by oceanic and continental lithosphere plates over time and leave their traces in the geological record (though these phenomena are not invoked as real driving mechanisms, but rather as modulators).
35
+ The mechanism is still advocated nowadays, for example to explain the break-up of supercontinents during specific geological epochs.[21] It also still has numerous followers,[22][23] including among scientists involved in the theory of Earth expansion.[24]
36
+
37
+ Another theory is that the mantle flows neither in cells nor large plumes but rather as a series of channels just below the Earth's crust, which then provide basal friction to the lithosphere. This theory, called "surge tectonics", became quite popular in geophysics and geodynamics during the 1980s and 1990s.[25] Recent research, based on three-dimensional computer modeling, suggests that plate geometry is governed by a feedback between mantle convection patterns and the strength of the lithosphere.[26]
38
+
39
+ Forces related to gravity are invoked as secondary phenomena within the framework of a more general driving mechanism such as the various forms of mantle dynamics described above. In modern views, gravity is invoked as the major driving force, through slab pull along subduction zones.
40
+
41
+ Gravitational sliding away from a spreading ridge: According to many authors, plate motion is driven by the higher elevation of plates at ocean ridges.[27] As oceanic lithosphere is formed at spreading ridges from hot mantle material, it gradually cools and thickens with age (and thus adds distance from the ridge). Cool oceanic lithosphere is significantly denser than the hot mantle material from which it is derived and so with increasing thickness it gradually subsides into the mantle to compensate the greater load. The result is a slight lateral incline with increased distance from the ridge axis.
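The "slight lateral incline" can be made concrete with a toy subsidence calculation. The sketch below is illustrative only: it assumes the commonly used square-root-of-age form for seafloor depth, and the ridge depth (about 2.5 km), subsidence coefficient (about 350 m per square root of Myr), and half-spreading rate (20 mm/yr) are assumed round values rather than figures from this article.

    # Illustrative sketch: seafloor deepening away from a spreading ridge,
    # assuming depth ~ ridge_depth + c * sqrt(age). All constants are assumed.
    import math

    def seafloor_depth_m(age_myr, ridge_depth_m=2500.0, c_m_per_sqrt_myr=350.0):
        return ridge_depth_m + c_m_per_sqrt_myr * math.sqrt(age_myr)

    half_spreading_rate_mm_yr = 20.0  # assumed speed of the plate away from the ridge
    for distance_km in (0, 200, 1000, 2000):
        age_myr = distance_km / half_spreading_rate_mm_yr  # km / (mm/yr) equals Myr
        print(f"{distance_km:5d} km from the ridge: ~{seafloor_depth_m(age_myr) / 1000:.1f} km deep")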
42
+
43
+ This force is regarded as a secondary force and is often referred to as "ridge push". This is a misnomer as nothing is "pushing" horizontally and tensional features are dominant along ridges. It is more accurate to refer to this mechanism as gravitational sliding, since the topography across the totality of the plate can vary considerably and the topography of spreading ridges is only the most prominent feature. Other mechanisms generating this gravitational secondary force include flexural bulging of the lithosphere before it dives underneath an adjacent plate, which produces a clear topographical feature that can offset, or at least affect, the influence of topographical ocean ridges, and mantle plumes and hot spots, which are postulated to impinge on the underside of tectonic plates.
44
+
45
+ Slab-pull: Current scientific opinion is that the asthenosphere is insufficiently competent or rigid to directly cause motion by friction along the base of the lithosphere. Slab pull is therefore most widely thought to be the greatest force acting on the plates. In this current understanding, plate motion is mostly driven by the weight of cold, dense plates sinking into the mantle at trenches.[28] Recent models indicate that trench suction plays an important role as well. However, the fact that the North American Plate is nowhere being subducted, although it is in motion, presents a problem. The same holds for the African, Eurasian, and Antarctic plates.
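A back-of-envelope estimate shows why the weight of a sinking slab is regarded as such a powerful force. The sketch below computes the negative buoyancy of a subducted slab per metre of trench; the density excess, slab thickness, and down-dip slab length are assumed illustrative values, not figures from the article.

    # Back-of-envelope sketch: slab pull (negative buoyancy) per metre of trench,
    # F ~ delta_rho * g * thickness * slab_length. All inputs are assumed values.
    delta_rho = 60.0      # kg/m^3, assumed density excess of the cold slab over the mantle
    g = 9.8               # m/s^2
    thickness = 100e3     # m, thickness of old oceanic lithosphere
    slab_length = 600e3   # m, assumed down-dip length of the subducted slab

    force_per_metre = delta_rho * g * thickness * slab_length
    print(f"Slab pull per metre of trench: ~{force_per_metre:.1e} N/m")
    # A few times 10^13 N per metre of trench, which is why slab pull is usually
    # considered the dominant driving force where a subducting slab is attached.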
46
+
47
+ Gravitational sliding away from mantle doming: According to older theories, one of the driving mechanisms of the plates is the existence of large scale asthenosphere/mantle domes which cause the gravitational sliding of lithosphere plates away from them (see the paragraph on Mantle Mechanisms). This gravitational sliding represents a secondary phenomenon of this basically vertically oriented mechanism. It finds its roots in the Undation Model of van Bemmelen. This can act on various scales, from the small scale of one island arc up to the larger scale of an entire ocean basin.[29]
48
+
49
+ Alfred Wegener, being a meteorologist, had proposed tidal forces and centrifugal forces as the main driving mechanisms behind continental drift; however, these forces were considered far too small to cause continental motion as the concept was of continents plowing through oceanic crust.[30] Therefore, Wegener later changed his position and asserted that convection currents are the main driving force of plate tectonics in the last edition of his book in 1929.
50
+
51
+ However, in the plate tectonics context (accepted since the seafloor spreading proposals of Heezen, Hess, Dietz, Morley, Vine, and Matthews (see below) during the early 1960s), the oceanic crust is suggested to be in motion with the continents which caused the proposals related to Earth rotation to be reconsidered. In more recent literature, these driving forces are:
52
+
53
+ Forces that are small and generally negligible are:
54
+
55
+ For these mechanisms to be overall valid, systematic relationships should exist all over the globe between the orientation and kinematics of deformation and the geographical latitudinal and longitudinal grid of the Earth itself. Ironically, studies of these systematic relations in the second half of the nineteenth century and the first half of the twentieth century were used to argue exactly the opposite: that the plates had not moved in time, that the deformation grid was fixed with respect to the Earth's equator and axis, and that gravitational driving forces were generally acting vertically and caused only local horizontal movements (the so-called pre-plate tectonic, "fixist theories"). Later studies (discussed below on this page), therefore, invoked many of the relationships recognized during this pre-plate tectonics period to support their theories (see the anticipations and reviews in the work of van Dijk and collaborators).[34]
56
+
57
+ Of the many forces discussed in this paragraph, tidal force is still highly debated and defended as a possible principal driving force of plate tectonics. The other forces are only used in global geodynamic models not using plate tectonics concepts (therefore beyond the discussions treated in this section) or proposed as minor modulations within the overall plate tectonics model.
58
+
59
+ In 1973, George W. Moore[35] of the USGS and R. C. Bostrom[36] presented evidence for a general westward drift of the Earth's lithosphere with respect to the mantle. They concluded that tidal forces (the tidal lag or "friction") caused by the Earth's rotation and the forces acting upon it by the Moon are a driving force for plate tectonics. As the Earth spins eastward beneath the Moon, the Moon's gravity ever so slightly pulls the Earth's surface layer back westward, just as proposed by Alfred Wegener (see above). In a more recent 2006 study,[37] scientists reviewed and advocated these earlier proposed ideas. It has also been suggested recently in Lovett (2006) that this observation may also explain why Venus and Mars have no plate tectonics, as Venus has no moon and Mars' moons are too small to have significant tidal effects on the planet. In a recent paper,[38] it was suggested that, on the other hand, it can easily be observed that many plates are moving north and eastward, and that the dominantly westward motion of the Pacific Ocean basins derives simply from the eastward bias of the Pacific spreading center (which is not a predicted manifestation of such lunar forces). In the same paper the authors admit, however, that relative to the lower mantle, there is a slight westward component in the motions of all the plates. They demonstrated, though, that the westward drift, seen only for the past 30 Ma, is attributed to the increased dominance of the steadily growing and accelerating Pacific plate. The debate is still open.
60
+
61
+ The vector of a plate's motion is a function of all the forces acting on the plate; however, therein lies the problem regarding the degree to which each process contributes to the overall motion of each tectonic plate.
62
+
63
+ The diversity of geodynamic settings and the properties of each plate result from the impact of the various processes actively driving each individual plate. One method of dealing with this problem is to consider the relative rate at which each plate is moving as well as the evidence related to the significance of each process to the overall driving force on the plate.
64
+
65
+ One of the most significant correlations discovered to date is that lithospheric plates attached to downgoing (subducting) plates move much faster than plates not attached to subducting plates. The Pacific plate, for instance, is essentially surrounded by zones of subduction (the so-called Ring of Fire) and moves much faster than the plates of the Atlantic basin, which are attached (perhaps one could say 'welded') to adjacent continents instead of subducting plates. It is thus thought that forces associated with the downgoing plate (slab pull and slab suction) are the driving forces which determine the motion of plates, except for those plates which are not being subducted.[28] This view however has been contradicted by a recent study which found that the actual motions of the Pacific Plate and other plates associated with the East Pacific Rise do not correlate mainly with either slab pull or slab push, but rather with a mantle convection upwelling whose horizontal spreading along the bases of the various plates drives them along via viscosity-related traction forces.[39] The driving forces of plate motion continue to be active subjects of on-going research within geophysics and tectonophysics.
66
+
67
+ Around the start of the twentieth century, various theorists unsuccessfully attempted to explain the many geographical, geological, and biological continuities between continents. In 1912 the meteorologist Alfred Wegener described what he called continental drift, an idea that culminated fifty years later in the modern theory of plate tectonics.[40]
68
+
69
+ Wegener expanded his theory in his 1915 book The Origin of Continents and Oceans[41]. Starting from the idea (also expressed by his forerunners) that the present continents once formed a single land mass (later called Pangea), Wegener suggested that these separated and drifted apart, likening them to "icebergs" of low density granite floating on a sea of denser basalt.[42] Supporting evidence for the idea came from the dove-tailing outlines of South America's east coast and Africa's west coast, and from the matching of the rock formations along these edges. Confirmation of their previous contiguous nature also came from the fossil plants Glossopteris and Gangamopteris, and the therapsid or mammal-like reptile Lystrosaurus, all widely distributed over South America, Africa, Antarctica, India, and Australia. The evidence for such an erstwhile joining of these continents was patent to field geologists working in the southern hemisphere. The South African Alex du Toit put together a mass of such information in his 1937 publication Our Wandering Continents, and went further than Wegener in recognising the strong links between the Gondwana fragments.
70
+
71
+ Wegener's work was initially not widely accepted, in part due to a lack of detailed evidence. The Earth might have a solid crust and mantle and a liquid core, but there seemed to be no way that portions of the crust could move around. Distinguished scientists, such as Harold Jeffreys and Charles Schuchert, were outspoken critics of continental drift.
72
+
73
+ Despite much opposition, the view of continental drift gained support and a lively debate started between "drifters" or "mobilists" (proponents of the theory) and "fixists" (opponents). During the 1920s, 1930s and 1940s, the former reached important milestones proposing that convection currents might have driven the plate movements, and that spreading may have occurred below the sea within the oceanic crust. Concepts close to the elements now incorporated in plate tectonics were proposed by geophysicists and geologists (both fixists and mobilists) like Vening-Meinesz, Holmes, and Umbgrove.
74
+
75
+ One of the first pieces of geophysical evidence that was used to support the movement of lithospheric plates came from paleomagnetism. This is based on the fact that rocks of different ages show a variable magnetic field direction, evidenced by studies since the mid–nineteenth century. The magnetic north and south poles reverse through time, and, especially important in paleotectonic studies, the relative position of the magnetic north pole varies through time. Initially, during the first half of the twentieth century, the latter phenomenon was explained by introducing what was called "polar wander" (see apparent polar wander) (i.e., it was assumed that the north pole location had been shifting through time). An alternative explanation, though, was that the continents had moved (shifted and rotated) relative to the north pole, and each continent, in fact, shows its own "polar wander path". During the late 1950s it was successfully shown on two occasions that these data could show the validity of continental drift: by Keith Runcorn in a paper in 1956,[43] and by Warren Carey in a symposium held in March 1956.[44]
76
+
77
+ The second piece of evidence in support of continental drift came during the late 1950s and early 60s from data on the bathymetry of the deep ocean floors and the nature of the oceanic crust such as magnetic properties and, more generally, with the development of marine geology[45] which gave evidence for the association of seafloor spreading along the mid-oceanic ridges and magnetic field reversals, published between 1959 and 1963 by Heezen, Dietz, Hess, Mason, Vine & Matthews, and Morley.[46]
78
+
79
+ Simultaneous advances in early seismic imaging techniques in and around Wadati–Benioff zones along the trenches bounding many continental margins, together with many other geophysical (e.g. gravimetric) and geological observations, showed how the oceanic crust could disappear into the mantle, providing the mechanism to balance the extension of the ocean basins with shortening along its margins.
80
+
81
+ All this evidence, both from the ocean floor and from the continental margins, made it clear around 1965 that continental drift was feasible and the theory of plate tectonics, which was defined in a series of papers between 1965 and 1967, was born, with all its extraordinary explanatory and predictive power. The theory revolutionized the Earth sciences, explaining a diverse range of geological phenomena and their implications in other studies such as paleogeography and paleobiology.
82
+
83
+ In the late 19th and early 20th centuries, geologists assumed that the Earth's major features were fixed, and that most geologic features such as basin development and mountain ranges could be explained by vertical crustal movement, described in what is called the geosynclinal theory. Generally, this was placed in the context of a contracting planet Earth due to heat loss in the course of a relatively short geological time.
84
+
85
+ It was observed as early as 1596 that the opposite coasts of the Atlantic Ocean—or, more precisely, the edges of the continental shelves—have similar shapes and seem to have once fitted together.[47]
86
+
87
+ Since that time many theories were proposed to explain this apparent complementarity, but the assumption of a solid Earth made these various proposals difficult to accept.[48]
88
+
89
+ The discovery of radioactivity and its associated heating properties in 1895 prompted a re-examination of the apparent age of the Earth.[49] This had previously been estimated by its cooling rate under the assumption that the Earth's surface radiated like a black body.[50] Those calculations had implied that, even if it started at red heat, the Earth would have dropped to its present temperature in a few tens of millions of years. Armed with the knowledge of a new heat source, scientists realized that the Earth would be much older, and that its core was still sufficiently hot to be liquid.
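One classic version of such a cooling-rate estimate (Kelvin's conductive-cooling argument) can be reproduced in a few lines. The sketch below assumes a conductively cooling half-space, for which the surface temperature gradient decays as Tm / sqrt(pi * kappa * t); the initial temperature, diffusivity, and present-day gradient used are assumed illustrative values.

    # Illustrative sketch of a pre-radioactivity cooling-rate age estimate:
    # for a cooling half-space, surface gradient = Tm / sqrt(pi * kappa * t),
    # so t = Tm**2 / (pi * kappa * gradient**2). All inputs are assumed values.
    import math

    Tm = 2000.0        # K, assumed initial interior temperature
    kappa = 1e-6       # m^2/s, assumed thermal diffusivity of rock
    gradient = 0.025   # K/m, assumed present near-surface geothermal gradient

    age_seconds = Tm ** 2 / (math.pi * kappa * gradient ** 2)
    age_myr = age_seconds / (3.156e7 * 1e6)
    print(f"Cooling-rate age estimate: ~{age_myr:.0f} Myr")
    # Roughly 65 Myr with these inputs: 'a few tens of millions of years',
    # far short of the Earth's true age once radioactive heating is included.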
90
+
91
+ By 1915, after having published a first article in 1912,[51] Alfred Wegener was making serious arguments for the idea of continental drift in the first edition of The Origin of Continents and Oceans.[41] In that book (re-issued in four successive editions up to the final one in 1936), he noted how the east coast of South America and the west coast of Africa looked as if they were once attached. Wegener was not the first to note this (Abraham Ortelius, Antonio Snider-Pellegrini, Eduard Suess, Roberto Mantovani and Frank Bursley Taylor preceded him just to mention a few), but he was the first to marshal significant fossil and paleo-topographical and climatological evidence to support this simple observation (and was supported in this by researchers such as Alex du Toit). Furthermore, when the rock strata of the margins of separate continents are very similar it suggests that these rocks were formed in the same way, implying that they were joined initially. For instance, parts of Scotland and Ireland contain rocks very similar to those found in Newfoundland and New Brunswick. Furthermore, the Caledonian Mountains of Europe and parts of the Appalachian Mountains of North America are very similar in structure and lithology.
92
+
93
+ However, his ideas were not taken seriously by many geologists, who pointed out that there was no apparent mechanism for continental drift. Specifically, they did not see how continental rock could plow through the much denser rock that makes up oceanic crust. Wegener could not explain the force that drove continental drift, and his vindication did not come until after his death in 1930.[52]
94
+
95
+ As it had been observed early on that, although granite existed on continents, the seafloor seemed to be composed of denser basalt, the prevailing concept during the first half of the twentieth century was that there were two types of crust, named "sial" (continental type crust) and "sima" (oceanic type crust). Furthermore, it was supposed that a static shell of strata was present under the continents. It therefore appeared that a layer of basalt (sima) underlies the continental rocks.
96
+
97
+ However, based on abnormalities in plumb line deflection by the Andes in Peru, Pierre Bouguer had deduced that less-dense mountains must have a downward projection into the denser layer underneath. The concept that mountains had "roots" was confirmed by George B. Airy a hundred years later, during study of Himalayan gravitation, and seismic studies detected corresponding density variations. Therefore, by the mid-1950s, the question remained unresolved as to whether mountain roots were clenched in surrounding basalt or were floating on it like an iceberg.
98
+
99
+ During the 20th century, improvements in and greater use of seismic instruments such as seismographs enabled scientists to learn that earthquakes tend to be concentrated in specific areas, most notably along the oceanic trenches and spreading ridges. By the late 1920s, seismologists were beginning to identify several prominent earthquake zones parallel to the trenches that typically were inclined 40–60° from the horizontal and extended several hundred kilometers into the Earth. These zones later became known as Wadati–Benioff zones, or simply Benioff zones, in honor of the seismologists who first recognized them, Kiyoo Wadati of Japan and Hugo Benioff of the United States. The study of global seismicity greatly advanced in the 1960s with the establishment of the Worldwide Standardized Seismograph Network (WWSSN)[53] to monitor the compliance of the 1963 treaty banning above-ground testing of nuclear weapons. The much improved data from the WWSSN instruments allowed seismologists to map precisely the zones of earthquake concentration worldwide.
100
+
101
+ Meanwhile, debates developed around the phenomenon of polar wander. Since the early debates of continental drift, scientists had discussed and used evidence that polar drift had occurred because continents seemed to have moved through different climatic zones during the past. Furthermore, paleomagnetic data had shown that the magnetic pole had also shifted during time. Reasoning in an opposite way, the continents might have shifted and rotated, while the pole remained relatively fixed. The first time the evidence of magnetic polar wander was used to support the movements of continents was in a paper by Keith Runcorn in 1956,[43] and successive papers by him and his students Ted Irving (who was actually the first to be convinced of the fact that paleomagnetism supported continental drift) and Ken Creer.
102
+
103
+ This was immediately followed by a symposium in Tasmania in March 1956.[54] In this symposium, the evidence was used in the theory of an expansion of the global crust. In this hypothesis, the shifting of the continents can be simply explained by a large increase in the size of the Earth since its formation. However, this was unsatisfactory because its supporters could offer no convincing mechanism to produce a significant expansion of the Earth. Certainly there is no evidence that the moon has expanded in the past 3 billion years; other work would soon show that the evidence was equally in support of continental drift on a globe with a stable radius.
104
+
105
+ During the thirties up to the late fifties, works by Vening-Meinesz, Holmes, Umbgrove, and numerous others outlined concepts that were close or nearly identical to modern plate tectonics theory. In particular, the English geologist Arthur Holmes proposed in 1920 that plate junctions might lie beneath the sea, and in 1928 that convection currents within the mantle might be the driving force.[55] Often, these contributions are forgotten because:
106
+
107
+ In 1947, a team of scientists led by Maurice Ewing utilizing the Woods Hole Oceanographic Institution's research vessel Atlantis and an array of instruments, confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the layer of sediments consisted of basalt, not the granite which is the main constituent of continents. They also found that the oceanic crust was much thinner than continental crust. All these new findings raised important and intriguing questions.[56]
108
+
109
+ The new data that had been collected on the ocean basins also showed particular characteristics regarding the bathymetry. One of the major outcomes of these datasets was that all along the globe, a system of mid-oceanic ridges was detected. An important conclusion was that along this system, new ocean floor was being created, which led to the concept of the "Great Global Rift". This was described in the crucial paper of Bruce Heezen (1960),[57] which would trigger a real revolution in thinking. A profound consequence of seafloor spreading is that new crust was, and still is, being continually created along the oceanic ridges. Therefore, Heezen advocated the so-called "expanding Earth" hypothesis of S. Warren Carey (see above). So, still the question remained: how can new crust be continuously added along the oceanic ridges without increasing the size of the Earth? In reality, this question had been solved already by numerous scientists during the forties and the fifties, like Arthur Holmes, Vening-Meinesz, Coates and many others: The crust in excess disappeared along what were called the oceanic trenches, where so-called "subduction" occurred. Therefore, when various scientists during the early sixties started to reason on the data at their disposal regarding the ocean floor, the pieces of the theory quickly fell into place.
110
+
111
+ The question particularly intrigued Harry Hammond Hess, a Princeton University geologist and a Naval Reserve Rear Admiral, and Robert S. Dietz, a scientist with the U.S. Coast and Geodetic Survey who first coined the term seafloor spreading. Dietz and Hess (the former published the same idea one year earlier in Nature,[58] but priority belongs to Hess who had already distributed an unpublished manuscript of his 1962 article by 1960)[59] were among the small handful who really understood the broad implications of sea floor spreading and how it would eventually agree with the, at that time, unconventional and unaccepted ideas of continental drift and the elegant and mobilistic models proposed by previous workers like Holmes.
112
+
113
+ In the same year, Robert R. Coats of the U.S. Geological Survey described the main features of island arc subduction in the Aleutian Islands. His paper, though little noted (and even ridiculed) at the time, has since been called "seminal" and "prescient". In reality, it actually shows that the work by the European scientists on island arcs and mountain belts performed and published during the 1930s up until the 1950s was applied and appreciated also in the United States.
114
+
115
+ If the Earth's crust was expanding along the oceanic ridges, Hess and Dietz reasoned like Holmes and others before them, it must be shrinking elsewhere. Hess followed Heezen, suggesting that new oceanic crust continuously spreads away from the ridges in a conveyor belt–like motion. And, using the mobilistic concepts developed before, he correctly concluded that many millions of years later, the oceanic crust eventually descends along the continental margins where oceanic trenches—very deep, narrow canyons—are formed, e.g. along the rim of the Pacific Ocean basin. The important step Hess made was that convection currents would be the driving force in this process, arriving at the same conclusions as Holmes had decades before with the only difference that the thinning of the ocean crust was performed using Heezen's mechanism of spreading along the ridges. Hess therefore concluded that the Atlantic Ocean was expanding while the Pacific Ocean was shrinking. As old oceanic crust is "consumed" in the trenches (like Holmes and others, he thought this was done by thickening of the continental lithosphere, not, as now understood, by underthrusting at a larger scale of the oceanic crust itself into the mantle), new magma rises and erupts along the spreading ridges to form new crust. In effect, the ocean basins are perpetually being "recycled," with the creation of new crust and the destruction of old oceanic lithosphere occurring simultaneously. Thus, the new mobilistic concepts neatly explained why the Earth does not get bigger with sea floor spreading, why there is so little sediment accumulation on the ocean floor, and why oceanic rocks are much younger than continental rocks.
116
+
117
+ Beginning in the 1950s, scientists like Victor Vacquier, using magnetic instruments (magnetometers) adapted from airborne devices developed during World War II to detect submarines, began recognizing odd magnetic variations across the ocean floor. This finding, though unexpected, was not entirely surprising because it was known that basalt—the iron-rich, volcanic rock making up the ocean floor—contains a strongly magnetic mineral (magnetite) and can locally distort compass readings. This distortion was recognized by Icelandic mariners as early as the late 18th century. More important, because the presence of magnetite gives the basalt measurable magnetic properties, these newly discovered magnetic variations provided another means to study the deep ocean floor. As newly formed rock cools, such magnetic materials record the Earth's magnetic field at the time.
118
+
119
+ As more and more of the seafloor was mapped during the 1950s, the magnetic variations turned out not to be random or isolated occurrences, but instead revealed recognizable patterns. When these magnetic patterns were mapped over a wide region, the ocean floor showed a zebra-like pattern: one stripe with normal polarity and the adjoining stripe with reversed polarity. The overall pattern, defined by these alternating bands of normally and reversely polarized rock, became known as magnetic striping, and was published by Ron G. Mason and co-workers in 1961, who did not, however, find an explanation for these data in terms of sea floor spreading, as Vine, Matthews and Morley would a few years later.[60]
120
+
121
+ The discovery of magnetic striping called for an explanation. In the early 1960s scientists such as Heezen, Hess and Dietz had begun to theorise that mid-ocean ridges mark structurally weak zones where the ocean floor was being ripped in two lengthwise along the ridge crest (see the previous paragraph). New magma from deep within the Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. This process, at first termed the "conveyor belt hypothesis" and later called seafloor spreading, operating over many millions of years, continues to form new ocean floor all across the 50,000 km-long system of mid-ocean ridges.
122
+
123
+ Only four years after the maps with the "zebra pattern" of magnetic stripes were published, the link between sea floor spreading and these patterns was correctly placed, independently by Lawrence Morley, and by Fred Vine and Drummond Matthews, in 1963,[61] now called the Vine–Matthews–Morley hypothesis. This hypothesis linked these patterns to geomagnetic reversals and was supported by several lines of evidence:[62]
124
+
125
+ By explaining both the zebra-like magnetic striping and the construction of the mid-ocean ridge system, the seafloor spreading hypothesis (SFS) quickly gained converts and represented another major advance in the development of the plate-tectonics theory. Furthermore, the oceanic crust now came to be appreciated as a natural "tape recording" of the history of the geomagnetic field reversals (GMFR) of the Earth's magnetic field. Today, extensive studies are dedicated to the calibration of the normal-reversal patterns in the oceanic crust on one hand and known timescales derived from the dating of basalt layers in sedimentary sequences (magnetostratigraphy) on the other, to arrive at estimates of past spreading rates and plate reconstructions.
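The calibration described above reduces to simple arithmetic: dividing the distance from the ridge axis to a magnetic anomaly of known age by that age gives the average half-spreading rate. The sketch below uses invented (age, distance) pairs purely for illustration; only the reversal ages resemble real chron boundaries.

    # Illustrative sketch: half-spreading rate from dated magnetic reversal
    # boundaries. The (age, distance) pairs are invented for illustration.
    anomalies = [
        (0.78, 20.0),   # (reversal age in Myr, distance from ridge axis in km)
        (2.58, 66.0),
        (5.33, 135.0),
    ]

    for age_myr, distance_km in anomalies:
        rate_mm_yr = distance_km / age_myr  # km/Myr is numerically equal to mm/yr
        print(f"anomaly at {age_myr:4.2f} Myr, {distance_km:5.1f} km -> ~{rate_mm_yr:.0f} mm/yr half-rate")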
126
+
127
+ After all these considerations, Plate Tectonics (or, as it was initially called "New Global Tectonics") became quickly accepted in the scientific world, and numerous papers followed that defined the concepts:
128
+
129
+ The Plate Tectonics Revolution was the scientific and cultural change which developed from the acceptance of the plate tectonics theory. The event was a paradigm shift and scientific revolution.[69]
130
+
131
+ Continental drift theory helps biogeographers to explain the disjunct biogeographic distribution of present-day life found on different continents but having similar ancestors.[70] In particular, it explains the Gondwanan distribution of ratites and the Antarctic flora.
132
+
133
+ Reconstruction is used to establish past (and future) plate configurations, helping determine the shape and make-up of ancient supercontinents and providing a basis for paleogeography.
134
+
135
+ Current plate boundaries are defined by their seismicity.[71] Past plate boundaries within existing plates are identified from a variety of evidence, such as the presence of ophiolites that are indicative of vanished oceans.[72]
136
+
137
+ Tectonic motion is believed to have begun around 3 to 3.5 billion years ago.[73][74][why?]
138
+
139
+ Various types of quantitative and semi-quantitative information are available to constrain past plate motions. The geometric fit between continents, such as between west Africa and South America is still an important part of plate reconstruction. Magnetic stripe patterns provide a reliable guide to relative plate motions going back into the Jurassic period.[75] The tracks of hotspots give absolute reconstructions, but these are only available back to the Cretaceous.[76] Older reconstructions rely mainly on paleomagnetic pole data, although these only constrain the latitude and rotation, but not the longitude. Combining poles of different ages in a particular plate to produce apparent polar wander paths provides a method for comparing the motions of different plates through time.[77] Additional evidence comes from the distribution of certain sedimentary rock types,[78] faunal provinces shown by particular fossil groups, and the position of orogenic belts.[76]
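The reason paleomagnetic poles constrain latitude and rotation but not longitude follows from dipole geometry: for a geocentric axial dipole, the magnetic inclination locked into a rock depends only on the latitude at which it formed, through tan(inclination) = 2 tan(latitude). Below is a minimal sketch of that standard conversion; the function name and sample values are illustrative.

    # Minimal sketch: paleolatitude from magnetic inclination, assuming a
    # geocentric axial dipole field (tan(I) = 2 * tan(latitude)).
    import math

    def paleolatitude_deg(inclination_deg):
        return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2.0))

    for inc in (0.0, 30.0, 49.1, 75.0):
        print(f"inclination {inc:5.1f} deg -> paleolatitude ~{paleolatitude_deg(inc):5.1f} deg")
    # The same inclination is recorded anywhere along that line of latitude,
    # which is why longitude must be constrained by other evidence.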
140
+
141
+ The movement of plates has caused the formation and break-up of continents over time, including occasional formation of a supercontinent that contains most or all of the continents. The supercontinent Columbia or Nuna formed during a period of 2,000 to 1,800 million years ago and broke up about 1,500 to 1,300 million years ago.[79] The supercontinent Rodinia is thought to have formed about 1 billion years ago and to have embodied most or all of Earth's continents, and broken up into eight continents around 600 million years ago. The eight continents later re-assembled into another supercontinent called Pangaea; Pangaea broke up into Laurasia (which became North America and Eurasia) and Gondwana (which became the remaining continents).
142
+
143
+ The Himalayas, the world's tallest mountain range, are assumed to have been formed by the collision of two major plates. Before uplift, they were covered by the Tethys Ocean.
144
+
145
+ Depending on how they are defined, there are usually seven or eight "major" plates: African, Antarctic, Eurasian, North American, South American, Pacific, and Indo-Australian. The latter is sometimes subdivided into the Indian and Australian plates.
146
+
147
+ There are dozens of smaller plates, the seven largest of which are the Arabian, Caribbean, Juan de Fuca, Cocos, Nazca, Philippine Sea, and Scotia.
148
+
149
+ The current motion of the tectonic plates is today determined by remote sensing satellite data sets, calibrated with ground station measurements.
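In such plate-motion models, each plate's movement is expressed as a rigid rotation about an Euler pole, and the velocity of any point on the plate follows from v = omega x r. The sketch below performs that conversion for a made-up Euler vector and site; the pole position, rotation rate, and site coordinates are assumed values, not taken from any published model.

    # Minimal sketch: surface velocity of a point on a rigid plate from an Euler
    # vector, v = omega x r. Pole, rate, and site are assumed illustrative values.
    import math

    R_EARTH = 6.371e6  # metres

    def unit_vector(lat_deg, lon_deg):
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        return (math.cos(lat) * math.cos(lon), math.cos(lat) * math.sin(lon), math.sin(lat))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    # Assumed Euler vector: pole at 60 N, 100 W, rotating 0.6 degrees per million years.
    omega_rate = math.radians(0.6) / (1e6 * 3.156e7)   # rad/s
    omega = tuple(omega_rate * c for c in unit_vector(60.0, -100.0))

    # Assumed site on the plate: 20 N, 155 W.
    r = tuple(R_EARTH * c for c in unit_vector(20.0, -155.0))

    v = cross(omega, r)                                 # m/s
    speed_mm_per_yr = math.sqrt(sum(c * c for c in v)) * 1e3 * 3.156e7
    print(f"Predicted surface speed at the site: ~{speed_mm_per_yr:.0f} mm/yr")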
150
+
151
+ The appearance of plate tectonics on terrestrial planets is related to planetary mass, with more massive planets than Earth expected to exhibit plate tectonics. Earth may be a borderline case, owing its tectonic activity to abundant water [80] (silica and water form a deep eutectic).
152
+
153
+ Venus shows no evidence of active plate tectonics. There is debatable evidence of active tectonics in the planet's distant past; however, events taking place since then (such as the plausible and generally accepted hypothesis that the Venusian lithosphere has thickened greatly over the course of several hundred million years) have made constraining the course of its geologic record difficult. However, the numerous well-preserved impact craters have been utilized as a dating method to approximately date the Venusian surface (since there are thus far no known samples of Venusian rock to be dated by more reliable methods). Dates derived are dominantly in the range 500 to 750 million years ago, although ages of up to 1,200 million years ago have been calculated. This research has led to the fairly well accepted hypothesis that Venus has undergone an essentially complete volcanic resurfacing at least once in its distant past, with the last event taking place approximately within the range of estimated surface ages. While the mechanism of such an impressive thermal event remains a debated issue in Venusian geosciences, some scientists are advocates of processes involving plate motion to some extent.
154
+
155
+ One explanation for Venus's lack of plate tectonics is that on Venus temperatures are too high for significant water to be present.[81][82] The Earth's crust is soaked with water, and water plays an important role in the development of shear zones. Plate tectonics requires weak surfaces in the crust along which crustal slices can move, and it may well be that such weakening never took place on Venus because of the absence of water. However, some researchers[who?] remain convinced that plate tectonics is or was once active on this planet.
156
+
157
+ Mars is considerably smaller than Earth and Venus, and there is evidence for ice on its surface and in its crust.
158
+
159
+ In the 1990s, it was proposed that Martian Crustal Dichotomy was created by plate tectonic processes.[83] Scientists today disagree, and think that it was created either by upwelling within the Martian mantle that thickened the crust of the Southern Highlands and formed Tharsis[84] or by a giant impact that excavated the Northern Lowlands.[85]
160
+
161
+ Valles Marineris may be a tectonic boundary.[86]
162
+
163
+ Observations of the magnetic field of Mars by the Mars Global Surveyor spacecraft in 1999 revealed patterns of magnetic striping on this planet. Some scientists interpreted these as requiring plate tectonic processes, such as seafloor spreading.[87] However, their data fail a "magnetic reversal test", which is used to see if they were formed by flipping polarities of a global magnetic field.[88]
164
+
165
+ Some of the satellites of Jupiter have features that may be related to plate-tectonic style deformation, although the materials and specific mechanisms may be different from plate-tectonic activity on Earth. On 8 September 2014, NASA reported finding evidence of plate tectonics on Europa, a satellite of Jupiter—the first sign of subduction activity on a world other than Earth.[89]
166
+
167
+ Titan, the largest moon of Saturn, was reported to show tectonic activity in images taken by the Huygens probe, which landed on Titan on January 14, 2005.[90]
168
+
169
+ On Earth-sized planets, plate tectonics is more likely if there are oceans of water. However, in 2007, two independent teams of researchers came to opposing conclusions about the likelihood of plate tectonics on larger super-Earths[91][92] with one team saying that plate tectonics would be episodic or stagnant[93] and the other team saying that plate tectonics is very likely on super-earths even if the planet is dry.[80]
170
+
171
+ Consideration of plate tectonics is a part of the search for extraterrestrial intelligence and extraterrestrial life.[94]
172
+
173
en/4663.html.txt ADDED
@@ -0,0 +1,173 @@
1
+
2
+
3
+ Plate tectonics (from the Late Latin: tectonicus, from the Ancient Greek: τεκτονικός, lit. 'pertaining to building')[1] is a scientific theory describing the large-scale motion of seven large plates and the movements of a larger number of smaller plates of Earth's lithosphere, since tectonic processes began on Earth between 3.3[2] and 3.5 billion years ago. The model builds on the concept of continental drift, an idea developed during the first decades of the 20th century. The geoscientific community accepted plate-tectonic theory after seafloor spreading was validated in the late 1950s and early 1960s.
4
+
5
+ The lithosphere, which is the rigid outermost shell of a planet (the crust and upper mantle), is broken into tectonic plates. The Earth's lithosphere is composed of seven or eight major plates (depending on how they are defined) and many minor plates. Where the plates meet, their relative motion determines the type of boundary: convergent, divergent, or transform. Earthquakes, volcanic activity, mountain-building, and oceanic trench formation occur along these plate boundaries (or faults). The relative movement of the plates typically ranges from zero to 100 mm annually.[3]
6
+
7
+ Tectonic plates are composed of oceanic lithosphere and thicker continental lithosphere, each topped by its own kind of crust. Along convergent boundaries, subduction, or one plate moving under another, carries the lower one down into the mantle; the material lost is roughly balanced by the formation of new (oceanic) crust along divergent margins by seafloor spreading. In this way, the total surface of the lithosphere remains the same. This prediction of plate tectonics is also referred to as the conveyor belt principle. Earlier theories, since disproven, proposed gradual shrinking (contraction) or gradual expansion of the globe.[4]
8
+
9
+ Tectonic plates are able to move because the Earth's lithosphere has greater mechanical strength than the underlying asthenosphere. Lateral density variations in the mantle result in convection; that is, the slow creeping motion of Earth's solid mantle. Plate movement is thought to be driven by a combination of the motion of the seafloor away from spreading ridges due to variations in topography (the ridge is a topographic high) and density changes in the crust (density increases as newly formed crust cools and moves away from the ridge). At subduction zones the relatively cold, dense oceanic crust is "pulled" or sinks down into the mantle over the downward convecting limb of a mantle cell.[5] Another explanation lies in the different forces generated by tidal forces of the Sun and Moon. The relative importance of each of these factors and their relationship to each other is unclear, and still the subject of much debate.
10
+
11
+ The outer layers of the Earth are divided into the lithosphere and asthenosphere. The division is based on differences in mechanical properties and in the method for the transfer of heat. The lithosphere is cooler and more rigid, while the asthenosphere is hotter and flows more easily. In terms of heat transfer, the lithosphere loses heat by conduction, whereas the asthenosphere also transfers heat by convection and has a nearly adiabatic temperature gradient. This division should not be confused with the chemical subdivision of these same layers into the mantle (comprising both the asthenosphere and the mantle portion of the lithosphere) and the crust: a given piece of mantle may be part of the lithosphere or the asthenosphere at different times depending on its temperature and pressure.
12
+
13
+ The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which ride on the fluid-like (visco-elastic solid) asthenosphere. Plate motions range up to a typical 10–40 mm/year (Mid-Atlantic Ridge; about as fast as fingernails grow), to about 160 mm/year (Nazca Plate; about as fast as hair grows).[6] The driving mechanism behind this movement is described below.
14
+
15
+ Tectonic lithosphere plates consist of lithospheric mantle overlain by one or two types of crustal material: oceanic crust (in older texts called sima from silicon and magnesium) and continental crust (sial from silicon and aluminium). Average oceanic lithosphere is typically 100 km (62 mi) thick;[7] its thickness is a function of its age: as time passes, it conductively cools and subjacent cooling mantle is added to its base. Because it is formed at mid-ocean ridges and spreads outwards, its thickness is therefore a function of its distance from the mid-ocean ridge where it was formed. For a typical distance that oceanic lithosphere must travel before being subducted, the thickness varies from about 6 km (4 mi) thick at mid-ocean ridges to greater than 100 km (62 mi) at subduction zones; for shorter or longer distances, the subduction zone (and therefore also the mean) thickness becomes smaller or larger, respectively.[8] Continental lithosphere is typically about 200 km thick, though this varies considerably between basins, mountain ranges, and stable cratonic interiors of continents.
16
+
17
+ The location where two plates meet is called a plate boundary. Plate boundaries are commonly associated with geological events such as earthquakes and the creation of topographic features such as mountains, volcanoes, mid-ocean ridges, and oceanic trenches. The majority of the world's active volcanoes occur along plate boundaries, with the Pacific Plate's Ring of Fire being the most active and widely known today. These boundaries are discussed in further detail below. Some volcanoes occur in the interiors of plates, and these have been variously attributed to internal plate deformation[9] and to mantle plumes.
18
+
19
+ As explained above, tectonic plates may include continental crust or oceanic crust, and most plates contain both. For example, the African Plate includes the continent and parts of the floor of the Atlantic and Indian Oceans. The distinction between oceanic crust and continental crust is based on their modes of formation. Oceanic crust is formed at sea-floor spreading centers, and continental crust is formed through arc volcanism and accretion of terranes through tectonic processes, though some of these terranes may contain ophiolite sequences, which are pieces of oceanic crust considered to be part of the continent when they exit the standard cycle of formation and spreading centers and subduction beneath continents. Oceanic crust is also denser than continental crust owing to their different compositions. Oceanic crust is denser because it has less silicon and more heavier elements ("mafic") than continental crust ("felsic").[10] As a result of this density stratification, oceanic crust generally lies below sea level (for example most of the Pacific Plate), while continental crust buoyantly projects above sea level (see the page isostasy for explanation of this principle).
20
+
21
+ Three types of plate boundaries exist,[11] with a fourth, mixed type, characterized by the way the plates move relative to each other. They are associated with different types of surface phenomena. The different types of plate boundaries are:[12][13]
22
+
23
+ It has generally been accepted that tectonic plates are able to move because of the relative density of oceanic lithosphere and the relative weakness of the asthenosphere. Dissipation of heat from the mantle is acknowledged to be the original source of the energy required to drive plate tectonics through convection or large scale upwelling and doming. The current view, though still a matter of some debate, asserts that as a consequence, a powerful source of plate motion is generated due to the excess density of the oceanic lithosphere sinking in subduction zones. When the new crust forms at mid-ocean ridges, this oceanic lithosphere is initially less dense than the underlying asthenosphere, but it becomes denser with age as it conductively cools and thickens. The greater density of old lithosphere relative to the underlying asthenosphere allows it to sink into the deep mantle at subduction zones, providing most of the driving force for plate movement. The weakness of the asthenosphere allows the tectonic plates to move easily towards a subduction zone.[15] Although subduction is thought to be the strongest force driving plate motions, it cannot be the only force since there are plates such as the North American Plate which are moving, yet are nowhere being subducted. The same is true for the enormous Eurasian Plate. The sources of plate motion are a matter of intensive research and discussion among scientists. One of the main points is that the kinematic pattern of the movement itself should be separated clearly from the possible geodynamic mechanism that is invoked as the driving force of the observed movement, as some patterns may be explained by more than one mechanism.[16] In short, the driving forces advocated at the moment can be divided into three categories based on the relationship to the movement: mantle dynamics related, gravity related (main driving force accepted nowadays), and earth rotation related.
24
+
25
+ For much of the last quarter century, the leading theory of the driving force behind tectonic plate motions envisaged large scale convection currents in the upper mantle, which can be transmitted through the asthenosphere. This theory was launched by Arthur Holmes and some forerunners in the 1930s[17] and was immediately recognized as providing a mechanism that could allow acceptance of the theory as originally discussed in the papers of Alfred Wegener in the early years of the century. However, despite this recognition, the idea was long debated in the scientific community because the leading theory still envisaged a static Earth without moving continents up until the major breakthroughs of the early sixties.
26
+
27
+ Two- and three-dimensional imaging of Earth's interior (seismic tomography) shows a varying lateral density distribution throughout the mantle. Such density variations can be material (from rock chemistry), mineral (from variations in mineral structures), or thermal (through thermal expansion and contraction from heat energy). The manifestation of this varying lateral density is mantle convection from buoyancy forces.[18]
28
+
29
+ How mantle convection directly and indirectly relates to plate motion is a matter of ongoing study and discussion in geodynamics. Somehow, this energy must be transferred to the lithosphere for tectonic plates to move. There are essentially two main types of forces that are thought to influence plate motion: friction and gravity.
30
+
31
+ Lately, the convection theory has been much debated, as modern techniques based on 3D seismic tomography still fail to recognize these predicted large scale convection cells.[citation needed] Alternative views have been proposed.
32
+
33
+ In the theory of plume tectonics followed by numerous researchers during the 1990s, a modified concept of mantle convection currents is used. It asserts that super plumes rise from the deeper mantle and are the drivers or substitutes of the major convection cells. These ideas find their roots in the early 1930s in the works of Beloussov and van Bemmelen, which were initially opposed to plate tectonics and placed the mechanism in a fixist framework of vertical movements. Van Bemmelen later modified the concept in his "Undulation Models" and used it as the driving force for horizontal movements, invoking gravitational forces away from the regional crustal doming.[19][20]
34
+ These theories find resonance in modern theories that envisage hot spots or mantle plumes which remain fixed and are overridden by oceanic and continental lithospheric plates over time, leaving their traces in the geological record (though these phenomena are not invoked as real driving mechanisms, but rather as modulators).
35
+ The mechanism is still advocated today, for example, to explain the break-up of supercontinents during specific geological epochs.[21] It still has numerous followers,[22][23] including among the scientists involved in the theory of Earth expansion.[24]
36
+
37
+ Another theory is that the mantle flows neither in cells nor large plumes but rather as a series of channels just below the Earth's crust, which then provide basal friction to the lithosphere. This theory, called "surge tectonics", became quite popular in geophysics and geodynamics during the 1980s and 1990s.[25] Recent research, based on three-dimensional computer modeling, suggests that plate geometry is governed by a feedback between mantle convection patterns and the strength of the lithosphere.[26]
38
+
39
+ Forces related to gravity are invoked as secondary phenomena within the framework of a more general driving mechanism such as the various forms of mantle dynamics described above. In modern views, gravity is invoked as the major driving force, through slab pull along subduction zones.
40
+
41
+ Gravitational sliding away from a spreading ridge: According to many authors, plate motion is driven by the higher elevation of plates at ocean ridges.[27] As oceanic lithosphere is formed at spreading ridges from hot mantle material, it gradually cools and thickens with age (and thus adds distance from the ridge). Cool oceanic lithosphere is significantly denser than the hot mantle material from which it is derived, and so with increasing thickness it gradually subsides into the mantle to compensate for the greater load. The result is a slight lateral incline with increased distance from the ridge axis.
42
+
43
+ This force is regarded as a secondary force and is often referred to as "ridge push". This is a misnomer as nothing is "pushing" horizontally and tensional features are dominant along ridges. It is more accurate to refer to this mechanism as gravitational sliding as variable topography across the totality of the plate can vary considerably and the topography of spreading ridges is only the most prominent feature. Other mechanisms generating this gravitational secondary force include flexural bulging of the lithosphere before it dives underneath an adjacent plate which produces a clear topographical feature that can offset, or at least affect, the influence of topographical ocean ridges, and mantle plumes and hot spots, which are postulated to impinge on the underside of tectonic plates.
44
+
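The age-dependent cooling, thickening, and subsidence described in the two paragraphs above are commonly approximated with half-space cooling relations, in which both lithospheric thickness and seafloor depth grow roughly with the square root of age. The Python sketch below is only illustrative; the thermal diffusivity and subsidence coefficient are typical assumed values, not figures taken from this article.

```python
import math

# Illustrative half-space cooling relations for aging oceanic lithosphere.
# Parameter values are assumed, order-of-magnitude numbers.
KAPPA = 1.0e-6             # thermal diffusivity of mantle rock, m^2/s (assumed)
SECONDS_PER_MYR = 3.156e13

def lithosphere_thickness_km(age_myr):
    """Thermal boundary-layer thickness, roughly 2.32 * sqrt(kappa * t)."""
    t_seconds = age_myr * SECONDS_PER_MYR
    return 2.32 * math.sqrt(KAPPA * t_seconds) / 1000.0

def subsidence_below_ridge_m(age_myr, coefficient=350.0):
    """Deepening of the seafloor away from the ridge crest, roughly c * sqrt(age in Myr)."""
    return coefficient * math.sqrt(age_myr)

for age in (1, 10, 50, 100):
    print(f"{age:>3} Myr old seafloor: ~{lithosphere_thickness_km(age):5.1f} km thick, "
          f"~{subsidence_below_ridge_m(age):6.0f} m deeper than the ridge crest")
```

With these assumed numbers, 100-million-year-old seafloor comes out roughly 130 km thick and about 3.5 km deeper than the ridge crest, which is the slight lateral incline referred to above.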
45
+ Slab-pull: Current scientific opinion is that the asthenosphere is insufficiently competent or rigid to directly cause motion by friction along the base of the lithosphere. Slab pull is therefore most widely thought to be the greatest force acting on the plates. In this current understanding, plate motion is mostly driven by the weight of cold, dense plates sinking into the mantle at trenches.[28] Recent models indicate that trench suction plays an important role as well. However, the fact that the North American Plate is nowhere being subducted, although it is in motion, presents a problem. The same holds for the African, Eurasian, and Antarctic plates.
46
+
47
+ Gravitational sliding away from mantle doming: According to older theories, one of the driving mechanisms of the plates is the existence of large scale asthenosphere/mantle domes which cause the gravitational sliding of lithosphere plates away from them (see the paragraph on Mantle Mechanisms). This gravitational sliding represents a secondary phenomenon of this basically vertically oriented mechanism. It finds its roots in the Undation Model of van Bemmelen. This can act on various scales, from the small scale of one island arc up to the larger scale of an entire ocean basin.[29]
48
+
49
+ Alfred Wegener, being a meteorologist, had proposed tidal forces and centrifugal forces as the main driving mechanisms behind continental drift; however, these forces were considered far too small to cause continental motion as the concept was of continents plowing through oceanic crust.[30] Therefore, Wegener later changed his position and asserted that convection currents are the main driving force of plate tectonics in the last edition of his book in 1929.
50
+
51
+ However, in the plate tectonics context (accepted since the seafloor spreading proposals of Heezen, Hess, Dietz, Morley, Vine, and Matthews (see below) during the early 1960s), the oceanic crust is suggested to be in motion with the continents which caused the proposals related to Earth rotation to be reconsidered. In more recent literature, these driving forces are:
52
+
53
+ Forces that are small and generally negligible are:
54
+
55
+ For these mechanisms to be overall valid, systematic relationships should exist all over the globe between the orientation and kinematics of deformation and the geographical latitudinal and longitudinal grid of the Earth itself. Ironically, the studies of these systematic relations in the second half of the nineteenth century and the first half of the twentieth century underline exactly the opposite: that the plates had not moved in time, that the deformation grid was fixed with respect to the Earth equator and axis, and that gravitational driving forces were generally acting vertically and caused only local horizontal movements (the so-called pre-plate tectonic, "fixist theories"). Later studies (discussed below on this page), therefore, invoked many of the relationships recognized during this pre-plate tectonics period to support their theories (see the anticipations and reviews in the work of van Dijk and collaborators).[34]
56
+
57
+ Of the many forces discussed in this paragraph, tidal force is still highly debated and defended as a possible principal driving force of plate tectonics. The other forces are only used in global geodynamic models not using plate tectonics concepts (therefore beyond the discussions treated in this section) or proposed as minor modulations within the overall plate tectonics model.
58
+
59
+ In 1973, George W. Moore[35] of the USGS and R. C. Bostrom[36] presented evidence for a general westward drift of the Earth's lithosphere with respect to the mantle. They concluded that tidal forces (the tidal lag or "friction") caused by the Earth's rotation and the forces acting upon it by the Moon are a driving force for plate tectonics. As the Earth spins eastward beneath the moon, the moon's gravity ever so slightly pulls the Earth's surface layer back westward, just as proposed by Alfred Wegener (see above). In a more recent 2006 study,[37] scientists reviewed and advocated these earlier proposed ideas. It has also been suggested recently in Lovett (2006) that this observation may also explain why Venus and Mars have no plate tectonics, as Venus has no moon and Mars' moons are too small to have significant tidal effects on the planet. In a recent paper,[38] it was suggested that, on the other hand, it can easily be observed that many plates are moving north and eastward, and that the dominantly westward motion of the Pacific Ocean basins derives simply from the eastward bias of the Pacific spreading center (which is not a predicted manifestation of such lunar forces). In the same paper the authors admit, however, that relative to the lower mantle, there is a slight westward component in the motions of all the plates. They demonstrated though that the westward drift, seen only for the past 30 Ma, is attributed to the increased dominance of the steadily growing and accelerating Pacific plate. The debate is still open.
60
+
61
+ The vector of a plate's motion is a function of all the forces acting on the plate; however, therein lies the problem regarding the degree to which each process contributes to the overall motion of each tectonic plate.
62
+
63
+ The diversity of geodynamic settings and the properties of each plate result from the impact of the various processes actively driving each individual plate. One method of dealing with this problem is to consider the relative rate at which each plate is moving as well as the evidence related to the significance of each process to the overall driving force on the plate.
64
+
65
+ One of the most significant correlations discovered to date is that lithospheric plates attached to downgoing (subducting) plates move much faster than plates not attached to subducting plates. The Pacific plate, for instance, is essentially surrounded by zones of subduction (the so-called Ring of Fire) and moves much faster than the plates of the Atlantic basin, which are attached (perhaps one could say 'welded') to adjacent continents instead of subducting plates. It is thus thought that forces associated with the downgoing plate (slab pull and slab suction) are the driving forces which determine the motion of plates, except for those plates which are not being subducted.[28] This view however has been contradicted by a recent study which found that the actual motions of the Pacific Plate and other plates associated with the East Pacific Rise do not correlate mainly with either slab pull or slab push, but rather with a mantle convection upwelling whose horizontal spreading along the bases of the various plates drives them along via viscosity-related traction forces.[39] The driving forces of plate motion continue to be active subjects of on-going research within geophysics and tectonophysics.
66
+
67
+ Around the start of the twentieth century, various theorists unsuccessfully attempted to explain the many geographical, geological, and biological continuities between continents. In 1912 the meteorologist Alfred Wegener described what he called continental drift, an idea that culminated fifty years later in the modern theory of plate tectonics.[40]
68
+
69
+ Wegener expanded his theory in his 1915 book The Origin of Continents and Oceans[41]. Starting from the idea (also expressed by his forerunners) that the present continents once formed a single land mass (later called Pangea), Wegener suggested that these separated and drifted apart, likening them to "icebergs" of low density granite floating on a sea of denser basalt.[42] Supporting evidence for the idea came from the dove-tailing outlines of South America's east coast and Africa's west coast, and from the matching of the rock formations along these edges. Confirmation of their previous contiguous nature also came from the fossil plants Glossopteris and Gangamopteris, and the therapsid or mammal-like reptile Lystrosaurus, all widely distributed over South America, Africa, Antarctica, India, and Australia. The evidence for such an erstwhile joining of these continents was patent to field geologists working in the southern hemisphere. The South African Alex du Toit put together a mass of such information in his 1937 publication Our Wandering Continents, and went further than Wegener in recognising the strong links between the Gondwana fragments.
70
+
71
+ Wegener's work was initially not widely accepted, in part due to a lack of detailed evidence. The Earth might have a solid crust and mantle and a liquid core, but there seemed to be no way that portions of the crust could move around. Distinguished scientists, such as Harold Jeffreys and Charles Schuchert, were outspoken critics of continental drift.
72
+
73
+ Despite much opposition, the view of continental drift gained support and a lively debate started between "drifters" or "mobilists" (proponents of the theory) and "fixists" (opponents). During the 1920s, 1930s and 1940s, the former reached important milestones proposing that convection currents might have driven the plate movements, and that spreading may have occurred below the sea within the oceanic crust. Concepts close to the elements now incorporated in plate tectonics were proposed by geophysicists and geologists (both fixists and mobilists) like Vening-Meinesz, Holmes, and Umbgrove.
74
+
75
+ One of the first pieces of geophysical evidence that was used to support the movement of lithospheric plates came from paleomagnetism. This is based on the fact that rocks of different ages show a variable magnetic field direction, evidenced by studies since the mid–nineteenth century. The magnetic north and south poles reverse through time, and, especially important in paleotectonic studies, the relative position of the magnetic north pole varies through time. Initially, during the first half of the twentieth century, the latter phenomenon was explained by introducing what was called "polar wander" (see apparent polar wander) (i.e., it was assumed that the north pole location had been shifting through time). An alternative explanation, though, was that the continents had moved (shifted and rotated) relative to the north pole, and each continent, in fact, shows its own "polar wander path". During the late 1950s it was successfully shown on two occasions that these data could show the validity of continental drift: by Keith Runcorn in a paper in 1956,[43] and by Warren Carey in a symposium held in March 1956.[44]
76
+
77
+ The second piece of evidence in support of continental drift came during the late 1950s and early 60s from data on the bathymetry of the deep ocean floors and the nature of the oceanic crust such as magnetic properties and, more generally, with the development of marine geology[45] which gave evidence for the association of seafloor spreading along the mid-oceanic ridges and magnetic field reversals, published between 1959 and 1963 by Heezen, Dietz, Hess, Mason, Vine & Matthews, and Morley.[46]
78
+
79
+ Simultaneous advances in early seismic imaging techniques in and around Wadati–Benioff zones along the trenches bounding many continental margins, together with many other geophysical (e.g. gravimetric) and geological observations, showed how the oceanic crust could disappear into the mantle, providing the mechanism to balance the extension of the ocean basins with shortening along their margins.
80
+
81
+ All this evidence, both from the ocean floor and from the continental margins, made it clear around 1965 that continental drift was feasible and the theory of plate tectonics, which was defined in a series of papers between 1965 and 1967, was born, with all its extraordinary explanatory and predictive power. The theory revolutionized the Earth sciences, explaining a diverse range of geological phenomena and their implications in other studies such as paleogeography and paleobiology.
82
+
83
+ In the late 19th and early 20th centuries, geologists assumed that the Earth's major features were fixed, and that most geologic features such as basin development and mountain ranges could be explained by vertical crustal movement, described in what is called the geosynclinal theory. Generally, this was placed in the context of a contracting planet Earth due to heat loss in the course of a relatively short geological time.
84
+
85
+ It was observed as early as 1596 that the opposite coasts of the Atlantic Ocean—or, more precisely, the edges of the continental shelves—have similar shapes and seem to have once fitted together.[47]
86
+
87
+ Since that time many theories were proposed to explain this apparent complementarity, but the assumption of a solid Earth made these various proposals difficult to accept.[48]
88
+
89
+ The discovery of radioactivity and its associated heating properties in 1895 prompted a re-examination of the apparent age of the Earth.[49] This had previously been estimated by its cooling rate under the assumption that the Earth's surface radiated like a black body.[50] Those calculations had implied that, even if it started at red heat, the Earth would have dropped to its present temperature in a few tens of millions of years. Armed with the knowledge of a new heat source, scientists realized that the Earth would be much older, and that its core was still sufficiently hot to be liquid.
90
+
91
+ By 1915, after having published a first article in 1912,[51] Alfred Wegener was making serious arguments for the idea of continental drift in the first edition of The Origin of Continents and Oceans.[41] In that book (re-issued in four successive editions up to the final one in 1936), he noted how the east coast of South America and the west coast of Africa looked as if they were once attached. Wegener was not the first to note this (Abraham Ortelius, Antonio Snider-Pellegrini, Eduard Suess, Roberto Mantovani and Frank Bursley Taylor preceded him just to mention a few), but he was the first to marshal significant fossil and paleo-topographical and climatological evidence to support this simple observation (and was supported in this by researchers such as Alex du Toit). Furthermore, when the rock strata of the margins of separate continents are very similar it suggests that these rocks were formed in the same way, implying that they were joined initially. For instance, parts of Scotland and Ireland contain rocks very similar to those found in Newfoundland and New Brunswick. Furthermore, the Caledonian Mountains of Europe and parts of the Appalachian Mountains of North America are very similar in structure and lithology.
92
+
93
+ However, his ideas were not taken seriously by many geologists, who pointed out that there was no apparent mechanism for continental drift. Specifically, they did not see how continental rock could plow through the much denser rock that makes up oceanic crust. Wegener could not explain the force that drove continental drift, and his vindication did not come until after his death in 1930.[52]
94
+
95
+ As it was observed early on that although granite existed on continents, seafloor seemed to be composed of denser basalt, the prevailing concept during the first half of the twentieth century was that there were two types of crust, named "sial" (continental type crust) and "sima" (oceanic type crust). Furthermore, it was supposed that a static shell of strata was present under the continents. It therefore looked apparent that a layer of basalt (sima) underlies the continental rocks.
96
+
97
+ However, based on abnormalities in plumb line deflection by the Andes in Peru, Pierre Bouguer had deduced that less-dense mountains must have a downward projection into the denser layer underneath. The concept that mountains had "roots" was confirmed by George B. Airy a hundred years later, during study of Himalayan gravitation, and seismic studies detected corresponding density variations. Therefore, by the mid-1950s, the question remained unresolved as to whether mountain roots were clenched in surrounding basalt or were floating on it like an iceberg.
98
+
99
+ During the 20th century, improvements in and greater use of seismic instruments such as seismographs enabled scientists to learn that earthquakes tend to be concentrated in specific areas, most notably along the oceanic trenches and spreading ridges. By the late 1920s, seismologists were beginning to identify several prominent earthquake zones parallel to the trenches that typically were inclined 40–60° from the horizontal and extended several hundred kilometers into the Earth. These zones later became known as Wadati–Benioff zones, or simply Benioff zones, in honor of the seismologists who first recognized them, Kiyoo Wadati of Japan and Hugo Benioff of the United States. The study of global seismicity greatly advanced in the 1960s with the establishment of the Worldwide Standardized Seismograph Network (WWSSN)[53] to monitor the compliance of the 1963 treaty banning above-ground testing of nuclear weapons. The much improved data from the WWSSN instruments allowed seismologists to map precisely the zones of earthquake concentration worldwide.
100
+
101
+ Meanwhile, debates developed around the phenomenon of polar wander. Since the early debates of continental drift, scientists had discussed and used evidence that polar drift had occurred because continents seemed to have moved through different climatic zones during the past. Furthermore, paleomagnetic data had shown that the magnetic pole had also shifted during time. Reasoning in an opposite way, the continents might have shifted and rotated, while the pole remained relatively fixed. The first time the evidence of magnetic polar wander was used to support the movements of continents was in a paper by Keith Runcorn in 1956,[43] and successive papers by him and his students Ted Irving (who was actually the first to be convinced of the fact that paleomagnetism supported continental drift) and Ken Creer.
102
+
103
+ This was immediately followed by a symposium in Tasmania in March 1956.[54] In this symposium, the evidence was used in the theory of an expansion of the global crust. In this hypothesis, the shifting of the continents can be simply explained by a large increase in the size of the Earth since its formation. However, this was unsatisfactory because its supporters could offer no convincing mechanism to produce a significant expansion of the Earth. Certainly there is no evidence that the moon has expanded in the past 3 billion years; other work would soon show that the evidence was equally in support of continental drift on a globe with a stable radius.
104
+
105
+ During the thirties up to the late fifties, works by Vening-Meinesz, Holmes, Umbgrove, and numerous others outlined concepts that were close or nearly identical to modern plate tectonics theory. In particular, the English geologist Arthur Holmes proposed in 1920 that plate junctions might lie beneath the sea, and in 1928 that convection currents within the mantle might be the driving force.[55] Often, these contributions are forgotten because:
106
+
107
+ In 1947, a team of scientists led by Maurice Ewing, utilizing the Woods Hole Oceanographic Institution's research vessel Atlantis and an array of instruments, confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the layer of sediments consisted of basalt, not the granite which is the main constituent of continents. They also found that the oceanic crust was much thinner than continental crust. All these new findings raised important and intriguing questions.[56]
108
+
109
+ The new data that had been collected on the ocean basins also showed particular characteristics regarding the bathymetry. One of the major outcomes of these datasets was that all along the globe, a system of mid-oceanic ridges was detected. An important conclusion was that along this system, new ocean floor was being created, which led to the concept of the "Great Global Rift". This was described in the crucial paper of Bruce Heezen (1960),[57] which would trigger a real revolution in thinking. A profound consequence of seafloor spreading is that new crust was, and still is, being continually created along the oceanic ridges. Therefore, Heezen advocated the so-called "expanding Earth" hypothesis of S. Warren Carey (see above). So, still the question remained: how can new crust be continuously added along the oceanic ridges without increasing the size of the Earth? In reality, this question had been solved already by numerous scientists during the forties and the fifties, like Arthur Holmes, Vening-Meinesz, Coates and many others: The crust in excess disappeared along what were called the oceanic trenches, where so-called "subduction" occurred. Therefore, when various scientists during the early sixties started to reason on the data at their disposal regarding the ocean floor, the pieces of the theory quickly fell into place.
110
+
111
+ The question particularly intrigued Harry Hammond Hess, a Princeton University geologist and a Naval Reserve Rear Admiral, and Robert S. Dietz, a scientist with the U.S. Coast and Geodetic Survey who first coined the term seafloor spreading. Dietz and Hess (the former published the same idea one year earlier in Nature,[58] but priority belongs to Hess who had already distributed an unpublished manuscript of his 1962 article by 1960)[59] were among the small handful who really understood the broad implications of sea floor spreading and how it would eventually agree with the, at that time, unconventional and unaccepted ideas of continental drift and the elegant and mobilistic models proposed by previous workers like Holmes.
112
+
113
+ In the same year, Robert R. Coats of the U.S. Geological Survey described the main features of island arc subduction in the Aleutian Islands. His paper, though little noted (and even ridiculed) at the time, has since been called "seminal" and "prescient". In reality, it actually shows that the work by the European scientists on island arcs and mountain belts performed and published during the 1930s up until the 1950s was applied and appreciated also in the United States.
114
+
115
+ If the Earth's crust was expanding along the oceanic ridges, Hess and Dietz reasoned like Holmes and others before them, it must be shrinking elsewhere. Hess followed Heezen, suggesting that new oceanic crust continuously spreads away from the ridges in a conveyor belt–like motion. And, using the mobilistic concepts developed before, he correctly concluded that many millions of years later, the oceanic crust eventually descends along the continental margins where oceanic trenches—very deep, narrow canyons—are formed, e.g. along the rim of the Pacific Ocean basin. The important step Hess made was that convection currents would be the driving force in this process, arriving at the same conclusions as Holmes had decades before with the only difference that the thinning of the ocean crust was performed using Heezen's mechanism of spreading along the ridges. Hess therefore concluded that the Atlantic Ocean was expanding while the Pacific Ocean was shrinking. As old oceanic crust is "consumed" in the trenches (like Holmes and others, he thought this was done by thickening of the continental lithosphere, not, as now understood, by underthrusting at a larger scale of the oceanic crust itself into the mantle), new magma rises and erupts along the spreading ridges to form new crust. In effect, the ocean basins are perpetually being "recycled," with the creation of new crust and the destruction of old oceanic lithosphere occurring simultaneously. Thus, the new mobilistic concepts neatly explained why the Earth does not get bigger with sea floor spreading, why there is so little sediment accumulation on the ocean floor, and why oceanic rocks are much younger than continental rocks.
116
+
117
+ Beginning in the 1950s, scientists like Victor Vacquier, using magnetic instruments (magnetometers) adapted from airborne devices developed during World War II to detect submarines, began recognizing odd magnetic variations across the ocean floor. This finding, though unexpected, was not entirely surprising because it was known that basalt—the iron-rich, volcanic rock making up the ocean floor—contains a strongly magnetic mineral (magnetite) and can locally distort compass readings. This distortion was recognized by Icelandic mariners as early as the late 18th century. More important, because the presence of magnetite gives the basalt measurable magnetic properties, these newly discovered magnetic variations provided another means to study the deep ocean floor. As newly formed rock cools, such magnetic materials record the Earth's magnetic field at the time.
118
+
119
+ As more and more of the seafloor was mapped during the 1950s, the magnetic variations turned out not to be random or isolated occurrences, but instead revealed recognizable patterns. When these magnetic patterns were mapped over a wide region, the ocean floor showed a zebra-like pattern: one stripe with normal polarity and the adjoining stripe with reversed polarity. The overall pattern, defined by these alternating bands of normally and reversely polarized rock, became known as magnetic striping, and was published by Ron G. Mason and co-workers in 1961, who did not, however, find an explanation for these data in terms of sea floor spreading, as Vine, Matthews and Morley would a few years later.[60]
120
+
121
+ The discovery of magnetic striping called for an explanation. In the early 1960s scientists such as Heezen, Hess and Dietz had begun to theorise that mid-ocean ridges mark structurally weak zones where the ocean floor was being ripped in two lengthwise along the ridge crest (see the previous paragraph). New magma from deep within the Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. This process, at first termed the "conveyor belt hypothesis" and later called seafloor spreading, operating over many millions of years, continues to form new ocean floor all across the 50,000 km-long system of mid-ocean ridges.
122
+
123
+ Only four years after the maps with the "zebra pattern" of magnetic stripes were published, the link between sea floor spreading and these patterns was correctly placed, independently by Lawrence Morley, and by Fred Vine and Drummond Matthews, in 1963,[61] now called the Vine–Matthews–Morley hypothesis. This hypothesis linked these patterns to geomagnetic reversals and was supported by several lines of evidence:[62]
124
+
125
+ By explaining both the zebra-like magnetic striping and the construction of the mid-ocean ridge system, the seafloor spreading hypothesis (SFS) quickly gained converts and represented another major advance in the development of the plate-tectonics theory. Furthermore, the oceanic crust now came to be appreciated as a natural "tape recording" of the history of the geomagnetic field reversals (GMFR) of the Earth's magnetic field. Today, extensive studies are dedicated to the calibration of the normal-reversal patterns in the oceanic crust on one hand and known timescales derived from the dating of basalt layers in sedimentary sequences (magnetostratigraphy) on the other, to arrive at estimates of past spreading rates and plate reconstructions.
126
+
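As a concrete, if simplified, illustration of how such calibration yields spreading rates: the width of a magnetic stripe on one ridge flank, divided by the duration of the polarity interval (chron) it records, gives a half-spreading rate. The numbers in the sketch below are hypothetical and serve only to show the arithmetic.

```python
# Hypothetical example: estimating a spreading rate from magnetic striping.
def half_spreading_rate_mm_per_yr(stripe_width_km, chron_duration_myr):
    """Crust accreted on one ridge flank divided by the elapsed time."""
    return (stripe_width_km * 1.0e6) / (chron_duration_myr * 1.0e6)  # mm per year

# A 30 km wide stripe recording a 1-million-year polarity chron:
half_rate = half_spreading_rate_mm_per_yr(stripe_width_km=30.0, chron_duration_myr=1.0)
print(half_rate, "mm/yr half rate ->", 2 * half_rate, "mm/yr full spreading rate")
```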
127
+ After all these considerations, Plate Tectonics (or, as it was initially called, "New Global Tectonics") quickly became accepted in the scientific world, and numerous papers followed that defined the concepts:
128
+
129
+ The Plate Tectonics Revolution was the scientific and cultural change which developed from the acceptance of the plate tectonics theory. The event was a paradigm shift and scientific revolution.[69]
130
+
131
+ Continental drift theory helps biogeographers to explain the disjunct biogeographic distribution of present-day life found on different continents but having similar ancestors.[70] In particular, it explains the Gondwanan distribution of ratites and the Antarctic flora.
132
+
133
+ Reconstruction is used to establish past (and future) plate configurations, helping determine the shape and make-up of ancient supercontinents and providing a basis for paleogeography.
134
+
135
+ Current plate boundaries are defined by their seismicity.[71] Past plate boundaries within existing plates are identified from a variety of evidence, such as the presence of ophiolites that are indicative of vanished oceans.[72]
136
+
137
+ Tectonic motion is believed to have begun around 3 to 3.5 billion years ago.[73][74][why?]
138
+
139
+ Various types of quantitative and semi-quantitative information are available to constrain past plate motions. The geometric fit between continents, such as between west Africa and South America is still an important part of plate reconstruction. Magnetic stripe patterns provide a reliable guide to relative plate motions going back into the Jurassic period.[75] The tracks of hotspots give absolute reconstructions, but these are only available back to the Cretaceous.[76] Older reconstructions rely mainly on paleomagnetic pole data, although these only constrain the latitude and rotation, but not the longitude. Combining poles of different ages in a particular plate to produce apparent polar wander paths provides a method for comparing the motions of different plates through time.[77] Additional evidence comes from the distribution of certain sedimentary rock types,[78] faunal provinces shown by particular fossil groups, and the position of orogenic belts.[76]
140
+
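The statement that paleomagnetic poles constrain latitude but not longitude follows from the geocentric axial dipole assumption, under which the magnetic inclination recorded by a rock fixes the latitude at which it acquired its magnetization: tan(I) = 2 tan(latitude). A minimal sketch, with a purely hypothetical inclination value:

```python
import math

# Geocentric axial dipole relation: tan(inclination) = 2 * tan(paleolatitude).
def paleolatitude_deg(inclination_deg):
    """Latitude at which a rock was magnetized, given its measured inclination."""
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2.0))

# A measured inclination of 49 degrees implies a paleolatitude of about 30 degrees:
print(f"{paleolatitude_deg(49.0):.1f} degrees")
```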
141
+ The movement of plates has caused the formation and break-up of continents over time, including occasional formation of a supercontinent that contains most or all of the continents. The supercontinent Columbia or Nuna formed during a period of 2,000 to 1,800 million years ago and broke up about 1,500 to 1,300 million years ago.[79] The supercontinent Rodinia is thought to have formed about 1 billion years ago and to have embodied most or all of Earth's continents, and broken up into eight continents around 600 million years ago. The eight continents later re-assembled into another supercontinent called Pangaea; Pangaea broke up into Laurasia (which became North America and Eurasia) and Gondwana (which became the remaining continents).
142
+
143
+ The Himalayas, the world's tallest mountain range, are assumed to have been formed by the collision of two major plates. Before uplift, they were covered by the Tethys Ocean.
144
+
145
+ Depending on how they are defined, there are usually seven or eight "major" plates: African, Antarctic, Eurasian, North American, South American, Pacific, and Indo-Australian. The latter is sometimes subdivided into the Indian and Australian plates.
146
+
147
+ There are dozens of smaller plates, the seven largest of which are the Arabian, Caribbean, Juan de Fuca, Cocos, Nazca, Philippine Sea, and Scotia.
148
+
149
+ The current motion of the tectonic plates is today determined by remote sensing satellite data sets, calibrated with ground station measurements.
150
+
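Such present-day motions are usually expressed as rigid rotations about an Euler pole, so that the surface velocity of any point on a plate is v = ω × r. The sketch below shows the geometry of that calculation; the pole position, rotation rate, and site coordinates are placeholders, not values taken from any published plate-motion model.

```python
import numpy as np

EARTH_RADIUS_M = 6371.0e3

def plate_speed_mm_per_yr(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Speed of a point on a rigid plate rotating about an Euler pole (v = omega x r)."""
    def unit(lat, lon):
        lat, lon = np.radians(lat), np.radians(lon)
        return np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])

    omega = np.radians(rate_deg_per_myr) * unit(pole_lat, pole_lon)  # rad per Myr
    r = EARTH_RADIUS_M * unit(site_lat, site_lon)                    # metres
    v = np.cross(omega, r)                                           # metres per Myr
    return np.linalg.norm(v) * 1.0e-3                                # mm per year

# Placeholder Euler pole and site, purely for illustration:
print(f"{plate_speed_mm_per_yr(60.0, -80.0, 0.8, 0.0, -150.0):.1f} mm/yr")
```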
151
+ The appearance of plate tectonics on terrestrial planets is related to planetary mass, with planets more massive than Earth expected to exhibit plate tectonics. Earth may be a borderline case, owing its tectonic activity to abundant water[80] (silica and water form a deep eutectic).
152
+
153
+ Venus shows no evidence of active plate tectonics. There is debatable evidence of active tectonics in the planet's distant past; however, events taking place since then (such as the plausible and generally accepted hypothesis that the Venusian lithosphere has thickened greatly over the course of several hundred million years) have made constraining the course of its geologic record difficult. However, the numerous well-preserved impact craters have been utilized as a dating method to approximately date the Venusian surface (since there are thus far no known samples of Venusian rock to be dated by more reliable methods). Dates derived are dominantly in the range 500 to 750 million years ago, although ages of up to 1,200 million years ago have been calculated. This research has led to the fairly well accepted hypothesis that Venus has undergone an essentially complete volcanic resurfacing at least once in its distant past, with the last event taking place approximately within the range of estimated surface ages. While the mechanism of such an impressive thermal event remains a debated issue in Venusian geosciences, some scientists are advocates of processes involving plate motion to some extent.
154
+
155
+ One explanation for Venus's lack of plate tectonics is that on Venus temperatures are too high for significant water to be present.[81][82] The Earth's crust is soaked with water, and water plays an important role in the development of shear zones. Plate tectonics requires weak surfaces in the crust along which crustal slices can move, and it may well be that such weakening never took place on Venus because of the absence of water. However, some researchers[who?] remain convinced that plate tectonics is or was once active on this planet.
156
+
157
+ Mars is considerably smaller than Earth and Venus, and there is evidence for ice on its surface and in its crust.
158
+
159
+ In the 1990s, it was proposed that Martian Crustal Dichotomy was created by plate tectonic processes.[83] Scientists today disagree, and think that it was created either by upwelling within the Martian mantle that thickened the crust of the Southern Highlands and formed Tharsis[84] or by a giant impact that excavated the Northern Lowlands.[85]
160
+
161
+ Valles Marineris may be a tectonic boundary.[86]
162
+
163
+ Observations made of the magnetic field of Mars by the Mars Global Surveyor spacecraft in 1999 revealed patterns of magnetic striping on this planet. Some scientists interpreted these as requiring plate tectonic processes, such as seafloor spreading.[87] However, their data fail a "magnetic reversal test", which is used to see if they were formed by flipping polarities of a global magnetic field.[88]
164
+
165
+ Some of the satellites of Jupiter have features that may be related to plate-tectonic style deformation, although the materials and specific mechanisms may be different from plate-tectonic activity on Earth. On 8 September 2014, NASA reported finding evidence of plate tectonics on Europa, a satellite of Jupiter—the first sign of subduction activity on a world other than Earth.[89]
166
+
167
+ Titan, the largest moon of Saturn, was reported to show tectonic activity in images taken by the Huygens probe, which landed on Titan on January 14, 2005.[90]
168
+
169
+ On Earth-sized planets, plate tectonics is more likely if there are oceans of water. However, in 2007, two independent teams of researchers came to opposing conclusions about the likelihood of plate tectonics on larger super-Earths,[91][92] with one team saying that plate tectonics would be episodic or stagnant[93] and the other team saying that plate tectonics is very likely on super-Earths even if the planet is dry.[80]
170
+
171
+ Consideration of plate tectonics is a part of the search for extraterrestrial intelligence and extraterrestrial life.[94]
172
+
173
+ Videos
en/4664.html.txt ADDED
@@ -0,0 +1,131 @@
1
+ Platelets, also called thrombocytes (from Greek θρόμβος, "clot" and κύτος, "cell"), are a component of blood whose function (along with the coagulation factors) is to react to bleeding from blood vessel injury by clumping, thereby initiating a blood clot.[1] Platelets have no cell nucleus: they are fragments of cytoplasm that are derived from the megakaryocytes[2] of the bone marrow, which then enter the circulation. Circulating unactivated platelets are biconvex discoid (lens-shaped) structures,[3][4]:117–18 2–3 µm in greatest diameter.[5] Activated platelets have cell membrane projections covering their surface. Platelets are found only in mammals, whereas in other vertebrates (e.g. birds, amphibians), thrombocytes circulate as intact mononuclear cells.[4]:3
2
+
3
+ On a stained blood smear, platelets appear as dark purple spots, about 20% the diameter of red blood cells. The smear is used to examine platelets for size, shape, qualitative number, and clumping. A healthy adult typically has 10 to 20 times more red blood cells than platelets. One major function of platelets is to contribute to hemostasis: the process of stopping bleeding at the site of interrupted endothelium. They gather at the site and, unless the interruption is physically too large, they plug the hole.
4
+ First, platelets attach to substances outside the interrupted endothelium: adhesion. Second, they change shape, turn on receptors and secrete chemical messengers: activation. Third, they connect to each other through receptor bridges: aggregation.[6] Formation of this platelet plug (primary hemostasis) is associated with activation of the coagulation cascade, with resultant fibrin deposition and linking (secondary hemostasis). These processes may overlap: the spectrum is from a predominantly platelet plug, or "white clot" to a predominantly fibrin, or "red clot" or the more typical mixture.
5
+ Some would add the subsequent retraction and platelet inhibition as fourth and fifth steps to the completion of the process[7] and still others would add a sixth step, wound repair. Platelets also participate in both innate[8] and adaptive[9] intravascular immune responses.
6
+ The platelet cell membrane has receptors for collagen. Following the rupture of the blood vessel wall, the platelets are exposed and they adhere to the collagen in the surrounding connective tissue.
7
+
8
+ Low platelet concentration is called thrombocytopenia, and is due to either decreased production or increased destruction. Elevated platelet concentration is called thrombocytosis, and is either congenital, reactive (to cytokines), or due to unregulated production: one of the myeloproliferative neoplasms or certain other myeloid neoplasms. A disorder of platelet function is a thrombocytopathy.
9
+
10
+ Normal platelets can respond to an abnormality on the vessel wall rather than to hemorrhage, resulting in inappropriate platelet adhesion/activation and thrombosis: the formation of a clot within an intact vessel. This type of thrombosis arises by mechanisms different from those of a normal clot: namely, extending the fibrin of venous thrombosis; extending an unstable or ruptured arterial plaque, causing arterial thrombosis; and microcirculatory thrombosis. An arterial thrombus may partially obstruct blood flow, causing downstream ischemia, or may completely obstruct it, causing downstream tissue death.
11
+
12
+ Platelet concentration is measured either manually using a hemocytometer, or by placing blood in an automated platelet analyzer using electrical impedance, such as a Coulter counter.[10] The normal range (99% of population analyzed) for platelets in healthy Caucasians is 150,000 to 450,000 per cubic millimeter[11] (a mm3 equals a microliter), or 150–450 × 10^9 per liter. The normal range has been confirmed to be the same in the elderly[12] and Spanish populations.[13]
13
+
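For the manual hemocytometer method just mentioned, the raw count is converted to a concentration by correcting for the dilution and for the chamber volume actually examined. The sketch below shows the arithmetic; the dilution factor, counted volume, and cell count are typical assumed values for a Neubauer-type chamber, not figures prescribed by this article.

```python
# Hypothetical manual platelet count with a Neubauer-type counting chamber.
def platelets_per_microliter(cells_counted, dilution_factor, counted_volume_ul):
    """Concentration = cells counted * dilution factor / volume examined (in microliters)."""
    return cells_counted * dilution_factor / counted_volume_ul

# e.g. 180 platelets counted over a 1 mm x 1 mm area at 0.1 mm chamber depth
# (0.1 microliters) after a 1:100 dilution of the blood sample:
print(platelets_per_microliter(180, 100, 0.1))  # 180000.0 per microliter, within 150,000-450,000
```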
14
+ The number of platelets varies across individuals. The normal physiologic range is 200,000 to 500,000 per microliter of blood.
15
+ Since platelets carry receptors for thrombopoietin (the protein, produced in the liver and kidneys, that facilitates the maturation of megakaryocytes and the release of platelets), a higher number of circulating platelets binds and removes more of the protein, while a lower number leaves more of it free. This feedback means that when platelets are consumed, for example during blood clotting, more thrombopoietin remains available to stimulate the production of new platelets.
16
+
17
+ In a first approximation, the platelet shape can be considered similar to an oblate spheroid, with a semiaxis ratio of 2 to 8.[14] This approximation is often used to model the hydrodynamic and optical properties of a platelet population, as well as to restore the geometric parameters of individual measured platelets by flow cytometry.[15] More accurate biophysical models of the platelet surface morphology, which model its shape from first principles, make it possible to obtain a more realistic platelet geometry in the resting and activated states.[16]
18
+
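As a small worked example of this oblate-spheroid approximation, a platelet's volume can be estimated from its greatest diameter and the semiaxis ratio. The specific diameter and ratio used below are assumptions chosen from within the ranges quoted above.

```python
import math

def oblate_spheroid_volume_um3(greatest_diameter_um, semiaxis_ratio):
    """Volume (4/3) * pi * a^2 * c of a platelet modelled as an oblate spheroid."""
    a = greatest_diameter_um / 2.0   # equatorial semiaxis, micrometres
    c = a / semiaxis_ratio           # polar semiaxis, micrometres
    return (4.0 / 3.0) * math.pi * a * a * c

# A 3 micrometre diameter platelet with a semiaxis ratio of 2:
print(f"{oblate_spheroid_volume_um3(3.0, 2.0):.1f} cubic micrometres")  # about 7.1
```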
19
+ Structurally the platelet can be divided into four zones, from peripheral to innermost:
20
+
21
+ An overview summarizing platelet dynamics, the complex process of converting inactive platelets into a platelet plug, is essential. Complicating any verbal description is the fact that at least 193 proteins and 301 interactions are involved in platelet dynamics. The separation of platelet dynamics into three stages is useful in this regard, but it is artificial: in fact, each stage is initiated in rapid succession, and each continues until the trigger for that stage is no longer present, so there is overlap.[6]
22
+
23
+ Thrombus formation on an intact endothelium is prevented by nitric oxide,[19] prostacyclin,[20] and CD39.[21]
24
+
25
+ Endothelial cells are attached to the subendothelial collagen by von Willebrand factor (VWF), which these cells produce. VWF is also stored in the Weibel-Palade bodies of the endothelial cells and secreted constitutively into the blood. Platelets store vWF in their alpha granules.
26
+
27
+ When the endothelial layer is disrupted, collagen and VWF anchor platelets to the subendothelium. Platelet GP1b-IX-V receptor binds with VWF; and GPVI receptor and integrin α2β1 bind with collagen.[22]
28
+
29
+ The intact endothelial lining inhibits platelet activation by producing nitric oxide, endothelial-ADPase, and PGI2 (Prostacyclin). Endothelial-ADPase degrades the platelet activator ADP.
30
+
31
+ Resting platelets maintain active calcium efflux via a cyclic AMP-activated calcium pump. Intracellular calcium concentration determines platelet activation status, as it is the second messenger that drives platelet conformational change and degranulation (see below). Endothelial prostacyclin binds to prostanoid receptors on the surface of resting platelets. This event stimulates the coupled Gs protein to increase adenylate cyclase activity and increases the production of cAMP, further promoting the efflux of calcium and reducing intracellular calcium availability for platelet activation.
32
+
33
+ ADP on the other hand binds to purinergic receptors on the platelet surface. Since the thrombocytic purinergic receptor P2Y12 is coupled to Gi proteins, ADP reduces platelet adenylate cyclase activity and cAMP production, leading to accumulation of calcium inside the platelet by inactivating the cAMP calcium efflux pump. The other ADP-receptor P2Y1 couples to Gq that activates phospholipase C-beta 2 (PLCB2), resulting in inositol 1,4,5-trisphosphate (IP3) generation and intracellular release of more calcium. This together induces platelet activation. Endothelial ADPase degrades ADP and prevents this from happening. Clopidogrel and related antiplatelet medications also work as purinergic receptor P2Y12 antagonists.
34
+
35
+ Platelet activation begins seconds after adhesion occurs. It is triggered when collagen from the subendothelium binds with its receptors (GPVI receptor and integrin α2β1) on the platelet. GPVI is associated with the Fc receptor gamma chain and leads via the activation of a tyrosine kinase cascade finally to the activation of PLC-gamma2 (PLCG2) and more calcium release.
36
+
37
+ Tissue factor also binds to factor VII in the blood, which initiates the extrinsic coagulation cascade to increase thrombin production. Thrombin is a potent platelet activator, acting through Gq and G12. These are G protein coupled receptors and they turn on calcium-mediated signaling pathways within the platelet, overcoming the baseline calcium efflux. Families of three G proteins (Gq, Gi, G12) operate together for full activation. Thrombin also promotes secondary fibrin-reinforcement of the platelet plug. Platelet activation in turn degranulates and releases factor V and fibrinogen, potentiating the coagulation cascade. So, in reality, the process of platelet plugging and coagulation are occurring simultaneously rather than sequentially, with each inducing the other to form the final fibrin-crosslinked thrombus.
38
+
39
+ Collagen-mediated GPVI signalling increases the platelet production of thromboxane A2 (TXA2) and decreases the production of prostacyclin. This occurs by altering the metabolic flux of platelet's eicosanoid synthesis pathway, which involves enzymes phospholipase A2, cyclo-oxygenase 1, and thromboxane-A synthase. Platelets secrete thromboxane A2, which acts on the platelet's own thromboxane receptors on the platelet surface (hence the so-called "out-in" mechanism), and those of other platelets. These receptors trigger intraplatelet signaling, which converts GPIIb/IIIa receptors to their active form to initiate aggregation.[6]
40
+
41
+ Platelets contain dense granules, lambda granules and alpha granules. Activated platelets secrete the contents of these granules through their canalicular systems to the exterior. Simplistically, bound and activated platelets degranulate to release platelet chemotactic agents to attract more platelets to the site of endothelial injury. Granule characteristics:
42
+
43
+ As shown by flow cytometry and electron microscopy, the most sensitive sign of activation, when platelets are exposed to ADP, is morphological change.[23] Mitochondrial hyperpolarization is a key event in initiating changes in morphology.[24] Intraplatelet calcium concentration increases, stimulating the interplay within the microtubule/actin filament complex. The continuous changes in shape from the unactivated to the fully activated platelet are best seen on scanning electron microscopy. Three steps along this path are named early dendritic, early spread and spread. The surface of the unactivated platelet looks very similar to the surface of the brain, with a wrinkled appearance from numerous shallow folds that increase the surface area; the early dendritic platelet resembles an octopus with multiple arms and legs; the early spread platelet, an uncooked frying egg in a pan, the "yolk" being the central body; and the spread platelet, a cooked fried egg with a denser central body.
44
+
45
+ These changes are all brought about by the interaction of the microtubule/actin complex with the platelet cell membrane and open canalicular system (OCS), which is an extension and invagination of that membrane. This complex runs just beneath these membranes and is the chemical motor which literally pulls the invaginated OCS out of the interior of the platelet, like turning pants pockets inside out, creating the dendrites. This process is similar to the mechanism of contraction in a muscle cell.[25] The entire OCS thus becomes indistinguishable from the initial platelet membrane as it forms the "fried egg". This dramatic increase in surface area comes about with neither stretching nor adding phospholipids to the platelet membrane.[26]
46
+
47
+ Platelet activation causes its membrane surface to become negatively charged. One of the signaling pathways turns on scramblase, which moves negatively charged phospholipids from the inner to the outer platelet membrane surface. These phospholipids then bind the tenase and prothrombinase complexes, two of the sites of interplay between platelets and the coagulation cascade. Calcium ions are essential for the binding of these coagulation factors.
48
+
49
+ In addition to interacting with vWF and fibrin, platelets interact with thrombin, Factors X, Va, VIIa, XI, IX, and prothrombin to complete formation via the coagulation cascade.[27][28]
50
+ Six studies suggested that platelets express tissue factor; the definitive study shows they do not.[27] Rat platelets, however, have been conclusively shown to express tissue factor protein and to carry both tissue factor pre-mRNA and mature mRNA.[29]
51
+
52
+ Aggregation begins minutes after activation, and occurs as a result of turning on the GPIIb/IIIa receptor, allowing these receptors to bind with vWF or fibrinogen.[6] There are around 60 000 of these receptors per platelet.[30] When any one or more of at least nine different platelet surface receptors are turned on during activation, intraplatelet signaling pathways cause existing GpIIb/IIIa receptors to change shape – curled to straight – and thus become capable of binding.[6]
53
+
54
+ Since fibrinogen is a rod-like protein with nodules on either end capable of binding GPIIb/IIIa, activated platelets with exposed GPIIb/IIIa can bind fibrinogen to aggregate. GPIIb/IIIa may also further anchor the platelets to subendothelial vWF for additional structural stabilisation.
55
+
56
+ Classically it was thought that this was the only mechanism involved in aggregation, but three new mechanisms have been identified which can initiate aggregation, depending on the velocity of blood flow (i.e. shear range).[31]
57
+
58
+ The blood clot is only a temporary solution to stop bleeding; tissue repair is needed. Small interruptions in the endothelium are handled by physiological mechanisms; large interruptions by the trauma surgeon.[32]
59
+ The fibrin is slowly dissolved by the fibrinolytic enzyme, plasmin, and the platelets are cleared by phagocytosis.[33]
60
+
61
+ Platelets have a central role in innate immunity, initiating and participating in multiple inflammatory processes, directly binding pathogens and even destroying them. This is supported by clinical data showing that many patients with serious bacterial or viral infections have thrombocytopenia, which reduces the platelets' contribution to inflammation. Platelet-leukocyte aggregates (PLAs) found in the circulation are also typical in sepsis or inflammatory bowel disease, showing the connection between thrombocytes and immune cells sensu stricto.[34]
62
+
63
+ As hemostasis is a basic function of thrombocytes in mammals, it also has its uses in possible infection confinement.[8] In case of injury, platelets, together with the coagulation cascade, form the first line of defense by forming a blood clot. Thus, hemostasis and host defense were intertwined in evolution. For example, in the Atlantic horseshoe crab (living fossil estimated to be over 400 million years old), the only blood cell type, the amebocyte, facilitates both the hemostatic function and the encapsulation and phagocytosis of pathogens by means of exocytosis of intracellular granules containing bactericidal defense molecules. Blood clotting supports the immune function by trapping the pathogenic bacteria within.[35]
64
+
65
+ Although thrombosis, blood coagulation in intact blood vessels, is usually viewed as a pathological immune response, leading to obturation of the lumen of the blood vessel and subsequent hypoxic tissue damage, in some cases directed thrombosis, called immunothrombosis, can locally control the spread of the infection. The thrombosis is coordinated by platelets, neutrophils and monocytes. The process is initiated either by immune cells sensu stricto activating their pattern recognition receptors (PRRs), or by platelet-bacterial binding. Platelets can bind to bacteria either directly through thrombocytic PRRs[34] and bacterial surface proteins, or via plasma proteins that bind both to platelets and bacteria.[36] Monocytes respond to bacterial pathogen-associated molecular patterns (PAMPs) or damage-associated molecular patterns (DAMPs) by activating the extrinsic pathway of coagulation. Neutrophils facilitate blood coagulation by NETosis. In turn, platelets facilitate neutrophils' NETosis. NETs bind tissue factor, binding the coagulation centres to the location of infection. They also activate the intrinsic coagulation pathway by providing their negatively charged surface to factor XII. Other neutrophil secretions, such as proteolytic enzymes, which cleave coagulation inhibitors, also bolster the process.[8]
66
+
67
+ If the regulation of immunothrombosis becomes imbalanced, the process can quickly turn aberrant. Regulatory defects in immunothrombosis are suspected to be a major factor in pathological thrombosis in many forms, such as disseminated intravascular coagulation (DIC) or deep vein thrombosis. DIC in sepsis is a prime example of both a dysregulated coagulation process and an undue systemic inflammatory response, resulting in a multitude of microthrombi of similar composition to that of physiological immunothrombosis: fibrin, platelets, neutrophils and NETs.[8]
68
+
69
+ Platelets are rapidly deployed to sites of injury or infection, and potentially modulate inflammatory processes by interacting with leukocytes and by secreting cytokines, chemokines and other inflammatory mediators.[37][38][39][40][41]
70
+ Platelets also secrete platelet-derived growth factor (PDGF).
71
+
72
+ Platelets modulate neutrophils by forming platelet-leukocyte aggregates (PLAs). These formations induce upregulated production of αMβ2 (Mac-1) integrin in neutrophils. Interaction with PLAs also induces degranulation and increased phagocytosis in neutrophils. Platelets are also the largest source of soluble CD40L, which induces production of reactive oxygen species (ROS), upregulates expression of adhesion molecules such as E-selectin, ICAM-1 and VCAM-1 in neutrophils, activates macrophages, and activates the cytotoxic response in T and B lymphocytes.[34]
73
+
74
+ Recently, the dogma that mammalian platelets, lacking a nucleus, are incapable of autonomous locomotion was overturned.[42] In fact, platelets are active scavengers, scaling the walls of blood vessels and reorganising the thrombus. They are able to recognize and adhere to many surfaces, including bacteria. They are even able to fully envelop bacteria in their open canalicular system (OCS), which has led to the proposal that the process be called "covercytosis" rather than phagocytosis, as the OCS is merely an invagination of the outer plasma membrane. These platelet-bacteria bundles are then used as an interaction platform for neutrophils, which destroy the bacteria using NETosis and phagocytosis.
75
+
76
+ Platelets also participate in chronic inflammatory diseases, such as synovitis or rheumatoid arthritis.[43] Platelets are activated by the collagen receptor glycoprotein VI (GPVI). Proinflammatory platelet microvesicles trigger constant cytokine secretion from neighboring fibroblast-like synoviocytes, most prominently IL-6 and IL-8. Inflammatory damage to the surrounding extracellular matrix continually reveals more collagen, maintaining the microvesicle production.
77
+
78
+ Activated platelets are able to participate in adaptive immunity, interacting with antibodies. They are able to specifically bind IgG through FcγRIIA, the receptor for the constant fragment (Fc) of IgG. When activated and bound to IgG-opsonised bacteria, the platelets subsequently release reactive oxygen species (ROS), antimicrobial peptides, defensins, kinocidins and proteases, killing the bacteria directly.[44] Platelets also secrete proinflammatory and procoagulant mediators such as inorganic polyphosphates or platelet factor 4 (PF4), connecting innate and adaptive immune responses.[44][45]
79
+
80
+ Spontaneous and excessive bleeding can occur because of platelet disorders. This bleeding can be caused by deficient numbers of platelets, dysfunctional platelets, or very excessive numbers of platelets: over 1.0 million/microliter. (The excessive numbers create a relative von Willebrand factor deficiency due to sequestration.)[46][47]
81
+
82
+ One can get a clue as to whether bleeding is due to a platelet disorder or a coagulation factor disorder by the characteristics and location of the bleeding.[4]:815, Table 39–4 All of the following suggest platelet bleeding, not coagulation bleeding: the bleeding from a skin cut such as a razor nick is prompt and excessive, but can be controlled by pressure; spontaneous bleeding into the skin which causes a purplish stain named by its size: petechiae, purpura, ecchymoses; bleeding into mucous membranes causing bleeding gums, nose bleed, and gastrointestinal bleeding; menorrhagia; and intraretinal and intracranial bleeding.
83
+
84
+ Excessive numbers of platelets, and/or normal platelets responding to abnormal vessel walls, can result in venous thrombosis and arterial thrombosis. The symptoms depend on the site of thrombosis.
85
+
86
+ Bleeding time was first developed as a test of platelet function by Duke in 1910.[48] Duke's test measured the time taken for bleeding to stop from a standardized wound in the ear lobe which was blotted every 30 seconds. The normal time for bleeding to stop was less than 3 minutes.[49] More modern techniques are now used. A normal bleeding time reflects sufficient platelet numbers and function plus normal microvascular function.
87
+
88
+ In the Multiplate analyzer, anticoagulated whole blood is mixed with saline and a platelet agonist in a single-use cuvette with two pairs of electrodes. The increase in impedance between the electrodes, as platelets aggregate onto them, is measured and visualized as a curve.[citation needed]
89
+
90
+ The PFA-100 (Platelet Function Assay-100) is a system for analysing platelet function in which citrated whole blood is aspirated through a disposable cartridge containing an aperture within a membrane coated with either collagen and epinephrine or collagen and ADP. These agonists induce platelet adhesion, activation and aggregation, leading to rapid occlusion of the aperture and cessation of blood flow, termed the closure time (CT). An elevated CT with EPI and collagen can indicate intrinsic defects such as von Willebrand disease, uremia, or circulating platelet inhibitors. The follow-up test involving collagen and ADP is used to indicate whether the abnormal CT with collagen and EPI was caused by the effects of acetylsalicylic acid (aspirin) or medications containing inhibitors.[50]
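+
+ As a rough, non-authoritative sketch of that two-step reading (not the instrument's actual algorithm), the following Python snippet encodes the interpretation described above; the cut-off values and the rule that a normal collagen/ADP result points to an aspirin-like inhibitor are illustrative assumptions, not laboratory reference ranges.
+
+ def interpret_pfa100(ct_col_epi_s, ct_col_adp_s, epi_cutoff_s=165, adp_cutoff_s=120):
+     """Toy interpretation of PFA-100 closure times (CT, in seconds).
+     Cut-offs are placeholders; real reference ranges vary by laboratory."""
+     if ct_col_epi_s <= epi_cutoff_s:
+         return "Collagen/EPI CT normal: no platelet-function defect detected by this screen"
+     if ct_col_adp_s <= adp_cutoff_s:
+         return "Collagen/EPI CT prolonged, collagen/ADP CT normal: consistent with aspirin or a similar inhibitor"
+     return "Both CTs prolonged: suggests another defect, e.g. von Willebrand disease or uremia"
+
+ print(interpret_pfa100(ct_col_epi_s=210, ct_col_adp_s=95))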
91
+
92
+ Adapted from:[4]:vii
93
+
94
+ The three broad categories of platelet disorders are "not enough"; "dysfunctional"; and "too many".[4]:vii
95
+
96
+ Some drugs used to treat inflammation have the unwanted side effect of suppressing normal platelet function. These are the non-steroidal anti-inflammatory drugs (NSAIDs). Aspirin irreversibly disrupts platelet function by inhibiting cyclooxygenase-1 (COX1), and hence normal hemostasis. The affected platelets are unable to produce new cyclooxygenase because they lack a cell nucleus. Normal platelet function will not return until the use of aspirin has ceased and enough of the affected platelets have been replaced by new ones, which can take over a week. Ibuprofen, another NSAID, does not have such a long-lasting effect, with platelet function usually returning within 24 hours,[57] and taking ibuprofen before aspirin prevents the irreversible effects of aspirin.[58]
97
+
98
+ These drugs are used to prevent thrombus formation.
99
+
100
+ Platelet transfusion is most frequently used to correct unusually low platelet counts, either to prevent spontaneous bleeding (typically at counts below 10×10⁹/L) or in anticipation of medical procedures that will necessarily involve some bleeding. For example, in patients undergoing surgery, a level below 50×10⁹/L is associated with abnormal surgical bleeding, and regional anaesthetic procedures such as epidurals are avoided for levels below 80×10⁹/L.[59] Platelets may also be transfused when the platelet count is normal but the platelets are dysfunctional, such as when an individual is taking aspirin or clopidogrel.[60] Finally, platelets may be transfused as part of a massive transfusion protocol, in which the three major blood components (red blood cells, plasma, and platelets) are transfused to address severe hemorrhage. Platelet transfusion is contraindicated in thrombotic thrombocytopenic purpura (TTP), as it fuels the coagulopathy.
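+
+ A minimal Python sketch of the count thresholds quoted above, purely to make the numbers concrete; the function and its output labels are invented for illustration, counts are in ×10⁹/L, and this is not clinical guidance.
+
+ PROPHYLAXIS_THRESHOLD = 10   # spontaneous bleeding typically prevented below this count
+ SURGERY_THRESHOLD = 50       # abnormal surgical bleeding associated with counts below this
+ NEURAXIAL_THRESHOLD = 80     # epidurals generally avoided below this
+
+ def threshold_flags(platelet_count):
+     """Report which of the quoted thresholds a given count falls below."""
+     return {
+         "below_prophylaxis_threshold": platelet_count < PROPHYLAXIS_THRESHOLD,
+         "below_surgical_threshold": platelet_count < SURGERY_THRESHOLD,
+         "below_neuraxial_threshold": platelet_count < NEURAXIAL_THRESHOLD,
+     }
+
+ print(threshold_flags(35))
+ # {'below_prophylaxis_threshold': False, 'below_surgical_threshold': True, 'below_neuraxial_threshold': True}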
101
+
102
+ In 2018, Estcourt et al. carried out a series of Cochrane reviews on the use of platelet transfusions for people undergoing surgery. The first, a 2018 Cochrane review of randomised controlled trials, assessed the safety and effectiveness of prophylactic platelet transfusions prior to surgery in adults with a low platelet count. The participants had not previously received treatment for the low platelet count and had no history of bleeding events; the included people suffered from chronic diseases or haematological malignancies. The exact inclusion and exclusion criteria, and information regarding the dose of the intervention, can be found in the original Cochrane review. Estcourt et al. conducted three different analyses. The first analysis compared prophylactic transfusion with no transfusion: the evidence is very uncertain about the effect of prophylactic platelet transfusions on all-cause mortality up to 30 days after surgery, the number of participants with major bleeding within 7 days of surgery, the number of participants with minor surgery-related bleeding up to 7 days, and serious surgery-related adverse events occurring within 30 days. The second analysis compared prophylactic platelet transfusions with alternative treatments: prophylactic platelet transfusions may have little to no effect on the number of participants with major bleeding up to 7 days after surgery, the number of participants with minor procedure-related bleeding within 7 days after surgery, and transfusion-related serious adverse events within 24 hours, but the evidence is very uncertain. The last analysis compared different thresholds for deciding whether participants received a platelet transfusion: the evidence is very uncertain about the effect of different transfusion thresholds on the number of participants with major bleeding within 7 days of surgery and the number of participants with minor procedure-related bleeding within 7 days.[61]
103
+
104
+ Furthermore, they conducted another Cochrane review in 2018, comparing retrospective trials, to determine the effect of platelet transfusions prior to a lumbar puncture or epidural anaesthesia in participants with thrombocytopenia. There was no age restriction, and the participants additionally suffered from leukaemia or other haematological malignancies. People were excluded from study participation if they had already been diagnosed with a coagulopathy or had a history of bleeding events. Estcourt et al. conducted one analysis, comparing platelet transfusion with no platelet transfusion: the evidence is very uncertain about the effect of platelet transfusions prior to lumbar puncture on major procedure-related bleeding within 24 hours and on procedure-related complications up to 7 days after the procedure.[62]
105
+
106
+ Platelets are either isolated from collected units of whole blood and pooled to make a therapeutic dose, or collected by platelet apheresis: blood is taken from the donor, passed through a device which removes the platelets, and the remainder is returned to the donor in a closed loop. The industry standard is for platelets to be tested for bacteria before transfusion to avoid septic reactions, which can be fatal. Recently, the AABB Industry Standards for Blood Banks and Transfusion Services (5.1.5.1) has allowed for the use of pathogen reduction technology as an alternative to bacterial screening of platelets.[63]
107
+
108
+ Pooled whole-blood platelets, sometimes called "random" platelets, are separated by one of two methods.[64] In the US, a unit of whole blood is placed into a large centrifuge in what is referred to as a "soft spin". At these settings, the platelets remain suspended in the plasma. The platelet-rich plasma (PRP) is removed from the red cells, then centrifuged at a faster setting to harvest the platelets from the plasma. In other regions of the world, the unit of whole blood is centrifuged using settings that cause the platelets to become suspended in the "buffy coat" layer, which includes the platelets and the white blood cells. The "buffy coat" is isolated in a sterile bag, suspended in a small amount of red blood cells and plasma, then centrifuged again to separate the platelets and plasma from the red and white blood cells. Regardless of the initial method of preparation, multiple donations may be combined into one container using a sterile connection device to manufacture a single product with the desired therapeutic dose.
109
+
110
+ Apheresis platelets are collected using a mechanical device that draws blood from the donor and centrifuges the collected blood to separate out the platelets and other components to be collected. The remaining blood is returned to the donor. The advantage to this method is that a single donation provides at least one therapeutic dose, as opposed to the multiple donations for whole-blood platelets. This means that a recipient is not exposed to as many different donors and has less risk of transfusion-transmitted disease and other complications. Sometimes a person such as a cancer patient who requires routine transfusions of platelets will receive repeated donations from a specific donor to further minimize the risk. Pathogen reduction of platelets using, for example, riboflavin and UV light treatments can also be carried out to reduce the infectious load of pathogens contained in donated blood products, thereby reducing the risk of transmission of transfusion-transmitted diseases.[65][66] Another photochemical treatment process utilizing amotosalen and UVA light has been developed for the inactivation of viruses, bacteria, parasites, and leukocytes that can contaminate blood components intended for transfusion.[67] In addition, apheresis platelets tend to contain fewer contaminating red blood cells because the collection method is more efficient than "soft spin" centrifugation at isolating the desired blood component.
111
+
112
+ Platelets collected by either method have a very short shelf life, typically five days. This results in frequent problems with short supply, as testing the donations often requires up to a full day. Since there are no effective preservative solutions for platelets, they lose potency quickly and are best when fresh.
113
+
114
+ Platelets are stored under constant agitation at 20–24 °C (68–75.2 °F). Units cannot be refrigerated as this causes platelets to change shape and lose function. Storage at room temperature provides an environment where any bacteria that are introduced to the blood component during the collection process may proliferate and subsequently cause bacteremia in the patient. Regulations are in place in the United States that require products to be tested for the presence of bacterial contamination before transfusion.[68]
115
+
116
+ Platelets do not need to belong to the same A-B-O blood group as the recipient or be cross-matched to ensure immune compatibility between donor and recipient unless they contain a significant amount of red blood cells (RBCs). The presence of RBCs imparts a reddish-orange color to the product, and is usually associated with whole-blood platelets. An effort is sometimes made to issue type specific platelets, but this is not critical as it is with RBCs.
117
+
118
+ Prior to issuing platelets to the recipient, they may be irradiated to prevent transfusion-associated graft versus host disease or they may be washed to remove the plasma if indicated.
119
+
120
+ The change in the recipient's platelet count after transfusion is termed the "increment" and is calculated by subtracting the pre-transfusion platelet count from the post-transfusion platelet count. Many factors affect the increment including the recipient's body size, the number of platelets transfused, and clinical features that may cause premature destruction of the transfused platelets. When recipients fail to demonstrate an adequate post-transfusion increment, this is termed platelet transfusion refractoriness.
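+
+ The increment is a simple subtraction; a short Python illustration follows (the example numbers are invented, and counts are in ×10⁹/L):
+
+ def increment(pre_count, post_count):
+     """Post-transfusion platelet count minus pre-transfusion count."""
+     return post_count - pre_count
+
+ print(increment(pre_count=8, post_count=35))  # 27; persistently inadequate increments are termed refractoriness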
121
+
122
+ Platelets, either apheresis-derived or random-donor, can be processed through a volume reduction process. In this process, the platelets are spun in a centrifuge and the excess plasma is removed, leaving 10 to 100 mL of platelet concentrate. Such volume-reduced platelets are normally transfused only to neonatal and pediatric patients, when a large volume of plasma could overload the child's small circulatory system. The lower volume of plasma also reduces the chances of an adverse transfusion reaction to plasma proteins.[69] Volume reduced platelets have a shelf life of only four hours.[70]
123
+
124
+ Platelets release platelet-derived growth factor (PDGF), a potent chemotactic agent; and TGF beta, which stimulates the deposition of extracellular matrix; fibroblast growth factor, insulin-like growth factor 1, platelet-derived epidermal growth factor, and vascular endothelial growth factor. Local application of these factors in increased concentrations through platelet-rich plasma (PRP) is used as an adjunct in wound healing.[71]
125
+
126
+ Instead of having platelets, non-mammalian vertebrates have nucleated thrombocytes, which resemble B lymphocytes in morphology. They aggregate in response to thrombin, but not to ADP, serotonin, or adrenaline, as platelets do.[72][73]
127
+
128
+ The term thrombocyte (clot cell) came into use in the early 1900s and is sometimes used as a synonym for platelet; but not generally in the scientific literature, except as a root word for other terms related to platelets (e.g. thrombocytopenia meaning low platelets).[4]:v3 The term thrombocytes is proper for mononuclear cells found in the blood of non-mammalian vertebrates: they are the functional equivalent of platelets, but circulate as intact cells rather than cytoplasmic fragments of bone marrow megakaryocytes.[4]:3
129
+
130
+ In some contexts, the word thrombus is used interchangeably with the word clot, regardless of its composition (white, red, or mixed). In other contexts it is used to contrast a normal from an abnormal clot: thrombus arises from physiologic hemostasis, thrombosis arises from a pathologic and excessive quantity of clot.[84] In a third context it is used to contrast the result from the process: thrombus is the result, thrombosis is the process.
131
+
en/4665.html.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ Plasma or plasm may refer to:
en/4666.html.txt ADDED
@@ -0,0 +1,113 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Platinum is a chemical element with the symbol Pt and atomic number 78. It is a dense, malleable, ductile, highly unreactive, precious, silverish-white transition metal. Its name is derived from the Spanish term platino, meaning "little silver".[3][4]
4
+
5
+ Platinum is a member of the platinum group of elements and group 10 of the periodic table of elements. It has six naturally occurring isotopes. It is one of the rarer elements in Earth's crust, with an average abundance of approximately 5 μg/kg. It occurs in some nickel and copper ores along with some native deposits, mostly in South Africa, which accounts for 80% of the world production. Because of its scarcity in Earth's crust, only a few hundred tonnes are produced annually, and given its important uses, it is highly valuable and is a major precious metal commodity.[5]
6
+
7
+ Platinum is one of the least reactive metals. It has remarkable resistance to corrosion, even at high temperatures, and is therefore considered a noble metal. Consequently, platinum is often found chemically uncombined as native platinum. Because it occurs naturally in the alluvial sands of various rivers, it was first used by pre-Columbian South American natives to produce artifacts. It was referenced in European writings as early as the 16th century, but it was not until Antonio de Ulloa published a report on a new metal of Colombian origin in 1748 that it began to be investigated by scientists.
8
+
9
+ Platinum is used in catalytic converters, laboratory equipment, electrical contacts and electrodes, platinum resistance thermometers, dentistry equipment, and jewelry. Being a heavy metal, it leads to health problems upon exposure to its salts; but due to its corrosion resistance, metallic platinum has not been linked to adverse health effects.[6] Compounds containing platinum, such as cisplatin, oxaliplatin and carboplatin, are applied in chemotherapy against certain types of cancer.[7]
10
+
11
+ As of 2020, the value of platinum is around $32.00 per gram ($1,000 per troy ounce).[8]
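+
+ A quick consistency check of those two figures, assuming the standard conversion of 31.1035 grams per troy ounce:
+
+ price_per_gram_usd = 32.00
+ grams_per_troy_ounce = 31.1035
+ print(round(price_per_gram_usd * grams_per_troy_ounce))  # 995 USD, i.e. about $1,000 per troy ounce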
12
+
13
+ Pure platinum is a lustrous, ductile, and malleable silver-white metal.[9] Platinum is more ductile than gold, silver or copper, thus being the most ductile of pure metals, but it is less malleable than gold.[10][11] The metal has excellent resistance to corrosion, is stable at high temperatures and has stable electrical properties. Platinum does oxidize, forming PtO2, at 500 °C; this oxide can be easily removed thermally.[12] It reacts vigorously with fluorine at 500 °C (932 °F) to form platinum tetrafluoride.[13] It is also attacked by chlorine, bromine, iodine, and sulfur. Platinum is insoluble in hydrochloric and nitric acid, but dissolves in hot aqua regia (a mixture of nitric and hydrochloric acids) to form chloroplatinic acid, H2PtCl6.[14]
14
+
15
+ Its physical characteristics and chemical stability make it useful for industrial applications.[15] Its resistance to wear and tarnish is well suited to use in fine jewellery.
16
+
17
+ The most common oxidation states of platinum are +2 and +4. The +1 and +3 oxidation states are less common, and are often stabilized by metal bonding in bimetallic (or polymetallic) species. As is expected, tetracoordinate platinum(II) compounds tend to adopt 16-electron square planar geometries. Although elemental platinum is generally unreactive, it dissolves in hot aqua regia to give aqueous chloroplatinic acid (H2PtCl6):[16]
18
+
19
+ As a soft acid, platinum has a great affinity for sulfur, such as on dimethyl sulfoxide (DMSO); numerous DMSO complexes have been reported and care should be taken in the choice of reaction solvent.[17]
20
+
21
+ In 2007, Gerhard Ertl won the Nobel Prize in Chemistry for determining the detailed molecular mechanisms of the catalytic oxidation of carbon monoxide over platinum (catalytic converter).[18]
22
+
23
+ Platinum has six naturally occurring isotopes: 190Pt, 192Pt, 194Pt, 195Pt, 196Pt, and 198Pt. The most abundant of these is 195Pt, comprising 33.83% of all platinum. It is the only stable isotope with a non-zero spin; with a spin of 1/2, 195Pt satellite peaks are often observed in 1H and 31P NMR spectroscopy (i.e., Pt-phosphine and Pt-alkyl complexes). 190Pt is the least abundant at only 0.01%. Of the naturally occurring isotopes, only 190Pt is unstable, though it decays with a half-life of 6.5×10¹¹ years, causing an activity of 15 Bq/kg of natural platinum. 198Pt can undergo alpha decay, but its decay has never been observed (the half-life is known to be longer than 3.2×10¹⁴ years); therefore, it is considered stable. Platinum also has 34 synthetic isotopes ranging in atomic mass from 165 to 204, making the total number of known isotopes 40. The least stable of these are 165Pt and 166Pt, with half-lives of 260 µs, whereas the most stable is 193Pt with a half-life of 50 years. Most platinum isotopes decay by some combination of beta decay and alpha decay. 188Pt, 191Pt, and 193Pt decay primarily by electron capture. 190Pt and 198Pt are predicted to have energetically favorable double beta decay paths.[19]
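+
+ The quoted activity can be sanity-checked from the figures in this paragraph. The Python sketch below uses the stated 0.01% abundance and 6.5×10¹¹-year half-life, plus platinum's standard atomic weight of about 195.08 g/mol (an added assumption); it gives roughly 10 Bq/kg, the same order of magnitude as the cited 15 Bq/kg, with the difference attributable to the rounded inputs.
+
+ import math
+
+ AVOGADRO = 6.022e23
+ MOLAR_MASS_PT_G_PER_MOL = 195.08
+ ABUNDANCE_PT190 = 0.0001                  # 0.01 %
+ HALF_LIFE_S = 6.5e11 * 3.156e7            # years converted to seconds
+
+ atoms_pt190_per_kg = (1000 / MOLAR_MASS_PT_G_PER_MOL) * AVOGADRO * ABUNDANCE_PT190
+ decay_constant = math.log(2) / HALF_LIFE_S
+ print(f"{decay_constant * atoms_pt190_per_kg:.0f} Bq/kg")   # ~10 Bq/kg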
24
+
25
+ Platinum is an extremely rare metal,[20] occurring at a concentration of only 0.005 ppm in Earth's crust.[21][22] It is sometimes mistaken for silver. Platinum is often found chemically uncombined as native platinum and as an alloy, mostly with the other platinum-group metals and iron. Most often the native platinum is found in secondary, alluvial deposits. The alluvial deposits used by pre-Columbian people in the Chocó Department, Colombia are still a source for platinum-group metals. Another large alluvial deposit is in the Ural Mountains, Russia, and it is still mined.[14]
26
+
27
+ In nickel and copper deposits, platinum-group metals occur as sulfides (e.g. (Pt,Pd)S), tellurides (e.g. PtBiTe), antimonides (PdSb), and arsenides (e.g. PtAs2), and as end alloys with nickel or copper. Platinum arsenide, sperrylite (PtAs2), is a major source of platinum associated with nickel ores in the Sudbury Basin deposit in Ontario, Canada. At Platinum, Alaska, about 17,000 kg (550,000 ozt) was mined between 1927 and 1975. The mine ceased operations in 1990.[23] The rare sulfide mineral cooperite, (Pt,Pd,Ni)S, contains platinum along with palladium and nickel. Cooperite occurs in the Merensky Reef within the Bushveld complex, Gauteng, South Africa.[24]
28
+
29
+ In 1865, chromites were identified in the Bushveld region of South Africa, followed by the discovery of platinum in 1906.[25] In 1924, the geologist Hans Merensky discovered a large supply of platinum in the Bushveld Igneous Complex in South Africa. The specific layer he found, named the Merensky Reef, contains around 75% of the world's known platinum.[26][27] The large copper–nickel deposits near Norilsk in Russia, and the Sudbury Basin, Canada, are the two other large deposits. In the Sudbury Basin, the huge quantities of nickel ore processed make up for the fact that platinum is present at only 0.5 ppm in the ore. Smaller reserves can be found in the United States,[27] for example in the Absaroka Range in Montana.[28] In 2010, South Africa was the top producer of platinum, with an almost 77% share, followed by Russia at 13%; world production in 2010 was 192,000 kg (423,000 lb).[29]
30
+
31
+ Large platinum deposits are present in the state of Tamil Nadu, India.[30]
32
+
33
+ Platinum exists in higher abundances on the Moon and in meteorites. Correspondingly, platinum is found in slightly higher abundances at sites of bolide impact on Earth that are associated with resulting post-impact volcanism, and can be mined economically; the Sudbury Basin is one such example.[31]
34
+
35
+ Hexachloroplatinic acid mentioned above is probably the most important platinum compound, as it serves as the precursor for many other platinum compounds. By itself, it has various applications in photography, zinc etchings, indelible ink, plating, mirrors, porcelain coloring, and as a catalyst.[32]
36
+
37
+ Treatment of hexachloroplatinic acid with an ammonium salt, such as ammonium chloride, gives ammonium hexachloroplatinate,[16] which is relatively insoluble in ammonium solutions. Heating this ammonium salt in the presence of hydrogen reduces it to elemental platinum. Potassium hexachloroplatinate is similarly insoluble, and hexachloroplatinic acid has been used in the determination of potassium ions by gravimetry.[33]
38
+
39
+ When hexachloroplatinic acid is heated, it decomposes through platinum(IV) chloride and platinum(II) chloride to elemental platinum, although the reactions do not occur stepwise:[34]
40
+
41
+ All three reactions are reversible. Platinum(II) and platinum(IV) bromides are known as well. Platinum hexafluoride is a strong oxidizer capable of oxidizing oxygen.
42
+
43
+ Platinum(IV) oxide, PtO2, also known as 'Adams' catalyst', is a black powder that is soluble in potassium hydroxide (KOH) solutions and concentrated acids.[35] PtO2 and the less common PtO both decompose upon heating.[9] Platinum(II,IV) oxide, Pt3O4, is formed in the following reaction:
44
+
45
+ Unlike palladium acetate, platinum(II) acetate is not commercially available. Where a base is desired, the halides have been used in conjunction with sodium acetate.[17] The use of platinum(II) acetylacetonate has also been reported.[36]
46
+
47
+ Several barium platinides have been synthesized in which platinum exhibits negative oxidation states ranging from −1 to −2. These include BaPt, Ba3Pt2, and Ba2Pt.[37] Caesium platinide, Cs2Pt, a dark-red transparent crystalline compound[38] has been shown to contain Pt2− anions.[39] Platinum also exhibits negative oxidation states at surfaces reduced electrochemically.[40] The negative oxidation states exhibited by platinum are unusual for metallic elements, and they are attributed to the relativistic stabilization of the 6s orbitals.[39]
48
+
49
+ Zeise's salt, containing an ethylene ligand, was one of the first organometallic compounds discovered. Dichloro(cycloocta-1,5-diene)platinum(II) is a commercially available olefin complex, which contains easily displaceable cod ligands ("cod" being an abbreviation of 1,5-cyclooctadiene). The cod complex and the halides are convenient starting points to platinum chemistry.[17]
50
+
51
+ Cisplatin, or cis-diamminedichloroplatinum(II) is the first of a series of square planar platinum(II)-containing chemotherapy drugs.[41] Others include carboplatin and oxaliplatin. These compounds are capable of crosslinking DNA, and kill cells by similar pathways to alkylating chemotherapeutic agents.[42] (Side effects of cisplatin include nausea and vomiting, hair loss, tinnitus, hearing loss, and nephrotoxicity.)[43][44]
52
+
53
+ The hexachloroplatinate ion
54
+
55
+ The anion of Zeise's salt
56
+
57
+ Dichloro(cycloocta-1,5-diene)platinum(II)
58
+
59
+ Cisplatin
60
+
61
+ Archaeologists have discovered traces of platinum in the gold used in ancient Egyptian burials as early as 1200 BC. For example, a small box from burial of Shepenupet II was found to be decorated with gold-platinum hieroglyphics.[45] However, the extent of early Egyptians' knowledge of the metal is unclear. It is quite possible they did not recognize there was platinum in their gold.[46][47]
62
+
63
+ The metal was used by pre-Columbian Americans near modern-day Esmeraldas, Ecuador to produce artifacts of a white gold-platinum alloy. Archeologists usually associate the tradition of platinum-working in South America with the La Tolita Culture (circa 600 BC - AD 200), but precise dates and locations are difficult to establish, as most platinum artifacts from the area were bought secondhand through the antiquities trade rather than obtained by direct archeological excavation.[48] To work the metal, they would combine gold and platinum powders by sintering. The resulting gold-platinum alloy would then be soft enough to shape with tools.[49][50] The platinum used in such objects was not the pure element, but rather a naturally occurring mixture of the platinum group metals, with small amounts of palladium, rhodium, and iridium.[51]
64
+
65
+ The first European reference to platinum appears in 1557 in the writings of the Italian humanist Julius Caesar Scaliger as a description of an unknown noble metal found between Darién and Mexico, "which no fire nor any Spanish artifice has yet been able to liquefy".[52] From their first encounters with platinum, the Spanish generally saw the metal as a kind of impurity in gold, and it was treated as such. It was often simply thrown away, and there was an official decree forbidding the adulteration of gold with platinum impurities.[51]
66
+
67
+ In 1735, Antonio de Ulloa and Jorge Juan y Santacilia saw Native Americans mining platinum while the Spaniards were travelling through Colombia and Peru for eight years. Ulloa and Juan found mines with the whitish metal nuggets and took them home to Spain. Antonio de Ulloa returned to Spain and established the first mineralogy lab in Spain; in 1748 he became the first to systematically study platinum. His historical account of the expedition included a description of platinum as being neither separable nor calcinable. Ulloa also anticipated the discovery of platinum mines. After publishing the report in 1748, Ulloa did not continue to investigate the new metal. In 1758, he was sent to superintend mercury mining operations in Huancavelica.[52]
68
+
69
+ In 1741, Charles Wood,[53] a British metallurgist, found various samples of Colombian platinum in Jamaica, which he sent to William Brownrigg for further investigation.
70
+
71
+ In 1750, after studying the platinum sent to him by Wood, Brownrigg presented a detailed account of the metal to the Royal Society, stating that he had seen no mention of it in any previous accounts of known minerals.[54] Brownrigg also made note of platinum's extremely high melting point and refractoriness toward borax.[clarification needed] Other chemists across Europe soon began studying platinum, including Andreas Sigismund Marggraf,[55] Torbern Bergman, Jöns Jakob Berzelius, William Lewis, and Pierre Macquer. In 1752, Henrik Scheffer published a detailed scientific description of the metal, which he referred to as "white gold", including an account of how he succeeded in fusing platinum ore with the aid of arsenic. Scheffer described platinum as being less pliable than gold, but with similar resistance to corrosion.[52]
72
+
73
+ Carl von Sickingen researched platinum extensively in 1772. He succeeded in making malleable platinum by alloying it with gold, dissolving the alloy in hot aqua regia, precipitating the platinum with ammonium chloride, igniting the ammonium chloroplatinate, and hammering the resulting finely divided platinum to make it cohere. Franz Karl Achard made the first platinum crucible in 1784. He worked with the platinum by fusing it with arsenic, then later volatilizing the arsenic.[52]
74
+
75
+ Because the other platinum-family members had not yet been discovered (platinum was the first in the list), Scheffer and Sickingen made the false assumption that due to its hardness—which is slightly more than for pure iron—platinum would be a relatively non-pliable material, even brittle at times, when in fact its ductility and malleability are close to those of gold. Their assumptions could not be avoided because the platinum they experimented with was highly contaminated with minute amounts of platinum-family elements such as osmium and iridium, amongst others, which embrittled the platinum alloy. Alloying this impure platinum residue called "plyoxen" with gold was the only solution at the time to obtain a pliable compound, but nowadays, very pure platinum is available and extremely long wires can be drawn from pure platinum, very easily, due to its crystalline structure, which is similar to that of many soft metals.[56]
76
+
77
+ In 1786, Charles III of Spain provided a library and laboratory to Pierre-François Chabaneau to aid in his research of platinum. Chabaneau succeeded in removing various impurities from the ore, including gold, mercury, lead, copper, and iron. This led him to believe he was working with a single metal, but in truth the ore still contained the yet-undiscovered platinum-group metals. This led to inconsistent results in his experiments. At times, the platinum seemed malleable, but when it was alloyed with iridium, it would be much more brittle. Sometimes the metal was entirely incombustible, but when alloyed with osmium, it would volatilize. After several months, Chabaneau succeeded in producing 23 kilograms of pure, malleable platinum by hammering and compressing the sponge form while white-hot. Chabaneau realized the infusibility of platinum would lend value to objects made of it, and so started a business with Joaquín Cabezas producing platinum ingots and utensils. This started what is known as the "platinum age" in Spain.[52]
78
+
79
+ Platinum, along with the rest of the platinum-group metals, is obtained commercially as a by-product from nickel and copper mining and processing. During electrorefining of copper, noble metals such as silver, gold and the platinum-group metals as well as selenium and tellurium settle to the bottom of the cell as "anode mud", which forms the starting point for the extraction of the platinum-group metals.[58]
80
+
81
+ If pure platinum is found in placer deposits or other ores, it is isolated from them by various methods of subtracting impurities. Because platinum is significantly denser than many of its impurities, the lighter impurities can be removed by simply floating them away in a liquid. Platinum is paramagnetic, whereas nickel and iron are both ferromagnetic. These two impurities are thus removed by running an electromagnet over the mixture. Because platinum has a higher melting point than most other substances, many impurities can be burned or melted away without melting the platinum. Finally, platinum is resistant to hydrochloric and sulfuric acids, whereas other substances are readily attacked by them. Metal impurities can be removed by stirring the mixture in either of the two acids and recovering the remaining platinum.[59]
82
+
83
+ One suitable method for purification for the raw platinum, which contains platinum, gold, and the other platinum-group metals, is to process it with aqua regia, in which palladium, gold and platinum are dissolved, whereas osmium, iridium, ruthenium and rhodium stay unreacted. The gold is precipitated by the addition of iron(II) chloride and after filtering off the gold, the platinum is precipitated as ammonium chloroplatinate by the addition of ammonium chloride. Ammonium chloroplatinate can be converted to platinum by heating.[60] Unprecipitated hexachloroplatinate(IV) may be reduced with elemental zinc, and a similar method is suitable for small scale recovery of platinum from laboratory residues.[61] Mining and refining platinum has environmental impacts.[62]
84
+
85
+ Of the 218 tonnes of platinum sold in 2014, 98 tonnes were used for vehicle emissions control devices (45%), 74.7 tonnes for jewelry (34%), 20.0 tonnes for chemical production and petroleum refining (9.2%), and 5.85 tonnes for electrical applications such as hard disk drives (2.7%). The remaining 28.9 tonnes went to various other minor applications, such as medicine and biomedicine, glassmaking equipment, investment, electrodes, anticancer drugs, oxygen sensors, spark plugs and turbine engines.[63]
86
+
87
+ The most common use of platinum is as a catalyst in chemical reactions, often as platinum black. It has been employed as a catalyst since the early 19th century, when platinum powder was used to catalyze the ignition of hydrogen. Its most important application is in automobiles as a catalytic converter, which allows the complete combustion of low concentrations of unburned hydrocarbons from the exhaust into carbon dioxide and water vapor. Platinum is also used in the petroleum industry as a catalyst in a number of separate processes, but especially in catalytic reforming of straight-run naphthas into higher-octane gasoline that becomes rich in aromatic compounds. PtO2, also known as Adams' catalyst, is used as a hydrogenation catalyst, specifically for vegetable oils.[32] Platinum also strongly catalyzes the decomposition of hydrogen peroxide into water and oxygen[64] and it is used in fuel cells[65] as a catalyst for the reduction of oxygen.[66]
88
+
89
+ From 1889 to 1960, the meter was defined as the length of a platinum-iridium (90:10) alloy bar, known as the international prototype of the meter. The previous bar was made of platinum in 1799. Until May 2019, the kilogram was defined as the mass of the international prototype of the kilogram, a cylinder of the same platinum-iridium alloy made in 1879.[67]
90
+
91
+ The standard hydrogen electrode also uses a platinized platinum electrode due to its corrosion resistance, and other attributes.[68]
92
+
93
+ Platinum is a precious metal commodity; its bullion has the ISO currency code of XPT. Coins, bars, and ingots are traded or collected. Platinum finds use in jewellery, usually as a 90–95% alloy, due to its inertness. It is used for this purpose for its prestige and inherent bullion value. Jewellery trade publications advise jewellers to present minute surface scratches (which they term patina) as a desirable feature in an attempt to enhance the value of platinum products.[69][70]
94
+
95
+ In watchmaking, Vacheron Constantin, Patek Philippe, Rolex, Breitling, and other companies use platinum for producing their limited edition watch series. Watchmakers appreciate the unique properties of platinum, as it neither tarnishes nor wears out (the latter quality relative to gold).[71]
96
+
97
+ The price of platinum, like other industrial commodities, is more volatile than that of gold. In 2008, the price of platinum dropped from $2,252 to $774 per oz,[72] a loss of nearly 2/3 of its value. By contrast, the price of gold dropped from ~$1,000 to ~$700/oz during the same time frame, a loss of only 1/3 of its value.
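+
+ A quick check of those fractions, using the per-troy-ounce prices quoted above:
+
+ platinum_loss = 1 - 774 / 2252    # about 0.66, i.e. "nearly 2/3"
+ gold_loss = 1 - 700 / 1000        # 0.30, i.e. roughly 1/3
+ print(f"platinum: {platinum_loss:.0%}, gold: {gold_loss:.0%}")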
98
+
99
+ During periods of sustained economic stability and growth, the price of platinum tends to be as much as twice the price of gold, whereas during periods of economic uncertainty,[73] the price of platinum tends to decrease due to reduced industrial demand, falling below the price of gold. Gold prices are more stable in slow economic times, as gold is considered a safe haven. Although gold is also used in industrial applications, especially in electronics due to its use as a conductor, its demand is not so driven by industrial uses. In the 18th century, platinum's rarity made King Louis XV of France declare it the only metal fit for a king.[74]
100
+
101
+ 1,000 cubic centimeters of 99.9% pure platinum, worth about US$696,000 at 29 Jun 2016 prices[75]
102
+
103
+ Average price of platinum from 1992 to 2012 in US$ per troy ounce[76]
104
+
105
+ In the laboratory, platinum wire is used for electrodes; platinum pans and supports are used in thermogravimetric analysis because of the stringent requirements of chemical inertness upon heating to high temperatures (~1000 °C). Platinum is used as an alloying agent for various metal products, including fine wires, noncorrosive laboratory containers, medical instruments, dental prostheses, electrical contacts, and thermocouples. Platinum-cobalt, an alloy of roughly three parts platinum and one part cobalt, is used to make relatively strong permanent magnets.[32] Platinum-based anodes are used in ships, pipelines, and steel piers.[14]
106
+
107
+ Platinum's rarity as a metal has caused advertisers to associate it with exclusivity and wealth. "Platinum" debit and credit cards have greater privileges than "gold" cards.[77] "Platinum awards" are the second highest possible, ranking above "gold", "silver" and "bronze", but below diamond. For example, in the United States, a musical album that has sold more than 1 million copies will be credited as "platinum", whereas an album that has sold more than 10 million copies will be certified as "diamond".[78] Some products, such as blenders and vehicles, with a silvery-white color are identified as "platinum". Platinum is considered a precious metal, although its use is not as common as the use of gold or silver. The frame of the Crown of Queen Elizabeth The Queen Mother, manufactured for her coronation as Consort of King George VI, is made of platinum. It was the first British crown to be made of this particular metal.[79]
108
+
109
+ According to the Centers for Disease Control and Prevention, short-term exposure to platinum salts may cause irritation of the eyes, nose, and throat, and long-term exposure may cause both respiratory and skin allergies. The current OSHA standard is 2 micrograms per cubic meter of air averaged over an 8-hour work shift.[80] The National Institute for Occupational Safety and Health has set a recommended exposure limit (REL) for platinum as 1 mg/m3 over an 8-hour workday.[81]
110
+
111
+ Platinum-based antineoplastic agents are used in chemotherapy, and show good activity against some tumors.[82]
112
+
113
+ As platinum is a catalyst in the manufacture of the silicone rubber and gel components of several types of medical implants (breast implants, joint replacement prosthetics, artificial lumbar discs, vascular access ports, etc.), the possibility that platinum could enter the body and cause adverse effects has merited study. The Food and Drug Administration and other institutions have reviewed the issue and found no evidence to suggest toxicity in vivo.[83][84] Platinum has been identified by the FDA as a "fake cancer 'cure'".[85]
en/4667.html.txt ADDED
@@ -0,0 +1,259 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Plato (/ˈpleɪtoʊ/ PLAY-toe;[2] Greek: Πλάτων Plátōn, pronounced [plá.tɔːn] in Classical Attic; 428/427 or 424/423 – 348/347 BC) was an Athenian philosopher during the Classical period in Ancient Greece, founder of the Platonist school of thought, and the Academy, the first institution of higher learning in the Western world.
4
+
5
+ He is widely considered the pivotal figure in the history of Ancient Greek and Western philosophy, along with his teacher, Socrates, and his most famous student, Aristotle.[a] Plato has also often been cited as one of the founders of Western religion and spirituality.[4] The so-called Neoplatonism of philosophers like Plotinus and Porphyry influenced Saint Augustine and thus Christianity. Alfred North Whitehead once noted: "the safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato."[5]
6
+
7
+ Plato was the innovator of the written dialogue and dialectic forms in philosophy. Plato is also considered the founder of Western political philosophy. His most famous contribution is the theory of Forms known by pure reason, in which Plato presents a solution to the problem of universals known as Platonism (also ambiguously called either Platonic realism or Platonic idealism). He is also the namesake of Platonic love and the Platonic solids.
8
+
9
+ His own most decisive philosophical influences are usually thought to have been, along with Socrates, the pre-Socratics Pythagoras, Heraclitus and Parmenides, although few of his predecessors' works remain extant and much of what we know about these figures today derives from Plato himself.[b] Unlike the work of nearly all of his contemporaries, Plato's entire body of work is believed to have survived intact for over 2,400 years.[7] Although their popularity has fluctuated over the years, the works of Plato have never been without readers since the time they were written.[8]
10
+
11
+ Due to a lack of surviving accounts, little is known about Plato's early life and education. Plato belonged to an aristocratic and influential family. According to a disputed tradition, reported by doxographer Diogenes Laërtius, Plato's father Ariston traced his descent from the king of Athens, Codrus, and the king of Messenia, Melanthus.[9]
12
+
13
+ Plato's mother was Perictione, whose family boasted of a relationship with the famous Athenian lawmaker and lyric poet Solon, one of the seven sages, who repealed the laws of Draco (except for the death penalty for homicide).[10] Perictione was sister of Charmides and niece of Critias, both prominent figures of the Thirty Tyrants, known as the Thirty, the brief oligarchic regime (404–403 BC), which followed on the collapse of Athens at the end of the Peloponnesian War (431–404 BC).[11] According to some accounts, Ariston tried to force his attentions on Perictione, but failed in his purpose; then the god Apollo appeared to him in a vision, and as a result, Ariston left Perictione unmolested.[12]
14
+
15
+ The exact time and place of Plato's birth are unknown. Based on ancient sources, most modern scholars believe that he was born in Athens or Aegina[c] between 429 and 423 BC, not long after the start of the Peloponnesian War.[d] The traditional date of Plato's birth during the 87th or 88th Olympiad, 428 or 427 BC, is based on a dubious interpretation of Diogenes Laërtius, who says, "When [Socrates] was gone, [Plato] joined Cratylus the Heracleitean and Hermogenes, who philosophized in the manner of Parmenides. Then, at twenty-eight, Hermodorus says, [Plato] went to Euclides in Megara." However, as Debra Nails argues, the text does not state that Plato left for Megara immediately after joining Cratylus and Hermogenes.[22] In his Seventh Letter, Plato notes that his coming of age coincided with the taking of power by the Thirty, remarking, "But a youth under the age of twenty made himself a laughingstock if he attempted to enter the political arena." Thus, Nails dates Plato's birth to 424/423.[23]
16
+
17
+ According to Neanthes, Plato was six years younger than Isocrates, and therefore was born the same year the prominent Athenian statesman Pericles died (429 BC).[24] Jonathan Barnes regards 428 BC as the year of Plato's birth.[20][21] The grammarian Apollodorus of Athens in his Chronicles argues that Plato was born in the 88th Olympiad.[17] Both the Suda and Sir Thomas Browne also claimed he was born during the 88th Olympiad.[16][25] Another legend related that, when Plato was an infant, bees settled on his lips while he was sleeping: an augury of the sweetness of style in which he would discourse about philosophy.[26]
18
+
19
+ Besides Plato himself, Ariston and Perictione had three other children; two sons, Adeimantus and Glaucon, and a daughter Potone, the mother of Speusippus (the nephew and successor of Plato as head of the Academy).[11] The brothers Adeimantus and Glaucon are mentioned in the Republic as sons of Ariston,[27] and presumably brothers of Plato, though some have argued they were uncles.[e] In a scenario in the Memorabilia, Xenophon confused the issue by presenting a Glaucon much younger than Plato.[29]
20
+
21
+ Ariston appears to have died in Plato's childhood, although the precise dating of his death is difficult.[30] Perictione then married Pyrilampes, her mother's brother,[31] who had served many times as an ambassador to the Persian court and was a friend of Pericles, the leader of the democratic faction in Athens.[32] Pyrilampes had a son from a previous marriage, Demus, who was famous for his beauty.[33] Perictione gave birth to Pyrilampes' second son, Antiphon, the half-brother of Plato, who appears in Parmenides.[34]
22
+
23
+ In contrast to his reticence about himself, Plato often introduced his distinguished relatives into his dialogues, or referred to them with some precision. In addition to Adeimantus and Glaucon in the Republic, Charmides has a dialogue named after him; and Critias speaks in both Charmides and Protagoras.[35] These and other references suggest a considerable amount of family pride and enable us to reconstruct Plato's family tree. According to Burnet, "the opening scene of the Charmides is a glorification of the whole [family] connection ... Plato's dialogues are not only a memorial to Socrates, but also the happier days of his own family."[36]
24
+
25
+ The fact that the philosopher in his maturity called himself Platon is indisputable, but the origin of this name remains mysterious. Platon is a nickname from the adjective platýs (πλατύς) 'broad'. Although Platon was a fairly common name (31 instances are known from Athens alone),[37] the name does not occur in Plato's known family line.[38] The sources of Diogenes Laërtius account for this by claiming that his wrestling coach, Ariston of Argos, dubbed him "broad" on account of his chest and shoulders, or that Plato derived his name from the breadth of his eloquence, or his wide forehead.[39][40] While recalling a moral lesson about frugal living Seneca mentions the meaning of Plato's name: "His very name was given him because of his broad chest."[41]
26
+
27
+ His true name was supposedly Aristocles (Ἀριστοκλῆς), meaning 'best reputation'.[f] According to Diogenes Laërtius, he was named after his grandfather, as was common in Athenian society.[42] But there is only one inscription of an Aristocles, an early archon of Athens in 605/4 BC. There is no record of a line from Aristocles to Plato's father, Ariston. Recently a scholar has argued that even the name Aristocles for Plato was a much later invention.[43] However, another scholar claims that "there is good reason for not dismissing [the idea that Aristocles was Plato's given name] as a mere invention of his biographers", noting how prevalent that account is in our sources.[38]
28
+
29
+ Ancient sources describe him as a bright though modest boy who excelled in his studies. Apuleius informs us that Speusippus praised Plato's quickness of mind and modesty as a boy, and the "first fruits of his youth infused with hard work and love of study".[44] His father contributed all which was necessary to give to his son a good education, and, therefore, Plato must have been instructed in grammar, music, and gymnastics by the most distinguished teachers of his time.[45] Plato invokes Damon many times in the Republic. Plato was a wrestler, and Dicaearchus went so far as to say that Plato wrestled at the Isthmian games.[46] Plato had also attended courses of philosophy; before meeting Socrates, he first became acquainted with Cratylus and the Heraclitean doctrines.[47]
30
+
31
+ Ambrose believed that Plato met Jeremiah in Egypt and was influenced by his ideas. Augustine initially accepted this claim, but later rejected it, arguing in The City of God that "Plato was born a hundred years after Jeremiah prophesied."[48][need quotation to verify]
32
+
33
+ Plato may have travelled in Italy, Sicily, Egypt and Cyrene.[49] Said to have returned to Athens at the age of forty, Plato founded one of the earliest known organized schools in Western Civilization on a plot of land in the Grove of Hecademus or Academus.[50] The Academy was a large enclosure of ground about six stadia outside of Athens proper. One story is that the name of the Academy comes from the ancient hero, Academus; still another story is that the name came from a supposed former owner of the plot of land, an Athenian citizen whose name was (also) Academus; while yet another account is that it was named after a member of the army of Castor and Pollux, an Arcadian named Echedemus.[51] The Academy operated until it was destroyed by Lucius Cornelius Sulla in 84 BC. Many intellectuals were schooled in the Academy, the most prominent one being Aristotle.[52][53]
34
+
35
+ Throughout his later life, Plato became entangled with the politics of the city of Syracuse. According to Diogenes Laërtius, Plato initially visited Syracuse while it was under the rule of Dionysius.[54] During this first trip Dionysius's brother-in-law, Dion of Syracuse, became one of Plato's disciples, but the tyrant himself turned against Plato. Plato almost faced death, but he was sold into slavery.[g] Anniceris, a Cyrenaic philosopher, subsequently bought Plato's freedom for twenty minas,[56] and sent him home. After Dionysius's death, according to Plato's Seventh Letter, Dion requested Plato return to Syracuse to tutor Dionysius II and guide him to become a philosopher king. Dionysius II seemed to accept Plato's teachings, but he became suspicious of Dion, his uncle. Dionysius expelled Dion and kept Plato against his will. Eventually Plato left Syracuse. Dion would return to overthrow Dionysius and ruled Syracuse for a short time before being usurped by Calippus, a fellow disciple of Plato.
36
+
37
+ According to Seneca, Plato died at the age of 81 on the same day he was born.[57] The Suda indicates that he lived to 82 years,[16] while Neanthes claims an age of 84.[17] A variety of sources have given accounts of his death. One story, based on a mutilated manuscript,[58] suggests Plato died in his bed, whilst a young Thracian girl played the flute to him.[59] Another tradition suggests Plato died at a wedding feast. The account is based on Diogenes Laërtius's reference to an account by Hermippus, a third-century Alexandrian.[60] According to Tertullian, Plato simply died in his sleep.[60]
38
+
39
+ Plato owned an estate at Iphistiadae, which by will he left to a certain youth named Adeimantus, presumably a younger relative, as Plato had an elder brother or uncle by this name.
40
+
41
+ Although Socrates influenced Plato directly as related in the dialogues, the influence of Pythagoras upon Plato, or in a broader sense, the Pythagoreans, such as Archytas also appears to have been significant. Aristotle claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans,[61] and Cicero repeats this claim: "They say Plato learned all things Pythagorean."[62] It is probable that both were influenced by Orphism, and both believed in metempsychosis, transmigration of the soul.
42
+
43
+ Pythagoras held that all things are number, and the cosmos comes from numerical principles. He introduced the concept of form as distinct from matter, and that the physical world is an imitation of an eternal mathematical world. These ideas were very influential on Heraclitus, Parmenides and Plato.[63]
44
+
45
+ George Karamanolis notes that
46
+
47
+ Numenius accepted both Pythagoras and Plato as the two authorities one should follow in philosophy, but he regarded Plato's authority as subordinate to that of Pythagoras, whom he considered to be the source of all true philosophy—including Plato's own. For Numenius it is just that Plato wrote so many philosophical works, whereas Pythagoras' views were originally passed on only orally.[64]
48
+
49
+ According to R. M. Hare, this influence consists of three points:
50
+
51
+ Plato may have studied under the mathematician Theodorus of Cyrene, and has a dialogue named for the mathematician Theaetetus, who is its central character. While not a mathematician, Plato was considered an accomplished teacher of mathematics. Eudoxus of Cnidus, the greatest mathematician in Classical Greece, who contributed much of what is found in Euclid's Elements, was taught by Archytas and Plato. Plato helped to distinguish between pure and applied mathematics by widening the gap between "arithmetic", now called number theory, and "logistic", now called arithmetic.[h]
52
+
53
+ In the dialogue Timaeus Plato associated each of the four classical elements (earth, air, water, and fire) with a regular solid (cube, octahedron, icosahedron, and tetrahedron respectively) due to their shape, the so-called Platonic solids. The fifth regular solid, the dodecahedron, was supposed to be the element which made up the heavens.
54
+
55
+ The two philosophers Heraclitus and Parmenides, following the way initiated by pre-Socratic Greek philosophers like Pythagoras, depart from mythology and begin the metaphysical tradition that strongly influenced Plato and continues today.[63]
56
+
57
+ The surviving fragments written by Heraclitus suggest the view that all things are continuously changing, or becoming. His image of the river, with ever-changing waters, is well known. According to some ancient traditions like that of Diogenes Laërtius, Plato received these ideas through Heraclitus' disciple Cratylus, who held the more radical view that continuous change warrants scepticism because we cannot define a thing that does not have a permanent nature.[68]
58
+
59
+ Parmenides adopted an altogether contrary vision, arguing for the idea of changeless Being and the view that change is an illusion.[63] John Palmer notes "Parmenides' distinction among the principal modes of being and his derivation of the attributes that must belong to what must be, simply as such, qualify him to be seen as the founder of metaphysics or ontology as a domain of inquiry distinct from theology."[69]
60
+
61
+ These ideas about change and permanence, or becoming and Being, influenced Plato in formulating his theory of Forms.[68]
62
+
63
+ Plato's most self-critical dialogue is called Parmenides, featuring Parmenides and his student Zeno, who, following Parmenides' denial of change, argued forcefully through his paradoxes to deny the existence of motion.
64
+
65
+ Plato's Sophist dialogue includes an Eleatic stranger, a follower of Parmenides, as a foil for his arguments against Parmenides. In the dialogue Plato distinguishes nouns and verbs, providing some of the earliest treatment of subject and predicate. He also argues that motion and rest both "are", against followers of Parmenides who say rest is but motion is not.
66
+
67
+ Plato was one of the devoted young followers of Socrates. The precise relationship between Plato and Socrates remains an area of contention among scholars.
68
+
69
+ Plato never speaks in his own voice in his dialogues, and speaks as Socrates in all but the Laws. In the Second Letter, it says, "no writing of Plato exists or ever will exist, but those now said to be his are those of a Socrates become beautiful and new";[70] if the Letter is Plato's, the final qualification seems to call into question the dialogues' historical fidelity. In any case, Xenophon's Memorabilia and Aristophanes's The Clouds seem to present a somewhat different portrait of Socrates from the one Plato paints. Some have called attention to the problem of taking Plato's Socrates to be his mouthpiece, given Socrates' reputation for irony and the dramatic nature of the dialogue form.[71]
70
+
71
+ Aristotle attributes a different doctrine with respect to Forms to Plato and Socrates.[72] Aristotle suggests that Socrates' idea of forms can be discovered through investigation of the natural world, unlike Plato's Forms that exist beyond and outside the ordinary range of human understanding. In the dialogues of Plato, though, Socrates sometimes seems to support a mystical side, discussing reincarnation and the mystery religions; this is generally attributed to Plato.[73] Regardless, this view of Socrates cannot be dismissed out of hand, as we cannot be sure of the differences between the views of Plato and Socrates. In the Meno Plato refers to the Eleusinian Mysteries, telling Meno he would understand Socrates's answers better if he could stay for the initiations the following week. It is possible that Plato and Socrates took part in the Eleusinian Mysteries.[74]
72
+
73
+ In Plato's dialogues, Socrates and his company of disputants had something to say on many subjects, including several aspects of metaphysics. These include religion and science, human nature, love, and sexuality. More than one dialogue contrasts perception and reality, nature and custom, and body and soul.
74
+
75
+ "Platonism" and its theory of Forms (or theory of Ideas) denies the reality of the material world, considering it only an image or copy of the real world. The theory of Forms is first introduced in the Phaedo dialogue (also known as On the Soul), wherein Socrates refutes the pluralism of the likes of Anaxagoras, then the most popular response to Heraclitus and Parmenides, while giving the "Opposites Argument" in support of the Forms.
76
+
77
+ According to this theory of Forms there are at least two worlds: the apparent world of concrete objects, grasped by the senses, which constantly changes, and an unchanging and unseen world of Forms or abstract objects, grasped by pure reason (λογική), which ground what is apparent.
78
+
79
+ It can also be said there are three worlds, with the apparent world consisting of both the world of material objects and of mental images, with the "third realm" consisting of the Forms. Thus, though there is the term "Platonic idealism", this refers to Platonic Ideas or the Forms, and not to some platonic kind of idealism, an 18th-century view which sees matter as unreal in favour of mind. For Plato, though grasped by the mind, only the Forms are truly real.
80
+
81
+ Plato's Forms thus represent types of things, as well as properties, patterns, and relations, to which we refer as objects. Just as individual tables, chairs, and cars refer to objects in this world, 'tableness', 'chairness', and 'carness', as well as e. g. justice, truth, and beauty refer to objects in another world. One of Plato's most cited examples for the Forms was the truths of geometry, such as the Pythagorean theorem.
82
+
83
+ In other words, the Forms are universals given as a solution to the problem of universals, or the problem of "the One and the Many", e. g. how one predicate "red" can apply to many red objects. For Plato this is because there is one abstract object or Form of red, redness itself, in which the several red things "participate". As Plato's solution is that universals are Forms and that Forms are real if anything is, Plato's philosophy is unambiguously called Platonic realism. According to Aristotle, Plato's best known argument in support of the Forms was the "one over many" argument.
84
+
85
+ Aside from being immutable, timeless, changeless, and one over many, the Forms also provide definitions and the standard against which all instances are measured. In the dialogues Socrates regularly asks for the meaning – in the sense of intensional definitions – of a general term (e. g. justice, truth, beauty), and criticizes those who instead give him particular, extensional examples, rather than the quality shared by all examples.
86
+
87
+ There is thus a world of perfect, eternal, and changeless meanings of predicates, the Forms, existing in the realm of Being outside of space and time; and the imperfect sensible world of becoming, subjects somehow in a state between being and nothing, that partakes of the qualities of the Forms, and is its instantiation.
88
+
89
+ Plato advocates a belief in the immortality of the soul, and several dialogues end with long speeches imagining the afterlife. In the Timaeus, Socrates locates the parts of the soul within the human body: Reason is located in the head, spirit in the top third of the torso, and the appetite in the middle third of the torso, down to the navel.[75][76]
90
+
91
+ Several aspects of epistemology are also discussed by Socrates, such as wisdom. More than one dialogue contrasts knowledge and opinion. Plato's epistemology involves Socrates arguing that knowledge is not empirical, and that it comes from divine insight. The Forms are also responsible for both knowledge and certainty, and are grasped by pure reason.
92
+
93
+ In several dialogues, Socrates inverts the common man's intuition about what is knowable and what is real. Reality is unavailable to those who use their senses. Socrates says that he who sees with his eyes is blind. While most people take the objects of their senses to be real if anything is, Socrates is contemptuous of people who think that something has to be graspable in the hands to be real. In the Theaetetus, he says such people are eu amousoi (εὖ ἄμουσοι), an expression that means literally, "happily without the muses".[77] In other words, such people are willingly ignorant, living without divine inspiration and access to higher insights about reality.
94
+
95
+ In Plato's dialogues, Socrates always insists on his ignorance and humility, that he knows nothing, the so-called Socratic irony. Several dialogues refute a series of viewpoints but offer no positive position of their own, ending in aporia.
96
+
97
+ In several of Plato's dialogues, Socrates promulgates the idea that knowledge is a matter of recollection of the state before one is born, and not of observation or study.[78] Keeping with the theme of admitting his own ignorance, Socrates regularly complains of his forgetfulness. In the Meno, Socrates uses a geometrical example to expound Plato's view that knowledge in this latter sense is acquired by recollection. Socrates elicits a fact concerning a geometrical construction from a slave boy, who could not have otherwise known the fact (due to the slave boy's lack of education). The knowledge must be present, Socrates concludes, in an eternal, non-experiential form.
98
+
99
+ In other dialogues, the Sophist, Statesman, Republic, and the Parmenides, Plato himself associates knowledge with the apprehension of unchanging Forms and their relationships to one another (which he calls "expertise" in Dialectic), including through the processes of collection and division.[79] More explicitly, Plato himself argues in the Timaeus that knowledge is always proportionate to the realm from which it is gained. In other words, if one derives one's account of something experientially, because the world of sense is in flux, the views therein attained will be mere opinions. And opinions are characterized by a lack of necessity and stability. On the other hand, if one derives one's account of something by way of the non-sensible forms, because these forms are unchanging, so too is the account derived from them. That apprehension of forms is required for knowledge may be taken to cohere with Plato's theory in the Theaetetus and Meno.[80] Indeed, the apprehension of Forms may be at the base of the "account" required for justification, in that it offers foundational knowledge which itself needs no account, thereby avoiding an infinite regression.[81]
100
+
101
+ Many have interpreted Plato as stating—even having been the first to write—that knowledge is justified true belief, an influential view that informed future developments in epistemology.[82] This interpretation is partly based on a reading of the Theaetetus wherein Plato argues that knowledge is distinguished from mere true belief by the knower having an "account" of the object of her or his true belief.[83] And this theory may again be seen in the Meno, where it is suggested that true belief can be raised to the level of knowledge if it is bound with an account as to the question of "why" the object of the true belief is so.[84][85]
102
+
103
+ Many years later, Edmund Gettier famously demonstrated the problems of the justified true belief account of knowledge. That the modern theory of justified true belief as knowledge which Gettier addresses is equivalent to Plato's is accepted by some scholars but rejected by others.[86] Plato himself also identified problems with the justified true belief definition in the Theaetetus, concluding that justification (or an "account") would require knowledge of difference, meaning that the definition of knowledge is circular.[87][88]
104
+
105
+ Several dialogues discuss ethics including virtue and vice, pleasure and pain, crime and punishment, and justice and medicine. Plato views "The Good" as the supreme Form, somehow existing even "beyond being".
106
+
107
+ Socrates propounded a moral intellectualism which claimed that nobody does wrong on purpose, and that to know what is good results in doing what is good; in short, that knowledge is virtue. In the Protagoras dialogue it is argued that virtue is innate and cannot be learned.
108
+
109
+ Socrates presents the famous Euthyphro dilemma in the dialogue of the same name.
110
+
111
+ The dialogues also discuss politics. Some of Plato's most famous doctrines are contained in the Republic as well as in the Laws and the Statesman. Because these doctrines are not spoken directly by Plato and vary between dialogues, they cannot be straightforwardly assumed as representing Plato's own views.
112
+
113
+ Socrates asserts that societies have a tripartite class structure corresponding to the appetite/spirit/reason structure of the individual soul. The appetite/spirit/reason are analogous to the castes of society.[89]
114
+
115
+ According to this model, the principles of Athenian democracy (as it existed in his day) are rejected as only a few are fit to rule. Instead of rhetoric and persuasion, Socrates says reason and wisdom should govern. As Socrates puts it:
116
+
117
+ Socrates describes these "philosopher kings" as "those who love the sight of truth"[91] and supports the idea with the analogy of a captain and his ship or a doctor and his medicine. According to him, sailing and health are not things that everyone is qualified to practice by nature. A large part of the Republic then addresses how the educational system should be set up to produce these philosopher kings.
118
+
119
+ In addition, the ideal city is used as an image to illuminate the state of one's soul, or the will, reason, and desires combined in the human body. Socrates is attempting to make an image of a rightly ordered human, and then later goes on to describe the different kinds of humans that can be observed, from tyrants to lovers of money in various kinds of cities. The ideal city is not promoted, but only used to magnify the different kinds of individual humans and the state of their soul. However, the philosopher king image was used by many after Plato to justify their personal political beliefs. The philosophic soul according to Socrates has reason, will, and desires united in virtuous harmony. A philosopher has the moderate love for wisdom and the courage to act according to wisdom. Wisdom is knowledge about the Good or the right relations between all that exists.
120
+
121
+ Where it concerns states and rulers, Socrates asks which is better: a bad democracy or a country ruled by a tyrant. He argues that it is better to be ruled by a bad tyrant than by a bad democracy, since in the latter case all the people are responsible for such actions, rather than one individual committing many bad deeds. This is emphasised within the Republic as Socrates describes the event of mutiny on board a ship.[92] Socrates likens the ship's crew to the democratic rule of the many, and the captain, although inhibited through ailments, to the tyrant. Socrates' description of this event is parallel to that of democracy within the state and the inherent problems that arise.
122
+
123
+ According to Socrates, a state made up of different kinds of souls will, overall, decline from an aristocracy (rule by the best) to a timocracy (rule by the honourable), then to an oligarchy (rule by the few), then to a democracy (rule by the people), and finally to tyranny (rule by one person, rule by a tyrant).[93] Aristocracy in the sense of government (politeia) is advocated in Plato's Republic. This regime is ruled by a philosopher king, and thus is grounded on wisdom and reason.
124
+
125
+ The aristocratic state, and the man whose nature corresponds to it, are the objects of Plato's analyses throughout much of the Republic, as opposed to the other four types of states/men, who are discussed later in his work. In Book VIII, Socrates states in order the other four imperfect societies with a description of the state's structure and individual character. In timocracy the ruling class is made up primarily of those with a warrior-like character.[94] Oligarchy is made up of a society in which wealth is the criterion of merit and the wealthy are in control.[95] In democracy, the state bears resemblance to ancient Athens with traits such as equality of political opportunity and freedom for the individual to do as he likes.[96] Democracy then degenerates into tyranny from the conflict of rich and poor. It is characterized by an undisciplined society existing in chaos, where the tyrant rises as popular champion leading to the formation of his private army and the growth of oppression.[97][93][98]
126
+
127
+ Several dialogues tackle questions about art, including rhetoric and rhapsody. Socrates says that poetry is inspired by the muses, and is not rational. He speaks approvingly of this, and other forms of divine madness (drunkenness, eroticism, and dreaming) in the Phaedrus,[99] and yet in the Republic wants to outlaw Homer's great poetry, and laughter as well. In Ion, Socrates gives no hint of the disapproval of Homer that he expresses in the Republic. The dialogue Ion suggests that Homer's Iliad functioned in the ancient Greek world as the Bible does today in the modern Christian world: as divinely inspired literature that can provide moral guidance, if only it can be properly interpreted.
128
+
129
+ For a long time, Plato's unwritten doctrines[100][101][102] have been controversial. Many modern books on Plato seem to diminish their importance; nevertheless, the first important witness who mentions their existence is Aristotle, who in his Physics writes: "It is true, indeed, that the account he gives there [i.e. in Timaeus] of the participant is different from what he says in his so-called unwritten teachings (ἄγραφα δόγματα)."[103] The term "ἄγραφα δόγματα" literally means unwritten doctrines and it stands for the most fundamental metaphysical teaching of Plato, which he disclosed only orally, and some say only to his most trusted fellows, and which he may have kept secret from the public. The importance of the unwritten doctrines does not seem to have been seriously questioned before the 19th century.
130
+
131
+ A reason for not revealing it to everyone is partially discussed in Phaedrus where Plato criticizes the written transmission of knowledge as faulty, favouring instead the spoken logos: "he who has knowledge of the just and the good and beautiful ... will not, when in earnest, write them in ink, sowing them through a pen with words, which cannot defend themselves by argument and cannot teach the truth effectually."[104] The same argument is repeated in Plato's Seventh Letter: "every serious man in dealing with really serious subjects carefully avoids writing."[105] In the same letter he writes: "I can certainly declare concerning all these writers who claim to know the subjects that I seriously study ... there does not exist, nor will there ever exist, any treatise of mine dealing therewith."[106] Such secrecy is necessary in order not "to expose them to unseemly and degrading treatment".[107]
132
+
133
+ It is, however, said that Plato once disclosed this knowledge to the public in his lecture On the Good (Περὶ τἀγαθοῦ), in which the Good (τὸ ἀγαθόν) is identified with the One (the Unity, τὸ ἕν), the fundamental ontological principle. The content of this lecture has been transmitted by several witnesses. Aristoxenus describes the event in the following words: "Each came expecting to learn something about the things that are generally considered good for men, such as wealth, good health, physical strength, and altogether a kind of wonderful happiness. But when the mathematical demonstrations came, including numbers, geometrical figures and astronomy, and finally the statement Good is One seemed to them, I imagine, utterly unexpected and strange; hence some belittled the matter, while others rejected it."[108] Simplicius quotes Alexander of Aphrodisias, who states that "according to Plato, the first principles of everything, including the Forms themselves are One and Indefinite Duality (ἡ ἀόριστος δυάς), which he called Large and Small (τὸ μέγα καὶ τὸ μικρόν)", and Simplicius reports as well that "one might also learn this from Speusippus and Xenocrates and the others who were present at Plato's lecture on the Good".[43]
134
+
135
+ Their account is in full agreement with Aristotle's description of Plato's metaphysical doctrine. In Metaphysics he writes: "Now since the Forms are the causes of everything else, he [i.e. Plato] supposed that their elements are the elements of all things. Accordingly the material principle is the Great and Small [i.e. the Dyad], and the essence is the One (τὸ ἕν), since the numbers are derived from the Great and Small by participation in the One".[109] "From this account it is clear that he only employed two causes: that of the essence, and the material cause; for the Forms are the cause of the essence in everything else, and the One is the cause of it in the Forms. He also tells us what the material substrate is of which the Forms are predicated in the case of sensible things, and the One in that of the Forms—that it is this the duality (the Dyad, ἡ δυάς), the Great and Small (τὸ μέγα καὶ τὸ μικρόν). Further, he assigned to these two elements respectively the causation of good and of evil".[109]
136
+
137
+ The most important aspect of this interpretation of Plato's metaphysics is the continuity between his teaching and the Neoplatonic interpretation of Plotinus[i] or Ficino,[j] which has been considered erroneous by many but may in fact have been directly influenced by oral transmission of Plato's doctrine. A modern scholar who recognized the importance of the unwritten doctrine of Plato was Heinrich Gomperz, who described it in his speech during the 7th International Congress of Philosophy in 1930.[110] All the sources related to the ἄγραφα δόγματα have been collected by Konrad Gaiser and published as Testimonia Platonica.[111] These sources have subsequently been interpreted by scholars from the German Tübingen School of interpretation such as Hans Joachim Krämer or Thomas A. Szlezák.[k]
138
+
139
+ The trial of Socrates and his death sentence is the central, unifying event of Plato's dialogues. It is relayed in the dialogues Apology, Crito, and Phaedo. Apology is Socrates' defence speech, and Crito and Phaedo take place in prison after the conviction.
140
+
141
+ Apology is among the most frequently read of Plato's works. In the Apology, Socrates tries to dismiss rumours that he is a sophist and defends himself against charges of disbelief in the gods and corruption of the young. Socrates insists that long-standing slander will be the real cause of his demise, and says the legal charges are essentially false. Socrates famously denies being wise, and explains how his life as a philosopher was launched by the Oracle at Delphi. He says that his quest to resolve the riddle of the oracle put him at odds with his fellow man, and that this is the reason he has been mistaken for a menace to the city-state of Athens.
142
+
143
+ In Apology, Socrates is presented as mentioning Plato by name as one of those youths close enough to him to have been corrupted, if he were in fact guilty of corrupting the youth, and questioning why their fathers and brothers did not step forward to testify against him if he was indeed guilty of such a crime.[112] Later, Plato is mentioned along with Crito, Critobolus, and Apollodorus as offering to pay a fine of 30 minas on Socrates' behalf, in lieu of the death penalty proposed by Meletus.[113] In the Phaedo, the title character lists those who were in attendance at the prison on Socrates' last day, explaining Plato's absence by saying, "Plato was ill".[114]
144
+
145
+ If Plato's important dialogues do not refer to Socrates' execution explicitly, they allude to it, or use characters or themes that play a part in it. Five dialogues foreshadow the trial: In the Theaetetus and the Euthyphro Socrates tells people that he is about to face corruption charges.[115][116] In the Meno, one of the men who brings legal charges against Socrates, Anytus, warns him about the trouble he may get into if he does not stop criticizing important people.[117] In the Gorgias, Socrates says that his trial will be like a doctor prosecuted by a cook who asks a jury of children to choose between the doctor's bitter medicine and the cook's tasty treats.[118] In the Republic, Socrates explains why an enlightened man (presumably himself) will stumble in a courtroom situation.[119] Plato's support of aristocracy and distrust of democracy is also taken to be partly rooted in a democracy having killed Socrates. In the Protagoras, Socrates is a guest at the home of Callias, son of Hipponicus, a man whom Socrates disparages in the Apology as having wasted a great amount of money on sophists' fees.
146
+
147
+ Two other important dialogues, the Symposium and the Phaedrus, are linked to the main storyline by characters. In the Apology, Socrates says Aristophanes slandered him in a comic play, and blames him for causing his bad reputation, and ultimately, his death.[120] In the Symposium, the two of them are drinking together with other friends. The character Phaedrus is linked to the main story line by character (Phaedrus is also a participant in the Symposium and the Protagoras) and by theme (the philosopher as divine emissary, etc.) The Protagoras is also strongly linked to the Symposium by characters: all of the formal speakers at the Symposium (with the exception of Aristophanes) are present at the home of Callias in that dialogue. Charmides and his guardian Critias are present for the discussion in the Protagoras. Examples of characters crossing between dialogues can be further multiplied. The Protagoras contains the largest gathering of Socratic associates.
148
+
149
+ In the dialogues Plato is most celebrated and admired for, Socrates is concerned with human and political virtue, has a distinctive personality, and friends and enemies who "travel" with him from dialogue to dialogue. This is not to say that Socrates is consistent: a man who is his friend in one dialogue may be an adversary or subject of his mockery in another. For example, Socrates praises the wisdom of Euthyphro many times in the Cratylus, but makes him look like a fool in the Euthyphro. He disparages sophists generally, and Prodicus specifically in the Apology, whom he also slyly jabs in the Cratylus for charging the hefty fee of fifty drachmas for a course on language and grammar. However, Socrates tells Theaetetus in his namesake dialogue that he admires Prodicus and has directed many pupils to him. Socrates' ideas are also not consistent within or among dialogues.
150
+
151
+ Mythos and logos are terms that evolved over the course of classical Greek history. In the times of Homer and Hesiod (8th century BC) they were essentially synonyms, both meaning 'tale' or 'history'. Later, historians like Herodotus and Thucydides, as well as philosophers like Heraclitus and Parmenides and other Presocratics, introduced a distinction between the two terms: mythos became more a nonverifiable account, and logos a rational account.[121] It may seem that Plato, being a disciple of Socrates and a strong partisan of philosophy based on logos, should have avoided the use of myth-telling. Instead he made abundant use of it. This has prompted analytical and interpretative work aimed at clarifying the reasons and purposes for that use.
152
+
153
+ Plato, in general, distinguished between three types of myth.[l] First there were the false myths, like those based on stories of gods subject to passions and sufferings, because reason teaches that God is perfect. Then came the myths based on true reasoning, and therefore also true. Finally there were those that are not verifiable because they lie beyond human reason, but contain some truth. Regarding their subjects, Plato's myths are of two types: those dealing with the origin of the universe, and those about morals and the origin and fate of the soul.[122]
154
+
155
+ It is generally agreed that the main purpose for Plato in using myths was didactic. He considered that only a few people were capable of, or interested in, following a reasoned philosophical discourse, but men in general are attracted by stories and tales. Consequently, he used myth to convey the conclusions of the philosophical reasoning. Some of Plato's myths were based on traditional ones, others were modifications of them, and finally he also invented altogether new myths.[123] Notable examples include the story of Atlantis, the Myth of Er, and the Allegory of the Cave.
156
+
157
+ The theory of Forms is most famously captured in his Allegory of the Cave, and more explicitly in his analogy of the sun and the divided line. The Allegory of the Cave is a paradoxical analogy wherein Socrates argues that the invisible world is the most intelligible ('noeton') and that the visible world ((h)oraton) is the least knowable, and the most obscure.
158
+
159
+ Socrates says in the Republic that people who take the sun-lit world of the senses to be good and real are living pitifully in a den of evil and ignorance. Socrates admits that few climb out of the den, or cave of ignorance, and those who do, not only have a terrible struggle to attain the heights, but when they go back down for a visit or to help other people up, they find themselves objects of scorn and ridicule.
160
+
161
+ According to Socrates, physical objects and physical events are "shadows" of their ideal or perfect forms, and exist only to the extent that they instantiate the perfect versions of themselves. Just as shadows are temporary, inconsequential epiphenomena produced by physical objects, physical objects are themselves fleeting phenomena caused by more substantial causes, the ideals of which they are mere instances. For example, Socrates thinks that perfect justice exists (although it is not clear where) and his own trial would be a cheap copy of it.
162
+
163
+ The Allegory of the Cave is intimately connected to his political ideology, that only people who have climbed out of the cave and cast their eyes on a vision of goodness are fit to rule. Socrates claims that the enlightened men of society must be forced from their divine contemplation and be compelled to run the city according to their lofty insights. Thus is born the idea of the "philosopher-king", the wise person who accepts the power thrust upon him by the people who are wise enough to choose a good master. This is the main thesis of Socrates in the Republic, that the most wisdom the masses can muster is the wise choice of a ruler.[124]
164
+
165
+ The Ring of Gyges, a ring which could make its wearer invisible, is considered in the Republic for its ethical consequences.
166
+
167
+ He also compares the soul (psyche) to a chariot. In this allegory he introduces a tripartite soul composed of a charioteer and two horses. The charioteer symbolizes the intellectual and logical part of the soul (logistikon), while the two horses represent the moral virtues (thymoeides) and the passionate instincts (epithymetikon), respectively.
168
+
169
+ Socrates employs a dialectic method which proceeds by questioning. The role of dialectic in Plato's thought is contested but there are two main interpretations: a type of reasoning and a method of intuition.[125] Simon Blackburn adopts the first, saying that Plato's dialectic is "the process of eliciting the truth by means of questions aimed at opening out what is already implicitly known, or at exposing the contradictions and muddles of an opponent's position."[125] A similar interpretation has been put forth by Louis Hartz, who suggests that elements of the dialectic are borrowed from Hegel.[126] According to this view, opposing arguments improve upon each other, and prevailing opinion is shaped by the synthesis of many conflicting ideas over time. Each new idea exposes a flaw in the accepted model, and the epistemological substance of the debate continually approaches the truth. Hartz's is a teleological interpretation at the core, in which philosophers will ultimately exhaust the available body of knowledge and thus reach "the end of history." Karl Popper, on the other hand, claims that dialectic is the art of intuition for "visualising the divine originals, the Forms or Ideas, of unveiling the Great Mystery behind the common man's everyday world of appearances."[127]
170
+
171
+ Plato often discusses the father-son relationship and the question of whether a father's interest in his sons has much to do with how well his sons turn out. In ancient Athens, a boy was socially located by his family identity, and Plato often refers to his characters in terms of their paternal and fraternal relationships. Socrates was not a family man, and saw himself as the son of his mother, who was apparently a midwife. A divine fatalist, Socrates mocks men who spent exorbitant fees on tutors and trainers for their sons, and repeatedly ventures the idea that good character is a gift from the gods. Plato's dialogue Crito reminds Socrates that orphans are at the mercy of chance, but Socrates is unconcerned. In the Theaetetus, he is found recruiting as a disciple a young man whose inheritance has been squandered. Socrates twice compares the relationship of the older man and his boy lover to the father-son relationship,[128][129] and in the Phaedo, Socrates' disciples, towards whom he displays more concern than his biological sons, say they will feel "fatherless" when he is gone.
172
+
173
+ Though Plato agreed with Aristotle that women were inferior to men, he thought that because of this women needed an education. Plato also thought that weak men who lived poor lives would be reincarnated as women: "Humans have a twofold nature, the superior kind should be such as would from then on be called 'man'."
174
+
175
+ Plato never presents himself as a participant in any of the dialogues, and with the exception of the Apology, there is no suggestion that he heard any of the dialogues firsthand. Some dialogues have no narrator but have a pure "dramatic" form (examples: Meno, Gorgias, Phaedrus, Crito, Euthyphro), some dialogues are narrated by Socrates, wherein he speaks in first person (examples: Lysis, Charmides, Republic). One dialogue, Protagoras, begins in dramatic form but quickly proceeds to Socrates' narration of a conversation he had previously with the sophist for whom the dialogue is named; this narration continues uninterrupted till the dialogue's end.
176
+
177
+ Two dialogues Phaedo and Symposium also begin in dramatic form but then proceed to virtually uninterrupted narration by followers of Socrates. Phaedo, an account of Socrates' final conversation and hemlock drinking, is narrated by Phaedo to Echecrates in a foreign city not long after the execution took place.[m] The Symposium is narrated by Apollodorus, a Socratic disciple, apparently to Glaucon. Apollodorus assures his listener that he is recounting the story, which took place when he himself was an infant, not from his own memory, but as remembered by Aristodemus, who told him the story years ago.
178
+
179
+ The Theaetetus is a peculiar case: a dialogue in dramatic form embedded within another dialogue in dramatic form. In the beginning of the Theaetetus,[131] Euclides says that he compiled the conversation from notes he took based on what Socrates told him of his conversation with the title character. The rest of the Theaetetus is presented as a "book" written in dramatic form and read by one of Euclides' slaves.[132] Some scholars take this as an indication that Plato had by this date wearied of the narrated form.[133] With the exception of the Theaetetus, Plato gives no explicit indication as to how these orally transmitted conversations came to be written down.
180
+
181
+ Thirty-five dialogues and thirteen letters (the Epistles) have traditionally been ascribed to Plato, though modern scholarship doubts the authenticity of at least some of these. Plato's writings have been published in several fashions; this has led to several conventions regarding the naming and referencing of Plato's texts.
182
+
183
+ The usual system for making unique references to sections of the text by Plato derives from a 16th-century edition of Plato's works by Henricus Stephanus known as Stephanus pagination.
184
+
185
+ One tradition regarding the arrangement of Plato's texts is according to tetralogies. This scheme is ascribed by Diogenes Laërtius to an ancient scholar and court astrologer to Tiberius named Thrasyllus.
186
+
187
+ No one knows the exact order in which Plato's dialogues were written, nor the extent to which some might have been later revised and rewritten. The works are usually grouped into Early (sometimes with a Transitional subgroup), Middle, and Late periods.[134][135] This choice to group chronologically has been criticized by some (Cooper et al.),[136] given that it is recognized that there is no absolute agreement as to the true chronology, since the facts of the temporal order of writing are not confidently ascertained.[137] Chronology was not a consideration in ancient times, in that groupings of this nature are virtually absent (Tarrant) in the extant writings of ancient Platonists.[138]
188
+
189
+ Whereas those classified as "early dialogues" often conclude in aporia, the so-called "middle dialogues" provide more clearly stated positive teachings that are often ascribed to Plato such as the theory of Forms. The remaining dialogues are classified as "late" and are generally agreed to be difficult and challenging pieces of philosophy. This grouping is the only one proven by stylometric analysis.[139] Among those who classify the dialogues into periods of composition, Socrates figures in all of the "early dialogues" and they are considered the most faithful representations of the historical Socrates.[140]
190
+
191
+ The following represents one relatively common division.[141] It should, however, be kept in mind that many of the positions in the ordering are still highly disputed, and also that the very notion that Plato's dialogues can or should be "ordered" is by no means universally accepted. Increasingly in the most recent Plato scholarship, writers are sceptical of the notion that the order of Plato's writings can be established with any precision,[142] though Plato's works are still often characterized as falling at least roughly into three groups.[6]
192
+
193
+ Early: Apology, Charmides, Crito, Euthyphro, Gorgias, (Lesser) Hippias (minor), (Greater) Hippias (major), Ion, Laches, Lysis, Protagoras
194
+
195
+ Middle: Cratylus, Euthydemus, Meno, Parmenides, Phaedo, Phaedrus, Republic, Symposium, Theaetetus
196
+
197
+ Late: Critias, Sophist, Statesman / Politicus, Timaeus, Philebus, Laws.[140]
198
+
199
+ A significant distinction of the early Plato and the later Plato has been offered by scholars such as E.R. Dodds and has been summarized by Harold Bloom in his book titled Agon: "E.R. Dodds is the classical scholar whose writings most illuminated the Hellenic descent (in) The Greeks and the Irrational ... In his chapter on Plato and the Irrational Soul ... Dodds traces Plato's spiritual evolution from the pure rationalist of the Protagoras to the transcendental psychologist, influenced by the Pythagoreans and Orphics, of the later works culminating in the Laws."[143]
200
+
201
+ Lewis Campbell was the first[144] to make exhaustive use of stylometry to prove objectively that the Critias, Timaeus, Laws, Philebus, Sophist, and Statesman were all clustered together as a group, while the Parmenides, Phaedrus, Republic, and Theaetetus belong to a separate group, which must be earlier (given Aristotle's statement in his Politics[145] that the Laws was written after the Republic; cf. Diogenes Laërtius Lives 3.37). What is remarkable about Campbell's conclusions is that, in spite of all the stylometric studies that have been conducted since his time, perhaps the only chronological fact about Plato's works that can now be said to be proven by stylometry is the fact that Critias, Timaeus, Laws, Philebus, Sophist, and Statesman are the latest of Plato's dialogues, the others earlier.[139]
202
+
203
+ Protagoras is often considered one of the last of the "early dialogues". Three dialogues are often considered "transitional" or "pre-middle": Euthydemus, Gorgias, and Meno. Proponents of dividing the dialogues into periods often consider the Parmenides and Theaetetus to come late in the middle period and be transitional to the next, as they seem to treat the theory of Forms critically (Parmenides) or only indirectly (Theaetetus).[146] Ritter's stylometric analysis places Phaedrus as probably after Theaetetus and Parmenides,[147] although it does not relate to the theory of Forms in the same way. The first book of the Republic is often thought to have been written significantly earlier than the rest of the work, although possibly having undergone revisions when the later books were attached to it.[146]
204
+
205
+ While the late dialogues are looked to for Plato's "mature" answers to the questions posed by his earlier works, those answers are difficult to discern. Some scholars[140] indicate that the theory of Forms is absent from the late dialogues, its having been refuted in the Parmenides, but there is no total consensus that the Parmenides actually refutes the theory of Forms.[148]
206
+
207
+ Jowett mentions in his Appendix to Menexenus, that works which bore the character of a writer were attributed to that writer even when the actual author was unknown.[149]
208
+
209
+ For the list below:
210
+
211
+ (*) if there is no consensus among scholars as to whether Plato is the author, and (‡) if most scholars agree that Plato is not the author of the work.[150]
212
+
213
+ First Alcibiades (*), Second Alcibiades (‡), Clitophon (*), Epinomis (‡), Epistles (*), Hipparchus (‡), Menexenus (*), Minos (‡), (Rival) Lovers (‡), Theages (‡)
214
+
215
+ The following works were transmitted under Plato's name, most of them already considered spurious in antiquity, and so were not included by Thrasyllus in his tetralogical arrangement. These works are labelled as Notheuomenoi ("spurious") or Apocrypha.
216
+
217
+ Some 250 known manuscripts of Plato survive.[151] The texts of Plato as received today apparently represent the complete written philosophical work of Plato and are generally good by the standards of textual criticism.[152] No modern edition of Plato in the original Greek represents a single source, but rather it is reconstructed from multiple sources which are compared with each other. These sources are medieval manuscripts written on vellum (mainly from 9th to 13th century AD Byzantium), papyri (mainly from late antiquity in Egypt), and from the independent testimonia of other authors who quote various segments of the works (which come from a variety of sources). The text as presented is usually not much different from what appears in the Byzantine manuscripts, and papyri and testimonia just confirm the manuscript tradition. In some editions however the readings in the papyri or testimonia are favoured in some places by the editing critic of the text. Reviewing editions of papyri for the Republic in 1987, Slings suggests that the use of papyri is hampered due to some poor editing practices.[153]
218
+
219
+ In the first century AD, Thrasyllus of Mendes had compiled and published the works of Plato in the original Greek, both genuine and spurious. While it has not survived to the present day, all the extant medieval Greek manuscripts are based on his edition.[154]
220
+
221
+ The oldest surviving complete manuscript for many of the dialogues is the Clarke Plato (Codex Oxoniensis Clarkianus 39, or Codex Boleianus MS E.D. Clarke 39), which was written in Constantinople in 895 and acquired by Oxford University in 1809.[155] The Clarke is given the siglum B in modern editions. B contains the first six tetralogies and is described internally as being written by "John the Calligrapher" on behalf of Arethas of Caesarea. It appears to have undergone corrections by Arethas himself.[156] For the last two tetralogies and the apocrypha, the oldest surviving complete manuscript is Codex Parisinus graecus 1807, designated A, which was written nearly contemporaneously to B, circa 900 AD.[157] A must be a copy of the edition edited by the patriarch, Photios, teacher of Arethas.[158][159][160]A probably had an initial volume containing the first 7 tetralogies which is now lost, but of which a copy was made, Codex Venetus append. class. 4, 1, which has the siglum T. The oldest manuscript for the seventh tetralogy is Codex Vindobonensis 54. suppl. phil. Gr. 7, with siglum W, with a supposed date in the twelfth century.[161] In total there are fifty-one such Byzantine manuscripts known, while others may yet be found.[162]
222
+
223
+ To help establish the text, the older evidence of papyri and the independent evidence of the testimony of commentators and other authors (i.e., those who quote and refer to an old text of Plato which is no longer extant) are also used. Many papyri which contain fragments of Plato's texts are among the Oxyrhynchus Papyri. The 2003 Oxford Classical Texts edition by Slings even cites the Coptic translation of a fragment of the Republic in the Nag Hammadi library as evidence.[163] Important authors for testimony include Olympiodorus the Younger, Plutarch, Proclus, Iamblichus, Eusebius, and Stobaeus.
224
+
225
+ During the early Renaissance, the Greek language and, along with it, Plato's texts were reintroduced to Western Europe by Byzantine scholars. In September or October 1484 Filippo Valori and Francesco Berlinghieri printed 1025 copies of Ficino's translation, using the printing press at the Dominican convent S.Jacopo di Ripoli.[164][165] Cosimo had been influenced toward studying Plato by the many Byzantine Platonists in Florence during his day, including George Gemistus Plethon.
226
+
227
+ The 1578 edition[166] of Plato's complete works published by Henricus Stephanus (Henri Estienne) in Geneva also included parallel Latin translation and running commentary by Joannes Serranus (Jean de Serres). It was this edition which established standard Stephanus pagination, still in use today.[167]
228
+
229
+ The Oxford Classical Texts offers the current standard complete Greek text of Plato's complete works. In five volumes edited by John Burnet, its first edition was published 1900–1907, and it is still available from the publisher, having last been printed in 1993.[168][169] The second edition is still in progress with only the first volume, printed in 1995, and the Republic, printed in 2003, available. The Cambridge Greek and Latin Texts and Cambridge Classical Texts and Commentaries series includes Greek editions of the Protagoras, Symposium, Phaedrus, Alcibiades, and Clitophon, with English philological, literary, and, to an extent, philosophical commentary.[170][171] One distinguished edition of the Greek text is E. R. Dodds' of the Gorgias, which includes extensive English commentary.[172][173]
230
+
231
+ The modern standard complete English edition is the 1997 Hackett Plato, Complete Works, edited by John M. Cooper.[174][175] For many of these translations Hackett offers separate volumes which include more by way of commentary, notes, and introductory material. There is also the Clarendon Plato Series by Oxford University Press which offers English translations and thorough philosophical commentary by leading scholars on a few of Plato's works, including John McDowell's version of the Theaetetus.[176] Cornell University Press has also begun the Agora series of English translations of classical and medieval philosophical texts, including a few of Plato's.[177]
232
+
233
+ The most famous criticism of Platonism is the Third Man Argument. Plato himself considered this objection, using "large" rather than "man", in the Parmenides dialogue.
234
+
235
+ Many recent philosophers have diverged from what some would describe as the ontological models and moral ideals characteristic of traditional Platonism. A number of these postmodern philosophers have thus appeared to disparage Platonism from more or less informed perspectives. Friedrich Nietzsche notoriously attacked Plato's "idea of the good itself" along with many fundamentals of Christian morality, which he interpreted as "Platonism for the masses" in one of his most important works, Beyond Good and Evil (1886). Martin Heidegger argued against Plato's alleged obfuscation of Being in his incomplete tome, Being and Time (1927), and the philosopher of science Karl Popper argued in The Open Society and Its Enemies (1945) that Plato's alleged proposal for a utopian political regime in the Republic was prototypically totalitarian.
236
+
237
+ The Dutch historian of science Eduard Jan Dijksterhuis criticizes Plato, stating that he was guilty of "constructing an imaginary nature by reasoning from preconceived principles and forcing reality more or less to adapt itself to this construction."[178] Dijksterhuis adds that one of the errors into which Plato had "fallen in an almost grotesque manner, consisted in an over-estimation of what unaided thought, i.e. without recourse to experience, could achieve in the field of natural science."[179]
238
+
239
+ Plato's Academy mosaic was created in the villa of T. Siminius Stephanus in Pompeii, around 100 BC to 100 AD. The School of Athens fresco by Raphael also features Plato as a central figure. The Nuremberg Chronicle depicts Plato and others as anachronistic schoolmen.
240
+
241
+ Plato's thought is often compared with that of his most famous student, Aristotle, whose reputation during the Western Middle Ages so completely eclipsed that of Plato that the Scholastic philosophers referred to Aristotle as "the Philosopher". However, in the Byzantine Empire, the study of Plato continued.
242
+
243
+ The only Platonic work known to western scholarship was the Timaeus, until translations were made after the fall of Constantinople in 1453.[180] George Gemistos Plethon brought Plato's original writings from Constantinople in the century of its fall. It is believed that Plethon passed a copy of the Dialogues to Cosimo de' Medici when in 1438 the Council of Ferrara, called to unify the Greek and Latin Churches, was adjourned to Florence, where Plethon then lectured on the relation and differences of Plato and Aristotle, and fired Cosimo with his enthusiasm;[181] Cosimo would supply Marsilio Ficino with Plato's text for translation to Latin. During the early Islamic era, Persian and Arab scholars translated much of Plato into Arabic and wrote commentaries and interpretations on Plato's, Aristotle's and other Platonist philosophers' works (see Al-Farabi, Avicenna, Averroes, Hunayn ibn Ishaq). Many of these commentaries on Plato were translated from Arabic into Latin and as such influenced Medieval scholastic philosophers.[182]
244
+
245
+ During the Renaissance, with the general resurgence of interest in classical civilization, knowledge of Plato's philosophy would become widespread again in the West. Many of the greatest early modern scientists and artists who broke with Scholasticism and fostered the flowering of the Renaissance, with the support of the Plato-inspired Lorenzo (grandson of Cosimo), saw Plato's philosophy as the basis for progress in the arts and sciences. His political views, too, were well-received: the vision of wise philosopher-kings of the Republic matched the views set out in works such as Machiavelli's The Prince.[citation needed] More problematic was Plato's belief in metempsychosis as well as his ethical views (on polyamory and euthanasia in particular), which did not match those of Christianity. It was Plethon's student Bessarion who reconciled Plato with Christian theology, arguing that Plato's views were only ideals, unattainable due to the fall of man.[183] The Cambridge Platonists were around in the 17th century.
246
+
247
+ By the 19th century, Plato's reputation was restored, and was at least on par with Aristotle's. Notable Western philosophers have continued to draw upon Plato's work since that time. Plato's influence has been especially strong in mathematics and the sciences. Plato's resurgence further inspired some of the greatest advances in logic since Aristotle, primarily through Gottlob Frege and his followers Kurt Gödel, Alonzo Church, and Alfred Tarski. Albert Einstein suggested that the scientist who takes philosophy seriously would have to avoid systematization and take on many different roles, and possibly appear as a Platonist or Pythagorean, in that such a one would have "the viewpoint of logical simplicity as an indispensable and effective tool of his research."[184]
248
+
249
+ The political philosopher and professor Leo Strauss is considered by some as the prime thinker involved in the recovery of Platonic thought in its more political, and less metaphysical, form. Strauss' political approach was in part inspired by the appropriation of Plato and Aristotle by medieval Jewish and Islamic political philosophers, especially Maimonides and Al-Farabi, as opposed to the Christian metaphysical tradition that developed from Neoplatonism. Deeply influenced by Nietzsche and Heidegger, Strauss nonetheless rejects their condemnation of Plato and looks to the dialogues for a solution to what all three latter-day thinkers acknowledge as 'the crisis of the West'.[citation needed]
250
+
251
+ W. V. O. Quine dubbed the problem of negative existentials "Plato's beard". Noam Chomsky dubbed the problem of knowledge Plato's problem. One author calls the definist fallacy the Socratic fallacy[citation needed].[relevant? – discuss]
252
+
253
+ More broadly, platonism (sometimes distinguished from Plato's particular view by the lowercase) refers to the view that there are many abstract objects. Still to this day, platonists take number and the truths of mathematics as the best support in favour of this view. Most mathematicians think, like platonists, that numbers and the truths of mathematics are perceived by reason rather than the senses yet exist independently of minds and people, that is to say, they are discovered rather than invented.[citation needed]
254
+
255
+ Contemporary platonism is also more open to the idea of there being infinitely many abstract objects, as numbers or propositions might qualify as abstract objects, while ancient Platonism seemed to resist this view, possibly because of the need to overcome the problem of "the One and the Many". Thus e. g. in the Parmenides dialogue, Plato denies there are Forms for more mundane things like hair and mud. However, he repeatedly does support the idea that there are Forms of artifacts, e. g. the Form of Bed. Contemporary platonism also tends to view abstract objects as unable to cause anything, but it is unclear whether the ancient Platonists felt this way.[citation needed]
256
+
257
+ Primary sources (Greek and Roman)
258
+
259
+ Secondary sources
en/4668.html.txt ADDED
@@ -0,0 +1,177 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Titus Maccius Plautus (/ˈplɔːtəs/; c. 254 – 184 BC), commonly known as Plautus, was a Roman playwright of the Old Latin period. His comedies are the earliest Latin literary works to have survived in their entirety. He wrote Palliata comoedia, the genre devised by the innovator of Latin literature, Livius Andronicus. The word Plautine /ˈplɔːtaɪn/ refers to both Plautus's own works and works similar to or influenced by his.
2
+
3
+ Not much is known about Titus Maccius Plautus' early life. It is believed that he was born in Sarsina, a small town in Emilia Romagna in northern Italy, around 254 BC.[1] According to Morris Marples, Plautus worked as a stage-carpenter or scene-shifter in his early years.[2] It is from this work, perhaps, that his love of the theater originated. His acting talent was eventually discovered, and he adopted the names "Maccius" (a clownish stock-character in popular farces) and "Plautus" (a term meaning either "flat-footed" or "flat-eared", like the ears of a hound).[3] Tradition holds that he made enough money to go into the nautical business, but that the venture collapsed. He is then said to have worked as a manual laborer and to have studied Greek drama—particularly the New Comedy of Menander—in his leisure. His studies allowed him to produce his plays, which were released between c. 205 and 184 BC. Plautus attained such popularity that his name alone became a hallmark of theatrical success.
4
+
5
+ Plautus's comedies are mostly adapted from Greek models for a Roman audience, and are often based directly on the works of the Greek playwrights. He reworked the Greek texts to give them a flavour that would appeal to the local Roman audiences. They are the earliest surviving intact works in Latin literature.
6
+
7
+ Plautus's epitaph read:
8
+
9
+ postquam est mortem aptus Plautus, Comoedia luget,
10
+ scaena est deserta, dein Risus, Ludus Iocusque
11
+ et Numeri innumeri simul omnes conlacrimarunt.
12
+
13
+ Since Plautus is dead, Comedy mourns,
14
+ The stage is deserted; then Laughter, Jest and Wit,
15
+ And all Melody's countless numbers wept together.
16
+
17
+ Plautus wrote around 130 plays,[4] of which 20 have survived intact, making him the most prolific ancient dramatist in terms of surviving work. Only short fragments, mostly quotations by later writers of antiquity, survive from 31 other plays. Despite this, the manuscript tradition of Plautus is poorer than that of any other ancient dramatist, something not helped by the failure of scholia on Plautus to survive. The chief manuscript of Plautus is a palimpsest, known as the Ambrosian palimpsest (A), in which Plautus' plays had been scrubbed out to make way for Augustine's Commentary on the Psalms. The monk who performed this was more successful in some places than others. He seems to have begun furiously, scrubbing out Plautus' alphabetically arranged plays with zest before growing lazy, then finally regaining his vigor at the end of the manuscript to ensure not a word of Plautus was legible. Although modern technology has allowed classicists to view much of the effaced material, plays whose titles begin with letters early in the alphabet have very poor texts (e.g. the end of Aulularia and start of Bacchides are lost), plays from the middle of the alphabet have decent texts, while only traces survive of the play Vidularia.
18
+
19
+ A second manuscript tradition is represented by manuscripts of the Palatine family, so called because two of its most important manuscripts were once kept in the library of the Elector Palatine in Heidelberg in Germany.[5] The archetype of this family is now lost but it can be reconstructed from various later manuscripts, some of them containing either only the first half or the second half of the plays. The most important manuscript of this group is "B", of the 10th or early 11th century, now kept in the Vatican library.
20
+
21
+ Of the other plays, only the titles and various fragments have survived.
22
+
23
+ The historical context within which Plautus wrote can be seen, to some extent, in his comments on contemporary events and persons. Plautus was a popular comedic playwright while Roman theatre was still in its infancy and still largely undeveloped. At the same time, the Roman Republic was expanding in power and influence.[citation needed]
24
+
25
+ Plautus was sometimes accused of teaching the public indifference and mockery of the gods. Any character in his plays could be compared to a god, and whether the comparison was meant to honour or to mock the character, such references were demeaning to the gods. They include a character comparing a mortal woman to a god, or saying he would rather be loved by a woman than by the gods. Pyrgopolynices from Miles Gloriosus (vs. 1265), in bragging about his long life, says he was born one day later than Jupiter. In Curculio, Phaedromus says "I am a God" when he first meets Planesium. In Pseudolus, Jupiter is compared to Ballio the pimp. It is not uncommon, too, for a character to scorn the gods, as seen in Poenulus and Rudens.
26
+
27
+ However, when a character scorns a god, it is usually a character of low standing, such as a pimp. Plautus perhaps does this to demoralize the characters.[original research?] Soldiers often heap ridicule on the gods. Young men, meant to represent the upper social class, often belittle the gods in their remarks. Parasites, pimps, and courtesans often praise the gods with scant ceremony.
28
+
29
+ Tolliver argues that drama both reflects and foreshadows social change. It is likely that there was already much skepticism about the gods in Plautus' era. Plautus did not make up or encourage irreverence to the gods, but reflected ideas of his time. The state controlled stage productions, and Plautus' plays would have been banned, had they been too risqué.[6]
30
+
31
+ The Second Punic War occurred from 218–201 BC; its central event was Hannibal's invasion of Italy. M. Leigh has devoted an extensive chapter about Plautus and Hannibal in his 2004 book, Comedy and the Rise of Rome. He says that "the plays themselves contain occasional references to the fact that the state is at arms...".[7] One good example is a piece of verse from the Miles Gloriosus, the composition date of which is not clear but which is often placed in the last decade of the 3rd century BC.[8] A. F. West believes that this is inserted commentary on the Second Punic War. In his article "On a Patriotic Passage in the Miles Gloriosus of Plautus", he states that the war "engrossed the Romans more than all other public interests combined".[9] The passage seems intended to rile up the audience, beginning with hostis tibi adesse, or "the foe is near at hand".[10]
32
+
33
+ At the time, the general Scipio Africanus wanted to confront Hannibal, a plan "strongly favored by the plebs".[11] Plautus apparently pushes for the plan to be approved by the senate, working his audience up with the thought of an enemy in close proximity and a call to outmaneuver him. Therefore, it is reasonable to say that Plautus, according to P. B. Harvey, was "willing to insert [into his plays] highly specific allusions comprehensible to the audience".[12] M. Leigh writes in his chapter on Plautus and Hannibal that "the Plautus who emerges from this investigation is one whose comedies persistently touch the rawest nerves in the audience for whom he writes".[13]
34
+
35
+ Later, on the heels of the conflict with Hannibal, Rome was preparing to embark on another military mission, this time in Greece. While they would eventually move on Philip V in the Second Macedonian War, there was considerable debate beforehand about the course Rome should take in this conflict. But starting this war would not be an easy task considering those recent struggles with Carthage—many Romans were too tired of conflict to think of embarking on another campaign. As W. M. Owens writes in his article "Plautus' Stichus and the Political Crisis of 200 B.C.", "There is evidence that antiwar feeling ran deep and persisted even after the war was approved."[14] Owens contends that Plautus was attempting to match the complex mood of the Roman audience riding the victory of the Second Punic War but facing the beginning of a new conflict.[15] For instance, the characters of the dutiful daughters and their father seem obsessed with the idea of officium, the duty one has to do what is right. Their speech is littered with words such as pietas and aequus, and they struggle to make their father fulfill his proper role.[16] The stock parasite in this play, Gelasimus, has a patron-client relationship with this family and offers to do any job in order to make ends meet; Owens puts forward that Plautus is portraying the economic hardship many Roman citizens were experiencing due to the cost of war.[17]
36
+
37
+ From his repeated emphasis on responsibility to his portrayal of the desperation of the lower class, Plautus establishes himself firmly on the side of the average Roman citizen. While he makes no specific reference to the possible war with Greece or the previous war (that might have been too dangerous), he does seem to push the message that the government should take care of its own people before attempting any other military actions.
38
+
39
+ In order to understand the Greek New Comedy of Menander and its similarities to Plautus, it is necessary to discuss, in juxtaposition with it, the days of Greek Old Comedy and its evolution into New Comedy. The ancient Greek playwright who best embodies Old Comedy is Aristophanes. A playwright of 5th century Athens, he wrote works of political satire such as The Wasps, The Birds, and The Clouds. Aristophanes' work is noted for its critical commentary on politics and societal values,[18] which is the key component of Old Comedy: consciousness of the world in which it is written, and analysis of this world. Comedy and theater were means for the political commentary of the time—the public conscience.
40
+
41
+ Unlike Aristophanes, Plautus avoided discussion of current events (in a narrow sense of the term) in his comedies.[19]
42
+
43
+ Greek New Comedy greatly differs from those plays of Aristophanes. The most notable difference, according to Dana F. Sutton, is that New Comedy, in comparison to Old Comedy, is "devoid of a serious political, social or intellectual content" and "could be performed in any number of social and political settings without risk of giving offense".[20] The risk-taking for which Aristophanes is known is noticeably lacking in the New Comedy plays of Menander. Instead, there is much more of a focus on the home and the family unit—something that the Romans, including Plautus, could easily understand and adopt for themselves later in history.
44
+
45
+ One main theme of Greek New Comedy is the father–son relationship. For example, in Menander's Dis Exapaton there is a focus on the betrayal between age groups and friends. The father-son relationship is very strong and the son remains loyal to the father. The relationship is always a focus, even if it's not the focus of every action taken by the main characters. In Plautus, on the other hand, the focus is still on the relationship between father and son, but we see betrayal between the two men that wasn't seen in Menander. There is a focus on the proper conduct between a father and son that, apparently, was so important to Roman society at the time of Plautus.
46
+
47
+ This becomes the main difference and, also, similarity between Menander and Plautus. They both address "situations that tend to develop in the bosom of the family".[20] Both authors, through their plays, reflect a patriarchal society in which the father-son relationship is essential to proper function and development of the household.[21] It is no longer a political statement, as in Old Comedy, but a statement about household relations and proper behavior between a father and his son. But the attitudes on these relationships seem much different—a reflection of how the worlds of Menander and Plautus differed.
48
+
49
+ There are differences not just in how the father-son relationship is presented, but also in the way in which Menander and Plautus write their poetry. William S. Anderson discusses the believability of Menander versus the believability of Plautus and, in essence, says that Plautus' plays are much less believable than those plays of Menander because they seem to be such a farce in comparison. He addresses them as a reflection of Menander with some of Plautus' own contributions. Anderson claims that there is unevenness in the poetry of Plautus that results in "incredulity and refusal of sympathy of the audience."[22]
50
+
51
+ The poetry of Menander and Plautus is best juxtaposed in their prologues. Robert B. Lloyd makes the point that "albeit the two prologues introduce plays whose plots are of essentially different types, they are almost identical in form..."[23] He goes on to address the specific style of Plautus that differs so greatly from Menander's. He says that the "verbosity of the Plautine prologues has often been commented upon and generally excused by the necessity of the Roman playwright to win his audience."[23] However, in both Menander and Plautus, word play is essential to their comedy. Plautus might seem more verbose, but what he lacks in physical comedy he makes up for with words, alliteration and paronomasia (punning).[24] See also "jokes and wordplay" below.
52
+
53
+ Plautus is well known for his devotion to puns, especially when it comes to the names of his characters. In Miles Gloriosus, for instance, the female concubine's name, Philocomasium, translates to "lover of a good party"—which is quite apt when we learn about the tricks and wild ways of this prostitute.
54
+
55
+ Plautus' characters—many of which seem to crop up in quite a few of his plays—also came from Greek stock, though they too received some Plautine innovations. Indeed, since Plautus was adapting these plays it would be difficult not to have the same kinds of characters—roles such as slaves, concubines, soldiers, and old men. By working with the characters that were already there but injecting his own creativity, as J.C.B. Lowe wrote in his article "Aspects of Plautus' Originality in the Asinaria", "Plautus could substantially modify the characterization, and thus the whole emphasis of a play."[25]
56
+
57
+ One of the best examples of this method is the Plautine slave, a form that plays a major role in quite a few of Plautus' works. The "clever slave" in particular is a very strong character; he not only provides exposition and humor, but also often drives the plot in Plautus' plays. C. Stace argues that Plautus took the stock slave character from New Comedy in Greece and altered it for his own purposes. In New Comedy, he writes, "the slave is often not much more than a comedic turn, with the added purpose, perhaps, of exposition".[26] This shows that there was precedent for this slave archetype, and obviously some of its old role continues in Plautus (the expository monologues, for instance). However, because Plautus found humor in slaves tricking their masters or comparing themselves to great heroes, he took the character a step further and created something distinct.[27]
58
+
59
+ Of the approximately 270 proper names in the surviving plays of Plautus, about 250 names are Greek.[28] William M. Seaman proposes that these Greek names would have delivered a comic punch to the audience because of its basic understanding of the Greek language.[29] This previous understanding of Greek language, Seaman suggests, comes from the "experience of Roman soldiers during the first and second Punic wars. Not only did men billeted in Greek areas have opportunity to learn sufficient Greek for the purpose of everyday conversation, but they were also able to see plays in the foreign tongue."[30] Having an audience with knowledge of the Greek language, whether limited or more expanded, allowed Plautus more freedom to use Greek references and words. Also, by using his many Greek references and showing that his plays were originally Greek, "It is possible that Plautus was in a way a teacher of Greek literature, myth, art and philosophy; so too was he teaching something of the nature of Greek words to people, who, like himself, had recently come into closer contact with that foreign tongue and all its riches."[31]
60
+
61
+ At the time of Plautus, Rome was expanding, and having much success in Greece. W.S. Anderson has commented that Plautus "is using and abusing Greek comedy to imply the superiority of Rome, in all its crude vitality, over the Greek world, which was now the political dependent of Rome, whose effete comic plots helped explain why the Greeks proved inadequate in the real world of the third and second centuries, in which the Romans exercised mastery".[32]
62
+
63
+ Plautus was known for the use of Greek style in his plays, as part of the tradition of the variation on a theme. This has been a point of contention among modern scholars. One argument states that Plautus writes with originality and creativity—the other, that Plautus is a copycat of Greek New Comedy and that he makes no original contribution to playwriting.[citation needed]
64
+
65
+ A single reading of the Miles Gloriosus leaves the reader with the notion that the names, place, and play are all Greek, but one must look beyond these superficial interpretations. W.S. Anderson would steer any reader away from the idea that Plautus' plays are somehow not his own or at least only his interpretation. Anderson says that, "Plautus homogenizes all the plays as vehicles for his special exploitation. Against the spirit of the Greek original, he engineers events at the end... or alter[s] the situation to fit his expectations."[33] Anderson's vehement reaction to the co-opting of Greek plays by Plautus seems to suggest that they are in no way like their originals. It seems more likely that Plautus was just experimenting with putting Roman ideas in Greek forms.
66
+
67
+ One idea that is important to recognize is that of contaminatio, which refers to the mixing of elements of two or more source plays. Plautus, it seems, is quite open to this method of adaptation, and quite a few of his plots seem stitched together from different stories. One excellent example is his Bacchides and its supposed Greek predecessor, Menander's Dis Exapaton. The original Greek title translates as "The Man Deceiving Twice", yet the Plautine version has three tricks.[34] V. Castellani commented that:
68
+
69
+ Plautus' attack on the genre whose material he pirated was, as already stated, fourfold. He deconstructed many of the Greek plays' finely constructed plots; he reduced some, exaggerated others of the nicely drawn characters of Menander and of Menander's contemporaries and followers into caricatures; he substituted for or superimposed upon the elegant humor of his models his own more vigorous, more simply ridiculous foolery in action, in statement, even in language.[35]
70
+
71
+ By exploring ideas about Roman loyalty, Greek deceit, and differences in ethnicity, "Plautus in a sense surpassed his model."[36] He was not content to rest solely on a loyal adaptation that, while amusing, was not new or engaging for Rome. Plautus took what he found but again made sure to expand, subtract, and modify. He seems to have followed the same path that Horace did, though Horace is much later, in that he is putting Roman ideas in Greek forms. He not only imitated the Greeks, but in fact distorted, cut up, and transformed the plays into something entirely Roman. In essence it is Greek theater colonized by Rome and its playwrights.
72
+
73
+ In Ancient Greece during the time of New Comedy, from which Plautus drew so much of his inspiration, there were permanent theaters that catered to the audience as well as the actor. The greatest playwrights of the day had quality facilities in which to present their work and, in a general sense, there was always enough public support to keep the theater running and successful. However, this was not the case in Rome during the time of the Republic, when Plautus wrote his plays. While there was public support for theater and people came to enjoy tragedy and comedy alike, no permanent theater existed in Rome until Pompey dedicated one in 55 BC in the Campus Martius.[37]
74
+
75
+ The lack of a permanent space was a key factor in Roman theater and Plautine stagecraft. In their introduction to the Miles Gloriosus, Hammond, Mack and Moskalew say that "the Romans were acquainted with the Greek stone theater, but, because they believed drama to be a demoralizing influence, they had a strong aversion to the erection of permanent theaters".[38] This worry rings true when considering the subject matter of Plautus' plays. The unreal becomes reality on stage in his work. T. J. Moore notes that, "all distinction between the play, production, and 'real life' has been obliterated [in Plautus' play Curculio]".[39] A place where social norms were upended was inherently suspect. The aristocracy was afraid of the power of the theater. It was merely by their good graces and unlimited resources that a temporary stage was built during specific festivals.
76
+
77
+ Roman drama, specifically Plautine comedy, was acted out on stage during the ludi or festival games. In his discussion of the importance of the ludi Megalenses in early Roman theater, John Arthur Hanson says that this particular festival "provided more days for dramatic representations than any of the other regular festivals, and it is in connection with these ludi that the most definite and secure literary evidence for the site of scenic games has come down to us".[40] Because the ludi were religious in nature, it was appropriate for the Romans to set up this temporary stage close to the temple of the deity being celebrated. S.M. Goldberg notes that "ludi were generally held within the precinct of the particular god being honored."[41]
78
+
79
+ T. J. Moore notes that "seating in the temporary theaters where Plautus' plays were first performed was often insufficient for all those who wished to see the play, that the primary criterion for determining who was to stand and who could sit was social status".[42] This is not to say that the lower classes did not see the plays; but they probably had to stand while watching. Plays were performed in public, for the public, with the most prominent members of the society in the forefront.
80
+
81
+ The wooden stages on which Plautus' plays appeared were shallow and long with three openings in respect to the scene-house. The stages were significantly smaller than any Greek structure familiar to modern scholars. Because theater was not a priority during Plautus' time, the structures were built and dismantled within a day. Even more practically, they were dismantled quickly due to their potential as fire-hazards.[43]
82
+
83
+ Often the geography of the stage and more importantly the play matched the geography of the city so that the audience would be well oriented to the locale of the play. Moore says that, "references to Roman locales must have been stunning for they are not merely references to things Roman, but the most blatant possible reminders that the production occurs in the city of Rome".[44] So, Plautus seems to have choreographed his plays somewhat true-to-life. To do this, he needed his characters to exit and enter to or from whatever area their social standing would befit.
84
+
85
+ Two scholars, V. J. Rosivach and N. E. Andrews, have made interesting observations about stagecraft in Plautus: V. J. Rosivach writes about identifying the side of the stage with both social status and geography. He says that, for example, "the house of the medicus lies offstage to the right. It would be in the forum or thereabouts that one would expect to find a medicus."[45] Moreover, he says that characters that oppose one another always have to exit in opposite directions. In a slightly different vein, N.E. Andrews discusses the spatial semantics of Plautus; she has observed that even the different spaces of the stage are thematically charged. She states:
86
+
87
+ Plautus' Casina employs these conventional tragic correlations between
88
+ male/outside and female/inside, but then inverts them in order to establish an even more complex relationship among genre, gender and dramatic space. In the Casina, the struggle for control between men and women... is articulated by characters' efforts to control stage movement into and out of the house.[46]
89
+
90
+ Andrews makes note of the fact that power struggle in the Casina is evident in the verbal comings and goings. The words of action and the way that they are said are important to stagecraft. The words denoting direction or action such as abeo ("I go off"), transeo ("I go over"), fores crepuerunt ("the doors creak"), or intus ("inside"), which signal any character's departure or entrance, are standard in the dialogue of Plautus' plays. These verbs of motion or phrases can be taken as Plautine stage directions since no overt stage directions are apparent. Often, though, in these interchanges of characters, there occurs the need to move on to the next act. Plautus then might use what is known as a "cover monologue". About this S.M. Goldberg notes that, "it marks the passage of time less by its length than by its direct and immediate address to the audience and by its switch from senarii in the dialogue to iambic septenarii. The resulting shift of mood distracts and distorts our sense of passing time."[47]
91
+
92
+ The small stages had a significant effect on the stagecraft of ancient Roman theater. Because of this limited space, there was also limited movement. Greek theater allowed for grand gestures and extensive action to reach the audience members who were in the very back of the theater. However, the Romans would have had to depend more on their voices than on large physicality. There was no orchestra available as there was for the Greeks, and this is reflected in the notable lack of a chorus in Roman drama. The replacement character that acts as the chorus would in Greek drama is often called the "prologue".[48]
93
+
94
+ Goldberg says that "these changes fostered a different relationship between actors and the space in which they performed and also between them and their audiences".[49] Actors were thrust into much closer audience interaction. Because of this, a certain acting style became required that is more familiar to modern audiences. Because they would have been in such close proximity to the actors, ancient Roman audiences would have wanted attention and direct acknowledgement from the actors.[50]
95
+
96
+ Because there was no orchestra, there was no space separating the audience from the stage. The audience could stand directly in front of the elevated wooden platform. This gave them the opportunity to look at the actors from a much different perspective. They would have seen every detail of the actor and heard every word he said. The audience member would have wanted that actor to speak directly to them. It was a part of the thrill of the performance, as it is to this day.[51]
97
+
98
+ Plautus' range of characters was created through his use of various techniques, but probably the most important is his use of stock characters and situations in his various plays. He incorporates the same stock characters constantly, especially when the character type is amusing to the audience. As Walter Juniper wrote, "Everything, including artistic characterization and consistency of characterization, were sacrificed to humor, and character portrayal remained only where it was necessary for the success of the plot and humor to have a persona who stayed in character, and where the persona by his portrayal contributed to humor."[52]
99
+
100
+ For example, in Miles Gloriosus, the titular "braggart soldier" Pyrgopolynices only shows his vain and immodest side in the first act, while the parasite Artotrogus exaggerates Pyrgopolynices' achievements, creating more and more ludicrous claims that Pyrgopolynices agrees to without question. These two are perfect examples of the stock characters of the pompous soldier and the desperate parasite that appeared in Plautine comedies. In dispensing with highly complex individuals, Plautus was supplying his audience with what it wanted, since "the audience to whose tastes Plautus catered was not interested in the character play,"[53] but instead wanted the broad and accessible humor offered by stock set-ups. The humor Plautus offered included "puns, word plays, distortions of meaning, or other forms of verbal humor"; "he usually puts them in the mouths of characters belonging to the lower social ranks, to whose language and position these varieties of humorous technique are most suitable,"[54] and this matched well with the stable of characters.
101
+
102
+ In his article "The Intriguing Slave in Greek Comedy," Philip Harsh gives evidence to show that the clever slave is not an invention of Plautus. While previous critics such as A. W. Gomme believed that the slave was "[a] truly comic character, the devisor of ingenious schemes, the controller of events, the commanding officer of his young master and friends, is a creation of Latin comedy," and that Greek dramatists such as Menander did not use slaves in such a way that Plautus later did, Harsh refutes these beliefs by giving concrete examples of instances where a clever slave appeared in Greek comedy.[55] For instance, in the works of Athenaeus, Alciphron, and Lucian there are deceptions that involve the aid of a slave, and in Menander's Dis Exapaton there was an elaborate deception executed by a clever slave that Plautus mirrors in his Bacchides. Evidence of clever slaves also appears in Menander's Thalis, Hypobolimaios, and from the papyrus fragment of his Perinthia. Harsh acknowledges that Gomme's statement was probably made before the discovery of many of the papyri that we now have. While it was not necessarily a Roman invention, Plautus did develop his own style of depicting the clever slave. With larger, more active roles, more verbal exaggeration and exuberance, the slave was moved by Plautus further into the front of the action.[56] Because of the inversion of order created by a devious or witty slave, this stock character was perfect for achieving a humorous response and the traits of the character worked well for driving the plot forward.
103
+
104
+ Another important Plautine stock character, discussed by K.C. Ryder, is the senex amator. A senex amator is classified as an old man who contracts a passion for a young girl and who, in varying degrees, attempts to satisfy this passion. In Plautus these men are Demaenetus (Asinaria), Philoxenus and Nicobulus (Bacchides), Demipho (Cistellaria), Lysidamus (Casina), Demipho (Mercator), and Antipho (Stichus). Periplectomenos (Miles Gloriosus) and Daemones (Rudens) are regarded as senes lepidi because they usually keep their feelings within a respectable limit. All of these characters have the same goal, to be with a younger woman, but all go about it in different ways, as Plautus could not be too redundant with his characters despite their already obvious similarities. What they have in common is the ridicule with which their attempts are viewed, the imagery that suggests that they are motivated largely by animal passion, the childish behavior, and the reversion to the love-language of their youth.[57]
105
+
106
+ In examining the female role designations of Plautus's plays, Z.M. Packman found that they are not as stable as their male counterparts: a senex will usually remain a senex for the duration of the play but designations like matrona, mulier, or uxor at times seem interchangeable. Most free adult women, married or widowed, appear in scene headings as mulier, simply translated as "woman". But in Plautus' Stichus the two young women are referred to as sorores, later mulieres, and then matronae, all of which have different meanings and connotations. Although there are these discrepancies, Packman tries to give a pattern to the female role designations of Plautus. Mulier is typically given to a woman of citizen class and of marriageable age or who has already been married. Unmarried citizen-class girls, regardless of sexual experience, were designated virgo. Ancilla was the term used for female household slaves, with Anus reserved for the elderly household slaves. A young woman who is unwed due to social status is usually referred to as meretrix or "courtesan". A lena, or adoptive mother, may be a woman who owns these girls.[58]
107
+
108
+ Like Packman, George Duckworth uses the scene headings in the manuscripts to support his theory about unnamed Plautine characters. There are approximately 220 characters in the 20 plays of Plautus. Thirty are unnamed in both the scene headings and the text and there are about nine characters who are named in the ancient text but not in any modern one. This means that about 18% of the total number of characters in Plautus are nameless. Most of the very important characters have names while most of the unnamed characters are of less importance. However, there are some abnormalities—the main character in Casina is not mentioned by name anywhere in the text. In other instances, Plautus will give a name to a character that only has a few words or lines. One explanation is that some of the names have been lost over the years; and for the most part, major characters do have names.[59]
109
+
110
+ The language and style of Plautus are not easy or simple. He wrote in a colloquial style far from the codified form of Latin that is found in Ovid or Virgil. This colloquial style is the everyday speech that Plautus would have been familiar with, yet that means that most students of Latin are unfamiliar with it. Adding to the unfamiliarity of Plautine language is the inconsistency of the irregularities that occur in the texts. In one of his prolific word-studies, A.W. Hodgman noted that:
111
+
112
+ the statements that one meets with, that this or that form is "common," or "regular," in Plautus, are frequently misleading, or even incorrect, and are usually unsatisfying.... I have gained an increasing respect for the manuscript tradition, a growing belief that the irregularities are, after all, in a certain sense regular. The whole system of inflexion—and, I suspect, of syntax also and of versification—was less fixed and stable in Plautus' time than it became later.[60]
113
+
114
+ The diction of Plautus, who used the colloquial speech of his own day, is distinctive and non-standard from the point of view of the later, classical period. M. Hammond, A.H. Mack, and W. Moskalew have noted in the introduction to their edition of the Miles Gloriosus that Plautus was "free from convention... [and] sought to reproduce the easy tone of daily speech rather than the formal regularity of oratory or poetry. Hence, many of the irregularities which have troubled scribes and scholars perhaps merely reflect the everyday usages of the careless and untrained tongues which Plautus heard about him."[61] Looking at the overall use of archaic forms in Plautus, one notes that they commonly occur in promises, agreements, threats, prologues, or speeches. Plautus's archaic forms are metrically convenient, but may also have had a stylistic effect on his original audience.
115
+
116
+ These forms are frequent and of too great a number for a complete list here,[62] but some of the most noteworthy features which from the classical perspective will be considered irregular or obsolete are:
117
+
118
+ These are the most common linguistic peculiarities (from the later perspective) in the plays of Plautus, some of them being also found in Terence, and noting them helps in the reading of his works and gives insight into early Roman language and interaction.
119
+
120
+ There are certain ways in which Plautus expressed himself in his plays, and these individual means of expression give a certain flair to his style of writing. The means of expression are not always specific to the writer, i.e., idiosyncratic, yet they are characteristic of the writer. Two examples of these characteristic means of expression are the use of proverbs and the use of Greek language in the plays of Plautus.
121
+
122
+ Plautus made use of proverbs in many of his plays. Proverbs would address a certain domain, such as law, religion, medicine, trades, crafts, and seafaring. Plautus' proverbs and proverbial expressions number into the hundreds. They sometimes appear alone or interwoven within a speech. Proverbs most commonly appear at the end of a soliloquy, where Plautus uses them for dramatic effect to emphasize a point.
123
+
124
+ Further interwoven into the plays of Plautus and just as common as the use of proverbs is the use of Greek within the texts of the plays. J. N. Hough suggests that Plautus's use of Greek is for artistic purposes and not simply because a Latin phrase will not fit the meter. Greek words are used when describing foods, oils, perfumes, etc. This is similar to the use of French terms in the English language such as garçon or rendezvous. These words give the language a French flair just as Greek did to the Latin-speaking Romans. Slaves or characters of low standing speak much of the Greek. One possible explanation for this is that many Roman slaves were foreigners of Greek origin.
125
+
126
+ Plautus would sometimes incorporate passages in other languages as well in places where it would suit his characters. A noteworthy example is the use of two prayers in Punic in Poenulus, spoken by the Carthaginian elder Hanno, which are significant to Semitic linguistics because they preserve the Carthaginian pronunciation of the vowels. Unlike Greek, Plautus most probably did not speak Punic himself, nor was the audience likely to understand it. The text of the prayers themselves was probably provided by a Carthaginian informant, and Plautus incorporated it to emphasize the authenticity and foreignness of Hanno's character.[66]
127
+
128
+ Plautus also used more technical means of expression in his plays. One tool that Plautus used for the expression of his servus callidus stock character was alliteration. Alliteration is the repetition of sounds in a sentence or clause; those sounds usually come at the beginning of words. In the Miles Gloriosus, the servus callidus is Palaestrio. As he speaks with the character Periplectomenus, he uses a significant amount of alliteration in order to assert his cleverness and, therefore, his authority. Plautus uses phrases such as "falsiloquom, falsicum, falsiiurium" (MG l. 191). These words express the deep and respectable knowledge that Palaestrio has of the Latin language. Alliteration can also occur at the endings of words. For example, Palaestrio says, "linguam, perfidiam, malitiam atque audaciam, confidentiam,
129
+ confirmitatem, fraudulentiam" (MG ll. 188-9). Also used, as seen above, is the technique of
130
+ assonance, which is the repetition of similar-sounding syllables.
131
+
132
+ Plautus' comedies abound in puns and word play, which is an important component of his poetry. One well known instance in the Miles Gloriosus is Sceledre, scelus. Some examples stand in the text in order to accentuate and emphasize whatever is being said, and others to elevate the artistry of the language. But a great number are made for jokes, especially riddle jokes, which feature a "knock knock - who's there?" pattern. Plautus is especially fond of making up and changing the meaning of words, as Shakespeare does later.[67]
133
+
134
+ Further emphasizing and elevating the artistry of the language of the plays of Plautus is the use of meter, which, simply put, is the rhythm of the verse. There is considerable debate over whether Plautus favored the natural word accent or the verse ictus (metrical stress). Plautus did not follow the meter of the Greek originals that he adapted for the Roman audience. Plautus used a great number of meters, but most frequently he used the trochaic septenarius. Iambic words, though common in Latin, are difficult to fit in this meter, and naturally occur at the end of verses. G.B. Conte has noted that Plautus favors the use of cantica instead of Greek meters. This vacillation between meter and word stress highlights the fact that Latin literature was still in its infancy, and that there was not yet a standard way to write verse.
135
+
136
+ The servus callidus functions as the exposition in many of Plautus' plays. According to C. Stace, "slaves in Plautus account for almost twice as much monologue as any other character... [and] this is a significant statistic; most of the monologues being, as they are, for purposes of humor, moralizing, or exposition of some kind, we can now begin to see the true nature of the slave's importance."[68] Because humor, vulgarity,[69] and "incongruity" are so much a part of the Plautine comedies, the slave becomes the essential tool to connect the audience to the joke through his monologue and direct connection to the audience. He is, then, not only a source for exposition and understanding, but connection—specifically, connection to the humor of the play, the playfulness of the play. The servus callidus is a character that, as McCarthy says, "draws the complete attention of the audience, and, according to C. Stace, 'despite his lies and abuse, claims our complete sympathy'".[70] He does this, according to some scholarship, using monologue, the imperative mood and alliteration—all of which are specific and effective linguistic tools in both writing and speaking.
137
+
138
+ The specific type of monologue (or soliloquy) in which a Plautine slave engages is the prologue. As opposed to simple exposition, according to N.W. Slater, "these...prologues...have a far more important function than merely to provide information."[71] Another way in which the servus callidus asserts his power over the play—specifically the other characters in the play—is through his use of the imperative mood. This type of language is used, according to E. Segal, for "the forceful inversion, the reduction of the master to an abject position of supplication ... the master-as-suppliant is thus an extremely important feature of the Plautine comic finale".[72] The imperative mood is therefore used in the complete role-reversal of the normal relationship between slave and master, and "those who enjoy authority and respect in the ordinary Roman world are unseated, ridiculed, while the lowliest members of society mount to their pedestals...the humble are in fact exalted".[73]
139
+
140
+ Intellectual and academic critics have often judged Plautus's work as crude; yet his influence on later literature is impressive—especially on two literary giants, Shakespeare and Molière.
141
+
142
+ Playwrights throughout history have looked to Plautus for character, plot, humor, and other elements of comedy. His influence ranges from similarities in idea to full literal translations woven into plays. The playwright's apparent familiarity with the absurdity of humanity and both the comedy and tragedy that stem from this absurdity have inspired succeeding playwrights centuries after his death. The most famous of these successors is Shakespeare—Plautus had a major influence on the Bard's early comedies.
143
+
144
+ Plautus was apparently read in the 9th century. His form was too complex to be fully understood, however, and, as indicated by the Terentius et delusor, it was unknown at the time if Plautus was writing in prose or verse.
145
+
146
+ W. B. Sedgwick has provided a record of the Amphitruo, perennially one of Plautus' most famous works. It was the most popular Plautine play in the Middle Ages, and was publicly performed during the Renaissance; it was the first Plautine play to be translated into English.
147
+
148
+ The influence of Plautus's plays was felt in the early 16th century. Limited records suggest that the first known university production of Plautus in England was of Miles Gloriosus at Oxford in 1522–3. The magnum jornale of Queens College contains a reference to a comoedia Plauti in either 1522 or 1523. This fits directly with comments made in the poems of Leland about the date of the production. The next recorded production of Miles Gloriosus was given by the Westminster School in 1564.[74] Other records also tell us about performances of the Menaechmi. As far as is known, performances were given in the house of Cardinal Wolsey by boys of St. Paul's School as early as 1527.[75]
149
+
150
+ Shakespeare borrowed from Plautus as Plautus borrowed from his Greek models. C.L. Barber says that "Shakespeare feeds Elizabethan life into the mill of Roman farce, life realized with his distinctively generous creativity, very different from Plautus' tough, narrow, resinous genius."[76]
151
+
152
+ The Plautine and Shakespearean plays that most parallel each other are, respectively, The Menaechmi and The Comedy of Errors. According to Marples, Shakespeare drew directly from Plautus "parallels in plot, in incident, and in character,"[77] and was undeniably influenced by the classical playwright's work. H. A. Watt stresses the importance of recognizing the fact that the "two plays were written under conditions entirely different and served audiences as remote as the poles."[78]
153
+
154
+ The differences between The Menaechmi and The Comedy of Errors are clear. In The Menaechmi, Plautus uses only one set of twins—twin brothers. Shakespeare, on the other hand, uses two sets of twins, which, according to William Connolly, "dilutes the force of [Shakespeare's] situations".[78] One suggestion is that Shakespeare got this idea from Plautus' Amphitruo, in which both twin masters and twin slaves appear.
155
+
156
+ It can be noted that the doubling is a stock situation of Elizabethan comedy. On the fusion between Elizabethan and Plautine techniques, T. W. Baldwin writes, "...Errors does not have the miniature unity of Menaechmi, which is characteristic of classic structure for comedy".[79] Baldwin notes that Shakespeare covers a much greater area in the structure of the play than Plautus does. Shakespeare was writing for an audience whose minds weren't restricted to house and home, but looked toward the greater world beyond and the role that they might play in that world.
157
+
158
+ Another difference between the audiences of Shakespeare and Plautus is that Shakespeare's audience was Christian. At the end of Errors, the world of the play is returned to normal when a Christian abbess intervenes in the feuding. Menaechmi, on the other hand, "is almost completely lacking in a supernatural dimension".[80] A character in Plautus' play would never blame an inconvenient situation on witchcraft—something that is quite common in Shakespeare.
159
+
160
+ The relationship between a master and a clever servant is also a common element in Elizabethan comedy. Shakespeare often includes foils for his characters, setting one off against the other. In Elizabethan romantic comedy, it is common for the plays to end with multiple marriages and couplings of pairs. This is something that is not seen in Plautine comedy. In the Comedy of Errors, Aegeon and Aemilia are separated, Antipholus and Adriana are at odds, and Antipholus and Luciana have not yet met. At the end, all the couples are happily together. By writing his comedies in a combination of Elizabethan and Plautine styles, Shakespeare helps to create his own brand of comedy, one that uses both styles.[78]
161
+
162
+ Also, Shakespeare uses the same kind of opening monologue so common in Plautus's plays. He even uses a "villain" in The Comedy of Errors of the same type as the one in Menaechmi, switching the character from a doctor to a teacher but keeping the character a shrewd, educated man.[78] Watt also notes that some of these elements appear in many of his works, such as Twelfth Night or A Midsummer Night's Dream, and had a deep impact on Shakespeare's writing.[78]
163
+
164
+ Later playwrights also borrowed Plautus's stock characters. One of the most important echoes of Plautus is the stock character of the parasite. Certainly the best example of this is Falstaff, Shakespeare's portly and cowardly knight. As J. W. Draper notes, the gluttonous Falstaff shares many characteristics with a parasite such as Artotrogus from Miles Gloriosus. Both characters seem fixated on food and where their next meal is coming from. But they also rely on flattery in order to gain these gifts, and both characters are willing to bury their patrons in empty praise.[81] Of course, Draper observes that Falstaff is also something of a boastful military man, but notes, "Falstaff is so complex a character that he may well be, in effect, a combination of interlocking types."[81]
165
+
166
+ As well as appearing in Shakespearean comedy, the Plautine parasite appears in one of the first English comedies. In Ralph Roister Doister, the character of Matthew Merrygreeke follows in the tradition of both Plautine Parasite and Plautine slave, as he both searches and grovels for food and also attempts to achieve his master's desires.[81] Indeed, the play itself is often seen as borrowing heavily from or even being based on the Plautine comedy Miles Gloriosus.[82]
167
+
168
+ H. W. Cole discusses the influence of Plautus and Terence on the Stonyhurst Pageants. The Stonyhurst Pageants are manuscripts of Old Testament plays that were probably composed after 1609 in Lancashire. Cole focuses in particular on Plautus' influence on the Pageant of Naaman. The playwright of this pageant breaks away from the traditional style of religious medieval drama and relies heavily on the works of Plautus. Overall, the playwright cross-references eighteen of the twenty surviving plays of Plautus and five of the six extant plays by Terence. It is clear that the author of the Stonyhurst Pageant of Naaman had a great knowledge of Plautus and was significantly influenced by his work.[83]
169
+
170
+ There is evidence of Plautine imitation in Edwardes' Damon and Pythias and Heywood's Silver Age as well as in Shakespeare's Errors. Heywood sometimes translated whole passages of Plautus. By being translated as well as imitated, Plautus was a major influence on comedy of the Elizabethan era.
171
+ In terms of plot, or perhaps more accurately plot device, Plautus served as a source of inspiration and also provided the possibility of adaptation for later playwrights. The many deceits that Plautus layered his plays with, giving the audience the feeling of a genre bordering on farce, appear in much of the comedy written by Shakespeare and Molière. For instance, the clever slave has important roles in both L'Avare and L'Etourdi, two plays by Molière, and in both drives the plot and creates the ruse just like Palaestrio in Miles Gloriosus.[84] These similar characters set up the same kind of deceptions in which many of Plautus' plays find their driving force, which is not a simple coincidence.
172
+
173
+ 20th-century musicals based on Plautus include A Funny Thing Happened on the Way to the Forum (book by Larry Gelbart and Burt Shevelove; music and lyrics by Stephen Sondheim).
174
+
175
+ Roman Laughter: The Comedy of Plautus, a 1968 book by Erich Segal, is a scholarly study of Plautus' work.
176
+
177
+ The British TV sitcom Up Pompeii uses situations and stock characters from Plautus's plays. In the first series, Willie Rushton plays Plautus, who pops up on occasion to provide comic comments on what is going on in the episode.
en/4669.html.txt ADDED
@@ -0,0 +1,93 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ The PlayStation 2 (officially branded as PS2) is a home video game console developed and marketed by Sony Computer Entertainment. It was first released in Japan on March 4, 2000, in North America on October 26, 2000, and in Europe and Australia on November 24, 2000, and is the successor to the original PlayStation, as well as the second installment in the PlayStation console line-up. A sixth-generation console, it competed with Sega's Dreamcast, Nintendo's GameCube, and Microsoft's original Xbox.
6
+
7
+ Announced in 1999, the PS2 offered backward-compatibility for its predecessor's DualShock controller, as well as its games. The PS2 is the best-selling video game console of all time, having sold over 155 million units worldwide, as confirmed by Sony.[13] Over 3,800 game titles have been released for the PS2, with over 1.5 billion copies sold.[14] Sony later manufactured several smaller, lighter revisions of the console known as Slimline models in 2004.
8
+
9
+ Even with the release of its successor, the PlayStation 3, the PS2 remained popular well into the seventh generation, and continued to be produced until 2013, when Sony finally announced it had been discontinued after over twelve years of production – one of the longest lifespans of a video game console. Despite the announcement, new games for the console continued to be produced until the end of 2013, including Final Fantasy XI: Seekers of Adoulin for Japan, FIFA 13 for North America, and Pro Evolution Soccer 2014 for Europe. Repair services for the system in Japan ended on September 7, 2018.
10
+
11
+ Though Sony has kept details of the PlayStation 2's development secret, work on the console began around the time that the original PlayStation was released (in late 1994).[15] Insiders stated that it was developed on the U.S. West Coast by former members of Argonaut Software.[16] By 1997, word had leaked to the press that the console would have backward-compatibility with the original PlayStation, a built-in DVD player, and Internet connectivity.[16][17] Sony announced the PlayStation 2 (PS2) on March 1, 1999. The video game console was positioned as a competitor to Sega's Dreamcast, the first sixth-generation console to be released, although ultimately the main rivals of the PS2 were Nintendo's GameCube and Microsoft's Xbox.[18][19] The Dreamcast itself launched very successfully in North America later that year, selling over 500,000 units within two weeks.[20]
12
+
13
+ Soon after the Dreamcast's North American launch, Sony unveiled the PlayStation 2 at the Tokyo Game Show on September 20, 1999.[21] Sony showed fully playable demos of upcoming PlayStation 2 games including Gran Turismo 2000 (later released as Gran Turismo 3: A-Spec) and Tekken Tag Tournament – which demonstrated the console's graphical capabilities and power.[22]
14
+
15
+ The PS2 was launched in March 2000 in Japan, October in North America, and November in Europe. Sales of the console, games and accessories pulled in $250 million on the first day, beating the $97 million made on the first day of the Dreamcast.[23] Directly after its release, it was difficult to find PS2 units on retailer shelves[24] due to manufacturing delays.[25] Another option was purchasing the console online through auction websites such as eBay, where people paid over a thousand dollars for the console.[26] The PS2 initially sold well partly on the basis of the strength of the PlayStation brand and the console's backward-compatibility, selling over 980,000 units in Japan by March 5, 2000, one day after launch.[27] This allowed the PS2 to tap the large install base established by the PlayStation – another major selling point over the competition. Later, Sony added new development kits for game developers and more PS2 units for consumers. The PS2's built-in functionality also expanded its audience beyond the gamer,[5] as its debut price was the same as or less than that of a standalone DVD player. This made the console a low-cost entry into the home theater market.[28]
16
+
17
+ The success of the PS2 at the end of 2000 caused Sega problems both financially and competitively, and Sega announced the discontinuation of the Dreamcast in March 2001, just 18 months after its successful Western launch. Although the Dreamcast still received support through 2001, the PS2 remained without new sixth-generation competition for over six months before it faced competition from new rivals: Nintendo's GameCube and Microsoft's Xbox. Many analysts predicted a close three-way matchup among the three consoles. The Xbox had the most powerful hardware, while the GameCube was the least expensive console, and Nintendo changed its policy to encourage third-party developers. While the PlayStation 2 theoretically had the weakest specification of the three, it had a head start due to its installed base and strong developer commitment, as well as a built-in DVD player (the Xbox required an adapter, while the GameCube lacked support entirely).[29] While the PlayStation 2's initial games lineup was considered mediocre, this changed during the 2001 holiday season with the release of several blockbuster games that maintained the PS2's sales momentum and held off its newer rivals. Sony also countered the Xbox by temporarily securing PlayStation 2 exclusives for highly anticipated games such as the Grand Theft Auto series and Metal Gear Solid 2: Sons of Liberty.[30]
18
+
19
+ Sony cut the price of the console in May 2002 from US$299 to $199 in North America,[31] making it the same price as the GameCube and $100 less than the Xbox. It also planned to cut the price in Japan around that time.[32] It cut the price twice in Japan in 2003.[33] In 2006, Sony cut the cost of the console in anticipation of the release of the PlayStation 3.[33]
20
+
21
+ Sony, unlike Sega with its Dreamcast, originally placed little emphasis on online gaming during the console's first few years, although that changed upon the launch of the online-capable Xbox. Coinciding with the release of Xbox Live, Sony released the PlayStation Network Adapter in late 2002, with several online first-party titles released alongside it, such as SOCOM: U.S. Navy SEALs, to demonstrate its active support for Internet play.[34] Sony also advertised heavily, and its online model had the support of Electronic Arts (EA); EA did not offer online Xbox titles until 2004. Although Sony and Nintendo both started out late, and although both followed a decentralized model of online gaming in which responsibility for providing servers fell to individual game developers, Sony's moves made online gaming a major selling point of the PS2.
22
+
23
+ In September 2004, in time for the launch of Grand Theft Auto: San Andreas, Sony revealed a newer, slimmer PS2. In preparation for the launch of the new models (SCPH-700xx-9000x), Sony stopped making the older models (SCPH-3000x-500xx) to let the distribution channel empty its stock of the units.[citation needed] An apparent manufacturing issue – Sony reportedly underestimated demand – caused an initial slowdown in producing the new unit, which led in part to shortages between the time the old units were cleared out and the new units were ready. The issue was compounded in Britain when a Russian oil tanker became stuck in the Suez Canal, blocking a ship from China carrying PS2s bound for the UK. During one week in November, British sales totalled 6,000 units – compared to 70,000 units a few weeks prior.[35] There were shortages in more than 1,700 stores in North America on the day before Christmas.[36]
24
+
25
+
26
+
27
+ Software for the PlayStation 2 was distributed primarily on DVD-ROM, with some titles being published on CD-ROM. In addition, the console can play audio CDs and DVD movies and is backward-compatible with almost all original PlayStation games. The PlayStation 2 also supports PlayStation memory cards and controllers, although original PlayStation memory cards will only work with original PlayStation games[37] and the controllers may not support all functions (such as analog buttons) for PlayStation 2 games.
28
+
29
+ The standard PlayStation 2 memory card has an 8 MB capacity.[38] There are a variety of non-Sony manufactured memory cards available for the PlayStation 2, allowing for a memory capacity larger than the standard 8 MB.
30
+
31
+ The console also features two USB ports and one IEEE 1394 (FireWire) port (SCPH-10000 to 3900x models only). A hard disk drive can be installed in an expansion bay on the back of the console and is required to play certain games, notably the popular Final Fantasy XI.[39] The expansion bay is only available on certain models.
32
+
33
+ The console uses the Emotion Engine CPU, custom-designed by Sony and Toshiba and based on the MIPS architecture with a floating point performance of 6.2 GFLOPS.[40] The GPU is likewise custom-designed for the console and called the Graphics Synthesizer, with a fillrate of 2.4 gigapixels/second, capable of rendering up to 75 million polygons per second.[41] When accounting for features such as lighting, texture mapping, artificial intelligence, and game physics, it has a real-world performance of 3 million to 16 million polygons per second.[41][42]
34
+
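+ As a rough, illustrative calculation (the 30 and 60 frames-per-second rates below are assumptions chosen for illustration, not figures quoted above), these throughput numbers can be divided by a target frame rate to get an approximate per-frame polygon budget. A minimal C sketch:
+
+ /* Back-of-the-envelope sketch: converts the polygon throughput figures
+  * quoted above (3-16 million/sec real-world, 75 million/sec peak) into
+  * approximate per-frame budgets at assumed frame rates of 30 and 60 fps. */
+ #include <stdio.h>
+
+ int main(void) {
+     const double rates[]  = {3e6, 16e6, 75e6};   /* polygons per second */
+     const char  *labels[] = {"real-world low", "real-world high", "peak"};
+     const int    fps[]    = {30, 60};            /* assumed frame rates */
+
+     for (int i = 0; i < 3; i++)
+         for (int j = 0; j < 2; j++)
+             printf("%-16s at %2d fps: ~%.0f polygons per frame\n",
+                    labels[i], fps[j], rates[i] / fps[j]);
+     return 0;
+ }
+
+ At 60 frames per second, for example, the real-world range of 3 to 16 million polygons per second works out to roughly 50,000 to about 267,000 polygons per frame.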
35
+ The PlayStation 2 natively outputs video at resolutions from 480i to 480p on SDTV and HDTV sets, while some games, such as Gran Turismo 4 and Tourist Trophy, are known to support an up-scaled 1080i resolution.[43] Video can be output using any of the following standards: composite video[44] (480i), S-Video[45] (480i), RGB[46] (480i/p), VGA[47] (for progressive-scan games and PS2 Linux only), YPbPr component video[48] (which displays most original PlayStation games in their native 240p mode, which most HDTV sets do not support[49]), and D-Terminal.[50] Cables are available for all of these signal types, and they also output analog stereo audio. Additionally, an RF modulator is available for the system to connect to older TVs.[51]
36
+
37
+ Digital (S/PDIF) audio may also be output by the console via its TOSLINK connector,[52] which outputs 2.0-channel PCM as well as 5.1- and 6.1-channel sound in the Dolby Digital, Dolby Digital Surround EX, DTS, and DTS-ES formats.
38
+
39
+
40
+
41
+
42
+
43
+ PlayStation 2 users had the option to play select games over the Internet, using dial-up or a broadband Internet connection. The PlayStation 2 Network Adaptor was required for the original models, while the slim models included networking ports on the console. Instead of having a unified, subscription-based online service like Xbox Live as competitor Microsoft later chose for its Xbox console, online multiplayer functionality on the PlayStation 2 was the responsibility of the game publisher and ran on third-party servers. Many games that supported online play exclusively supported broadband Internet access.
44
+
45
+ The PS2 has undergone many revisions[53], some only of internal construction and others involving substantial external changes.
46
+
47
+ The PS2 is primarily differentiated between models featuring the original "fat" case design and "slimline" models, which were introduced at the end of 2004. In 2010, the Sony Bravia KDL-22PX300 was made available to consumers. It was a 22" HD-Ready television which incorporated a built-in PlayStation 2.[54][55]
48
+
49
+ The PS2 standard color is matte black. Several variations in color were produced in different quantities and regions, including ceramic white, light yellow, metallic blue (aqua), metallic silver, navy (star blue), opaque blue (astral blue), opaque black (midnight black), pearl white, sakura purple, satin gold, satin silver, snow white, super red, transparent blue (ocean blue), and also Limited Edition color Pink, which was distributed in some regions such as Oceania, and parts of Asia.[56][57][58]
50
+
51
+ In September 2004, Sony unveiled its third major hardware revision. Available in late October 2004, it was smaller, thinner, and quieter than the original versions and included a built-in Ethernet port (in some markets it also had an integrated modem). Due to its thinner profile, it did not contain the 3.5" expansion bay and therefore did not support the internal hard disk drive. It also lacked an internal power supply until a later revision (excluding the Japan version), similar to the GameCube, and had a modified Multitap expansion. The removal of the expansion bay was criticized as a limitation due to the existence of titles such as Final Fantasy XI, which required the use of the HDD.
52
+
53
+ Sony also manufactured a consumer device called the PSX that can be used as a digital video recorder and DVD burner in addition to playing PS2 games. The device was released in Japan on December 13, 2003, and was the first Sony product to include the XrossMediaBar interface. It did not sell well in the Japanese market and was not released anywhere else.[59]
54
+
55
+ A class-action lawsuit was filed against Sony Computer Entertainment America Inc. on July 16, 2002, in the Superior Court of California, County of San Mateo. The lawsuit addressed consumer reports of inappropriate "no disc" error (disc read error) messages and other problems associated with playing DVDs and CDs on the PlayStation 2.
56
+
57
+ Sony settled its "disc read error" lawsuit by compensating the affected customers with US$25, a free game from a specified list, and the reduced cost repair or replacement (at SCEA's discretion) of the damaged system. This settlement was subject to the courts' approval, and hearings began in the US and Canada on April 28, 2006, and May 11, 2006, respectively.[60]
58
+
59
+ PlayStation 2 software is distributed on CD-ROM and DVD-ROM; the two formats are differentiated by the color of their discs' bottoms, with CD-ROMs being blue and DVD-ROMs being silver. The PlayStation 2 offered some particularly high-profile exclusive games. Most main entries in the Grand Theft Auto, Final Fantasy, and Metal Gear Solid series were released exclusively for the console. Several prolific series got their start on the PlayStation 2, including God of War, Ratchet & Clank, Jak and Daxter, Devil May Cry, Kingdom Hearts, and Sly Cooper. Grand Theft Auto: San Andreas was the best-selling game on the console.
60
+
61
+ Game releases peaked in 2004, but declined with the release of the PlayStation 3 in 2006. The last new games for the console were Final Fantasy XI: Seekers of Adoulin in Asia, FIFA 13 in North America, and Pro Evolution Soccer 2014 in Europe. As of June 30, 2007, a total of 10,035 software titles had been released worldwide including games released in multiple regions as separate titles.[61]
62
+
63
+ Initial reviews in 2000 of the PlayStation 2 acclaimed the console, with reviewers commending its hardware and graphics capabilities, its ability to play DVDs, and the system's backwards compatibility with games and hardware for the original PlayStation. Early points of criticism included the lack of online support at the time, its inclusion of only two controller ports, and the system's price at launch compared to the Dreamcast in 2000.[62][63] PC Magazine in 2001 called the console "outstanding", praising its "noteworthy components" such as the Emotion Engine CPU, 32MB of RAM, support for IEEE 1394 (branded as "i.LINK" by Sony and "FireWire" by Apple), and the console's two USB ports while criticizing its "expensive" games and its support for only two controllers without the multitap accessory.[64]
64
+
65
+ Later reviews, especially after the launch of the competing GameCube and Xbox systems, continued to praise the PlayStation 2's large game library and DVD playback, while routinely criticizing the PlayStation 2's lesser graphics performance compared to the newer systems and its rudimentary online service compared to Xbox Live. In 2002, CNET rated the console 7.3 out of 10, calling it a "safe bet" despite not being the "newest or most powerful", noting that the console "yields in-game graphics with more jagged edges". CNET also criticized the DVD playback functionality, claiming that the console's video quality was "passable" and that the playback controls were "rudimentary", recommending that users purchase a remote control. The console's two controller ports and the expense of its memory cards were also points of criticism.[65]
66
+
67
+ The slim model of the PlayStation 2 received positive reviews, especially for its compact size and built-in networking. The slim console's requirement for a separate power adapter was often criticized, while the top-loading disc drive was often noted as being far less likely to break than the tray-loading drive of the original model.[66][67]
68
+
69
+ Demand for the PlayStation 2 remained strong throughout much of its lifespan, selling over 1.4 million units in Japan by March 31, 2000. Over 10.6 million units were sold worldwide by March 31, 2001.[68] In 2005, the PlayStation 2 became the fastest game console to reach 100 million units shipped, accomplishing the feat within 5 years and 9 months from its launch; this was surpassed 4 years later when the Nintendo DS reached 100 million shipments in 4 years and 5 months from its launch.[69] By July 2009, the system had sold 138.8 million units worldwide, with 51 million of those units sold in PAL regions.[70]
70
+
71
+ Overall, over 155 million PlayStation 2 units were sold worldwide by March 31, 2012, the year Sony officially stopped supplying updated sales numbers of the system.[71]
72
+
73
+ The PlayStation 2's DualShock 2 controller is largely identical to the PlayStation's DualShock, with the same basic functionality. However, it includes analog pressure sensitivity on the face, shoulder and D-pad buttons, replacing the digital buttons of the original.[72] (These buttons later became digital again with the release of the DualShock 4.[73]) Like its predecessor, the DualShock 2 controller has force feedback, or "vibration" functionality. It is lighter and includes two more levels of vibration.
74
+
75
+ Optional hardware includes additional DualShock or DualShock 2 controllers, a PS2 DVD remote control, an internal or external hard disk drive (HDD), a network adapter, horizontal and vertical stands, PlayStation or PS2 memory cards, the multitap for PlayStation or PS2, a USB motion camera (EyeToy), a USB keyboard and mouse, and a headset.
76
+
77
+ The original PS2 multitap (SCPH-10090) cannot be plugged into the newer slim models, as the multitap connects to the memory card slot as well as the controller slot, and the memory card slot on the slimline is shallower. New slim-design multitaps (SCPH-70120) were manufactured for these models; however, third-party adapters also exist that permit original multitaps to be used.
78
+
79
+ Early versions of the PS2 could be networked via an i.LINK port, though this had little game support and was dropped. Some third-party manufacturers have created devices that allow disabled people to access the PS2 through ordinary switches and similar assistive inputs.
80
+
81
+ Some third-party companies, such as JoyTech, have produced LCD monitor and speaker attachments for the PS2, which attach to the back of the console. These allow users to play games without access to a television as long as there is access to mains electricity or a similar power source. These screens can fold down onto the PS2 in a similar fashion to laptop screens.
82
+
83
+ There are many accessories for music games, such as dance pads for the Dance Dance Revolution, In the Groove, and Pump It Up titles and for High School Musical 3: Senior Year Dance; Konami microphones for use with the Karaoke Revolution games; dual microphones sold with, and used exclusively for, the SingStar games; various "guitar" controllers for the Guitar Freaks and Guitar Hero series; a drum set controller, sold by itself or in a box set with a "guitar" controller and a USB microphone, for use with the Rock Band and Guitar Hero series (World Tour and newer); and a taiko drum controller for Taiko: Drum Master.
84
+
85
+ Specialized controllers include light guns (GunCon), fishing rod and reel controllers, a Dragon Quest VIII "slime" controller, a Final Fantasy X-2 "Tiny Bee" dual pistol controller, an Onimusha 3 katana controller, and a Resident Evil 4 chainsaw controller.
86
+
87
+ Unlike the PlayStation, which requires the use of an official Sony PlayStation Mouse to play mouse-compatible games, the few PS2 games with mouse support work with a standard USB mouse as well as a USB trackball.[74] In addition, some of these games also support the usage of a USB keyboard for text input, game control (in lieu of a DualShock or DualShock 2 gamepad, in tandem with a USB mouse), or both.
88
+
89
+ Using homebrew programs, it is possible to play various audio and video file formats on a PS2. Homebrew programs can also be used to play patched backups of original PS2 DVD games on unmodified consoles, and to install retail discs to an installed hard drive on older models. Homebrew emulators of older computer and gaming systems have been developed for the PS2.[75]
90
+
91
+ Sony released a Linux-based operating system, Linux for PlayStation 2, for the PS2 in a package that also includes a keyboard, mouse, Ethernet adapter and HDD. In Europe and Australia, the PS2 comes with a free Yabasic interpreter on the bundled demo disc. This allows users to create simple programs for the PS2. A port of the NetBSD project and BlackRhino GNU/Linux, an alternative Debian-based distribution, are also available for the PS2.
92
+
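+ As a purely illustrative sketch (not taken from the Linux for PlayStation 2 or Yabasic documentation), the kind of simple program such a distribution enables is ordinary user-space code built with its GCC toolchain; on a PS2, the uname() call below would report a MIPS-based machine, matching the Emotion Engine's MIPS heritage noted earlier:
+
+ /* Minimal user-space C program of the sort that could be compiled under a
+  * Linux distribution for the PS2. It prints the kernel name, release and
+  * hardware architecture reported by the running system. */
+ #include <stdio.h>
+ #include <sys/utsname.h>
+
+ int main(void) {
+     struct utsname info;
+     if (uname(&info) != 0) {    /* fill in system identification */
+         perror("uname");
+         return 1;
+     }
+     printf("OS: %s %s\n", info.sysname, info.release);
+     printf("Architecture: %s\n", info.machine);
+     return 0;
+ }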
93
+ A successor, the PlayStation 3, was released in Japan and North America in November 2006 and in Europe in March 2007.
en/467.html.txt ADDED
@@ -0,0 +1,59 @@
1
+
2
+
3
+ Australopithecus (/ˌɒstrələˈpɪθɪkəs/, OS-trə-lə-PITH-i-kəs;[1] from Latin australis, meaning 'southern', and Greek πίθηκος (pithekos), meaning 'ape'; singular: australopith) is a genus of hominins that existed in Africa from around 4.2[2] to 1.9 million years ago and from which the genus Homo, including modern humans, is considered to be descended. Australopithecus is a member of the subtribe Australopithecina,[3][4] which includes Paranthropus, Kenyanthropus,[5] Ardipithecus[5] and Praeanthropus,[6] though the term "australopithecine" is sometimes used to refer only to members of Australopithecus. Species include: A. garhi, A. africanus, A. sediba, A. afarensis, A. anamensis, A. bahrelghazali and A. deyiremeda. Debate exists as to whether other hominid species of this time, such as Paranthropus ('robust australopithecines'), belong to a separate genus or to Australopithecus ('gracile australopiths'), or whether some Australopithecus species should be reclassified into new genera.[2]
4
+
5
+ From palaeontological and archaeological evidence, Australopithecus apparently evolved in eastern Africa around 4.2 million years ago before spreading throughout the continent and eventually becoming extinct 1.9 million years ago (or 1.2 million years ago if Paranthropus is included).[7] While none of the groups normally directly assigned to this group survived, Australopithecus does not appear to be literally extinct (in the sense of having no living descendants), as the genus Homo probably emerged from an Australopithecus species[2][8][9][10][11] at some time between 3 and 2 million years ago.[12]
6
+
7
+ Australopithecus possessed two of three duplicated genes derived from SRGAP2 roughly 3.4 and 2.4 million years ago (SRGAP2B and SRGAP2C), the second of which contributed to the increase in number and migration of neurons in the human brain.[13][14] Significant changes to the hand first appear in the fossil record of later A. afarensis about 3 million years ago (fingers shortened relative to thumb and changes to the joints between the index finger and the trapezium and capitate).[15]
8
+
9
+ The first Australopithecus specimen, the type specimen, was discovered in 1924 in a lime quarry by workers at Taung, South Africa. The specimen was studied by the Australian anatomist Raymond Dart, who was then working at the University of the Witwatersrand in Johannesburg. The fossil skull was from a three-year-old bipedal primate that he named Australopithecus africanus. The first report was published in Nature in February 1925. Dart realised that the fossil contained a number of humanoid features, and so he came to the conclusion that this was an early human ancestor.[16] Later, Scottish paleontologist Robert Broom and Dart set out to search for more early hominin specimens, and several more A. africanus remains were recovered from various sites. Initially, anthropologists were largely hostile to the idea that these discoveries were anything but apes, though this changed during the late 1940s.[16] In 1950, evolutionary biologist Ernst Walter Mayr said that all bipedal apes should be classified into the genus Homo, and considered renaming Australopithecus to Homo transvaalensis.[17] However, the contrary view taken by Robinson in 1954, excluding australopiths from Homo, became the prevalent view.[17] The first australopithecine fossil discovered in eastern Africa was an A. boisei skull excavated by Mary Leakey in 1959 in Olduvai Gorge, Tanzania. Since then, the Leakey family has continued to excavate the gorge, uncovering further evidence for australopithecines, as well as for Homo habilis and Homo erectus.[16] The scientific community took 20 more years to widely accept Australopithecus as a member of the human family tree.
10
+
11
+ In 1997, an almost complete Australopithecus skeleton with skull was found in the Sterkfontein caves of Gauteng, South Africa. It is now called "Little Foot" and it is around 3.7 million years old. It was named Australopithecus prometheus[18][19] which has since been placed within A. africanus. Other fossil remains found in the same cave in 2008 were named Australopithecus sediba, which lived 1.9 million years ago. A. africanus probably evolved into A. sediba, which some scientists think may have evolved into H. erectus,[20] though this is heavily disputed.
12
+
13
+ A. afarensis, A. anamensis, and A. bahrelghazali were split off into the genus Praeanthropus, but this genus has been largely dismissed.[21]
14
+
15
+ The genus Australopithecus is considered to be a wastebasket taxon, whose members are united by their similar physiology rather than close relations with each other over other hominin genera. As such, the genus is paraphyletic, not consisting of a common ancestor and all of its descendants, and is considered an ancestor to Homo, Kenyanthropus, and Paranthropus.[22][23][24][25] Resolving this problem would have major ramifications for the nomenclature of all descendant species. Possibilities suggested have been to rename Homo sapiens to Australopithecus sapiens[26] (or even Pan sapiens[27][28]), or to move some Australopithecus species into new genera.[29]
16
+
17
+ Opinions differ as to whether Paranthropus should be included within Australopithecus,[30] and Paranthropus has been suggested, along with Homo, to have developed as part of a clade with A. africanus at its basal root.[17] The members of Paranthropus appear to have a distinct robustness compared to the gracile australopiths, but it is unclear whether this indicates that all members stemmed from a common ancestor or that they independently evolved similar traits by occupying a similar niche.[31]
18
+
19
+ Occasional suggestions have been made (by Cele-Conde et al. 2002 and 2007) that A. africanus should also be moved to Paranthropus.[2] On the basis of craniodental evidence, Strait and Grine (2004) suggest that A. anamensis and A. garhi should be assigned to new genera.[32] It is debated whether or not A. bahrelghazali is simply a western version of A. afarensis and not a separate species.[33][34]
20
+
21
+ Taxonomic assessments place Australopithecus within the great apes, with Paranthropus and Homo emerging from among the Australopithecus species.[35] Under conventional definitions, the genus Australopithecus is assessed to be highly paraphyletic, i.e. it is not a natural group, as the genera Kenyanthropus, Paranthropus and Homo emerge from within it.[36][37][38]
22
+
23
+ A. anamensis may have descended from or was closely related to Ardipithecus ramidus.[39] A. anamensis shows some similarities to both Ar. ramidus and Sahelanthropus.[39]
24
+
25
+ Australopiths shared several traits with modern apes and humans, and were widespread throughout Eastern and Northern Africa by 3.5 million years ago (mya). The earliest evidence of fundamentally bipedal hominins is a 3.6 Ma fossil trackway in Laetoli, Tanzania, which bears a remarkable similarity to those of modern humans. The footprints have generally been classified as australopith, as they are the only form of prehuman hominins known to have existed in that region at that time.[40]
26
+
27
+ Australopithecus anamensis, A. afarensis, and A. africanus are among the most famous of the extinct hominins. A. africanus was once considered to be ancestral to the genus Homo (in particular Homo erectus). However, fossils assigned to the genus Homo have been found that are older than A. africanus.[citation needed] Thus, the genus Homo either split off from the genus Australopithecus at an earlier date (the latest common ancestor being either A. afarensis[citation needed] or an even earlier form, possibly Kenyanthropus[citation needed]), or both developed from a yet possibly unknown common ancestor independently.[citation needed]
28
+
29
+ According to the Chimpanzee Genome Project, the human–chimpanzee last common ancestor existed about five to six million years ago, assuming a constant rate of mutation. However, hominin species dated to earlier than this could call that estimate into question.[41] Sahelanthropus tchadensis, commonly called "Toumai", is about seven million years old and Orrorin tugenensis lived at least six million years ago. Since little is known of them, they remain controversial among scientists, as the molecular clock in humans indicates that humans and chimpanzees had a genetic split at least a million years later.[citation needed] One theory suggests that the human and chimpanzee lineages diverged somewhat at first, then some populations interbred around one million years after diverging.[41]
30
+
31
+ The brains of most species of Australopithecus were roughly 35% of the size of a modern human brain,[42] with an average endocranial volume of 466 cc (28.4 cu in).[12] Although this is more than the average endocranial volume of chimpanzee brains, at 360 cc (22 cu in),[12] the earliest australopiths (A. anamensis) appear to have been within the chimpanzee range,[39] whereas some later australopith specimens have a larger endocranial volume than that of some early Homo fossils.[12]
32
+
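+ As a quick consistency check (a back-of-the-envelope calculation, not taken from the cited sources), the two figures above jointly imply a modern human average endocranial volume of about \frac{466\ \mathrm{cc}}{0.35} \approx 1330\ \mathrm{cc}, which is in line with commonly cited modern human averages of roughly 1,300–1,400 cc.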
33
+ Most species of Australopithecus were diminutive and gracile, usually standing 1.2 to 1.4 m (3 ft 11 in to 4 ft 7 in) tall. It is possible that they exhibited a considerable degree of sexual dimorphism, males being larger than females.[43] In modern populations, males are on average a mere 15% larger than females, while in Australopithecus, males could be up to 50% larger than females by some estimates. However, the degree of sexual dimorphism is debated due to the fragmentary nature of australopith remains.[43]
34
+
35
+ According to A. Zihlman, Australopithecus body proportions closely resemble those of bonobos (Pan paniscus),[44] leading evolutionary biologist Jeremy Griffith to suggest that bonobos may be phenotypically similar to Australopithecus.[45] Furthermore, thermoregulatory models suggest that australopiths were fully hair covered, more like chimpanzees and bonobos, and unlike humans.[46]
36
+
37
+ The fossil record seems to indicate that Australopithecus is ancestral to Homo and modern humans. It was once assumed that large brain size had been a precursor to bipedalism, but the discovery of Australopithecus with a small brain but developed bipedality upset this theory. Nonetheless, it remains a matter of controversy as to how bipedalism first emerged. The advantages of bipedalism were that it left the hands free to grasp objects (e.g., carry food and young), and allowed the eyes to look over tall grasses for possible food sources or predators, but it is also argued that these advantages were not significant enough to cause the emergence of bipedalism.[citation needed] Earlier fossils, such as Orrorin tugenensis, indicate bipedalism around six million years ago, around the time of the split between humans and chimpanzees indicated by genetic studies. This suggests that erect, straight-legged walking originated as an adaptation to tree-dwelling.[47] Major changes to the pelvis and feet had already taken place before Australopithecus.[48] It was once thought that humans descended from a knuckle-walking ancestor,[49] but this is not well-supported.[50]
38
+
39
+ Australopithecines have thirty-two teeth, like modern humans. Their molars were parallel, like those of great apes, and they had a slight pre-canine gap (diastema). Their canines were smaller, like modern humans, with the teeth less interlocked than in previous hominins. In fact, in some australopithecines, the canines are shaped more like incisors.[51] The molars of Australopithecus fit together in much the same way those of humans do, with low crowns and four low, rounded cusps used for crushing. They have cutting edges on the crests.[51] However, australopiths generally evolved a larger postcanine dentition with thicker enamel.[52] Australopiths in general had thick enamel, like Homo, while other great apes have markedly thinner enamel.[51] Robust australopiths wore their molar surfaces down flat, unlike the more gracile species, who kept their crests.[51]
40
+
41
+ In a 1979 preliminary microwear study of Australopithecus fossil teeth, anthropologist Alan Walker theorized that robust australopiths ate predominantly fruit (frugivory).[53] Australopithecus species are thought to have eaten mainly fruit, vegetables, and tubers, and perhaps easy-to-catch animals such as small lizards. Much research has focused on a comparison between the South African species A. africanus and Paranthropus robustus. Early analyses of dental microwear in these two species showed that, compared to P. robustus, A. africanus had fewer microwear features and more scratches as opposed to pits on its molar wear facets.[54] Microwear patterns on the cheek teeth of A. afarensis and A. anamensis indicate that A. afarensis predominantly ate fruits and leaves, whereas A. anamensis included grasses and seeds (in addition to fruits and leaves).[55] The thickening of enamel in australopiths may have been a response to eating more ground-bound foods such as tubers, nuts, and cereal grains with gritty dirt and other small particulates, which would wear away enamel. Gracile australopiths had larger incisors, which indicates tearing food was important, perhaps eating scavenged meat. Nonetheless, the wearing patterns on the teeth support a largely herbivorous diet.[51]
42
+
43
+ In 1992, trace-element studies of the strontium/calcium ratios in robust australopith fossils suggested the possibility of animal consumption, as they did in 1994 using stable carbon isotopic analysis.[56] In 2005, fossil animal bones with butchery marks dating to 2.6 million years old were found at the site of Gona, Ethiopia. This implies meat consumption by at least one of three species of hominins occurring around that time: A. africanus, A. garhi, and/or P. aethiopicus.[57] In 2010, fossils of butchered animal bones dated 3.4 million years old were found in Ethiopia, close to regions where australopith fossils were found.[58]
44
+
45
+ Robust australopithecines (Paranthropus) had larger cheek teeth than gracile australopiths, possibly because robust australopithecines had more tough, fibrous plant material in their diets, whereas gracile australopiths ate more hard and brittle foods.[51] However, such divergence in chewing adaptations may instead have been a response to fallback food availability. In leaner times, robust and gracile australopithecines may have turned to different low-quality foods (fibrous plants for the former, and hard food for the latter), but in more bountiful times, they had more variable and overlapping diets.[59][60]
46
+
47
+ A study in 2018 found non-carious cervical lesions, caused by acid erosion, on the teeth of A. africanus, probably caused by consumption of acidic fruit.[61]
48
+
49
+ It was once thought that Australopithecus could not produce tools like Homo, but the discovery of A. garhi associated with large mammal bones bearing evidence of processing by stone tools showed this not to have been the case.[62][63] Discovered in 1994, this was the oldest evidence of tool manufacture at the time,[64][65] until the 2010 discovery of cut marks dating to 3.4 mya attributed to A. afarensis,[66] and the 2015 discovery of the Lomekwi culture from Lake Turkana dating to 3.3 mya, possibly attributed to Kenyanthropus.[67] More stone tools dating to about 2.6 mya in Ledi-Geraru in the Afar Region were found in 2019, though these may be attributed to Homo.[68]
50
+
51
+ The spot where the first Australopithecus boisei was discovered in Tanzania.
52
+
53
+ Original skull of Mrs. Ples, a female A. africanus
54
+
55
+ Taung Child by Cicero Moraes, Arc-Team, Antrocom NPO, Museum of the University of Padua.
56
+
57
+ Cast of the skeleton of Lucy, an A. afarensis
58
+
59
+ Skull of the Taung child
en/4670.html.txt ADDED
@@ -0,0 +1,260 @@
1
+
2
+
3
+
4
+
5
+ PlayStation (Japanese: プレイステーション, Hepburn: Pureisutēshon, officially abbreviated as PS) is a Japanese video game brand that consists of five home video game consoles, as well as a media center, an online service, a line of controllers, two handhelds, a phone, and multiple magazines. The brand is produced by Sony Interactive Entertainment, a division of Sony; the first console was released as the PlayStation in Japan in December 1994, followed by a worldwide release the following year.[1]
6
+
7
+ The original console in the series was the first console of any type to ship over 100 million units, doing so in under a decade.[2] Its successor, the PlayStation 2, was released in 2000. The PlayStation 2 is the best-selling home console to date, having reached over 155 million units sold by the end of 2012.[3] Sony's next console, the PlayStation 3, was released in 2006, selling over 87.4 million units by March 2017.[4] Sony's latest console, the PlayStation 4, was released in 2013, selling a million units within a day, becoming the fastest selling console in history.[5] The next console in the series, the PlayStation 5, is expected to be released by the end of 2020.[6]
8
+
9
+ The first handheld game console in the series, the PlayStation Portable or PSP, sold a total of 80 million units worldwide by November 2013.[7] Its successor, the PlayStation Vita, launched in Japan in December 2011 and in most other major territories in February 2012, and sold over four million units by January 2013.[8] PlayStation TV is a microconsole and a non-portable variant of the PlayStation Vita handheld game console.[9] Other hardware released as part of the PlayStation series includes the PSX, a digital video recorder which was integrated with the PlayStation and PlayStation 2, though it was short-lived due to its high price and was never released outside Japan, as well as a Sony Bravia television set which has an integrated PlayStation 2. The main series of controllers utilized by the PlayStation series is the DualShock, a line of vibration-feedback gamepads that had sold 28 million controllers by June 2008.[10]
10
+
11
+ The PlayStation Network is an online service with about 110 million registered users[11] (as of June 2013) and over 103 million active users monthly[12] (as of December 2019). It comprises an online virtual market, the PlayStation Store, which allows the purchase and download of games and various forms of multimedia, a subscription-based online service known as PlayStation Plus, and a social gaming networking service called PlayStation Home, which had over 41 million users worldwide at the time of its closure in March 2015.[13] PlayStation Mobile (formerly PlayStation Suite) is a software framework that provides PlayStation content on mobile devices. Version 1.xx supports the PlayStation Vita, PlayStation TV and certain devices that run the Android operating system, whereas version 2.00, released in 2014, targeted only the PlayStation Vita and PlayStation TV.[14] Content released under the framework currently consists only of original PlayStation games.[15]
12
+
13
+ Seventh-generation PlayStation products also use the XrossMediaBar, a Technology & Engineering Emmy Award-winning graphical user interface.[16] A touch screen-based user interface called LiveArea was launched for the PlayStation Vita, which integrates social networking elements into the interface. Additionally, the PlayStation 2 and PlayStation 3 consoles also featured support for Linux-based operating systems, Linux for PlayStation 2 and OtherOS respectively, though this support has since been discontinued. The series has also been known for its numerous marketing campaigns, the most recent being the "Greatness Awaits" commercials in the United States.
14
+
15
+ The series also has a strong line-up of first-party games due to Sony Interactive Entertainment Worldwide Studios, a group of many studios owned by Sony Interactive Entertainment that exclusively developed them for PlayStation consoles. In addition, the series features various budget re-releases of games by Sony with different names for each region; these include the Greatest Hits, Platinum, Essentials, and The Best selection of games.
16
+
17
+ PlayStation was the brainchild of Ken Kutaragi, a Sony executive who managed one of the company's hardware engineering divisions and was later dubbed "The Father of the PlayStation".[17][18]
18
+
19
+ The console's origins date back to 1988, when it began as a joint project between Nintendo and Sony to create a CD-ROM for the Super Famicom.[19] Although Nintendo denied the existence of the Sony deal as late as March 1991,[20] Sony revealed a Super Famicom with a built-in CD-ROM drive that incorporated Green Book technology, or CD-i, called the "Play Station" (also known as the SNES-CD) at the Consumer Electronics Show in June 1991. However, a day after the announcement at CES, Nintendo announced that it would be breaking its partnership with Sony, opting to go with Philips instead but using the same technology.[21] The deal was broken by Nintendo after they were unable to come to an agreement on how revenue would be split between the two companies.[21] The breaking of the partnership infuriated Sony President Norio Ohga, who responded by giving Kutaragi responsibility for developing the PlayStation project to rival Nintendo.[21]
20
+
21
+ At that time, negotiations were still ongoing between Nintendo and Sony, with Nintendo offering Sony a "non-gaming role" regarding their new partnership with Philips. This proposal was swiftly rejected by Kutaragi, who was facing increasing criticism from within Sony over his work on entering the video game industry. Negotiations officially ended in May 1992, and in order to decide the fate of the PlayStation project, a meeting was held in June 1992, consisting of Sony President Ohga, PlayStation Head Kutaragi and several senior members of Sony's board. At the meeting, Kutaragi unveiled to the board a proprietary CD-ROM-based system he had been working on, which involved playing video games with 3D graphics. Eventually, Sony President Ohga decided to retain the project after being reminded by Kutaragi of the humiliation he suffered from Nintendo. Nevertheless, due to strong opposition from a majority present at the meeting as well as widespread internal opposition to the project by the older generation of Sony executives, Kutaragi and his team had to be shifted from Sony's headquarters to Sony Music, a completely separate financial entity owned by Sony, so as to retain the project and maintain relationships with Philips for the MMCD development project (which helped lead to the creation of the DVD).[21]
22
+
23
+ According to SCE's producer Ryoji Akagawa and chairman Shigeo Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite graphics or 3D polygon graphics. It was only after witnessing the success of Sega's Virtua Fighter in Japanese arcades that "the direction of the PlayStation became instantly clear", and 3D polygon graphics became the console's primary focus.[22]
24
+
25
+ The PlayStation logo was designed by Manabu Sakamoto. He wanted the logo to capture the 3D support of the console, but instead of just adding apparent depth to the letters "P" and "S", he created an optical illusion that suggested the letters sitting in a depth of space. Sakamoto also stuck with four bright principal colors, red, yellow, green, and blue, only having to tune the green color for better harmony across the logo. Sakamoto also designed the black and white logo based on the same design, reserved for cases where colors could not be used.[23]
26
+
27
+ At Sony Music Entertainment, Kutaragi worked closely with Shigeo Maruyama, the CEO of Sony Music, and with Akira Sato to form Sony Computer Entertainment Inc. (SCEI) on November 16, 1993.[24] A building block of SCEI was its initial partnership with Sony Music which helped SCEI attract creative talent to the company as well as assist SCEI in manufacturing, marketing and producing discs, something that Sony Music had been doing with Music Discs. The final two key members of SCEI were Terry Tokunaka, the President of SCEI from Sony's headquarters, and Olaf Olafsson. Olafsson was CEO and president of New York-based Sony Interactive Entertainment[25] which was the parent company for the 1994-founded Sony Computer Entertainment of America (SCEA).
28
+
29
+ The PlayStation project, SCEI's first official project, was finally given the green light by Sony executives in 1993 after a few years of development. Also in 1993, Phil Harrison, who later became President of Sony Computer Entertainment Worldwide Studios, was recruited into SCEI to attract developers and publishers to produce games for their new PlayStation platform.[21]
30
+
31
+ Computer Gaming World in March 1994 reported a rumor that the "Sony PS-X" would be released in Japan "before the end of this year and will retail for less than $400".[26] After a demonstration of Sony's distribution plan as well as tech demos of its new console to game publishers and developers in a hotel in Tokyo in 1994, numerous developers began to approach PlayStation. Two that later became major partners were Electronic Arts in the West and Namco in Japan. One of the factors which attracted developers to the platform was the use of a 3D-capable, CD-ROM-based console, which was much cheaper and easier to manufacture for in comparison with Nintendo's rival console, which used cartridges. The project eventually hit Japanese stores in December 1994 and gained massive sales due to its lower price point compared to its competitor, the Sega Saturn. The popularity of the console spread after its release worldwide in North America and Europe.[21]
32
+
33
+ The original PlayStation, released in Japan on December 3, 1994, was the first of the ubiquitous PlayStation series of console and hand-held game devices. It has included successor consoles and upgrades including the Net Yaroze (a special black PlayStation with tools and instructions to program PlayStation games and applications), "PS one" (a smaller version of the original) and the PocketStation (a handheld which enhances PlayStation games and also acts as a memory card). It was part of the fifth generation of video game consoles competing against the Sega Saturn and the Nintendo 64. By December 2003, the PlayStation and PS one had shipped a combined total of 102.49 million units,[27] eventually becoming the first video game console to sell 120 million units.[2]
34
+
35
+ Released on July 7, 2000,[28] concurrently with its successor the PlayStation 2, the PS One (stylized as PS one) was a considerably smaller, redesigned version of the original PlayStation video game console.[29] The PS one went on to outsell all other consoles, including its successor, throughout the remainder of the year.[29] It featured two main changes from its predecessor, the first being a cosmetic change to the console and the second being a change to the home menu's graphical user interface, a variation of the GUI previously used only on PAL consoles up to that point.
36
+
37
+ Released in 2000, 15 months after the Dreamcast and a year before its other competitors, the Xbox and the Nintendo GameCube, the PlayStation 2 is part of the sixth generation of video game consoles, and is backwards-compatible with most original PlayStation games. Like its predecessor, it has received a slimmer redesign. It is the most successful console in the world,[30] having sold over 155 million units as of December 28, 2012.[3] On November 29, 2005, the PS2 became the fastest game console to reach 100 million units shipped, accomplishing the feat within 5 years and 9 months from its launch. This achievement occurred faster than its predecessor, the PlayStation, which took "9 years and 6 months since launch" to reach the same figure.[2] PlayStation 2 shipments in Japan ended on December 28, 2012.[31] The Guardian reported on January 4, 2013 that PS2 production had ended worldwide, but studies showed that many people all around the world still own one even if it is no longer in use. PlayStation 2 has been ranked as the best selling console of all time as of 2015.[32]
38
+
39
+ Released in 2004, four years after the launch of the original PlayStation 2, the PlayStation 2 Slimline was the first major redesign of the PlayStation 2. Compared to its predecessor, the Slimline was smaller, thinner, quieter and also included a built-in Ethernet port (in some markets it also has an integrated modem). In 2007, Sony began shipping a revision of the Slimline which was lighter than the original Slimline together with a lighter AC adapter.[33] In 2008, Sony released yet another revision of the Slimline which had an overhauled internal design incorporating the power supply into the console itself like the original PlayStation 2 resulting in a further reduced total weight of the console.[34]
40
+
41
+ Released on November 11, 2006 in Japan, the PlayStation 3 is a seventh-generation game console from Sony. It competes with the Microsoft Xbox 360 and the Nintendo Wii. The PS3 is the first console in the series to introduce the use of motion-sensing technology through its Sixaxis wireless controller. The console also incorporates a Blu-ray Disc player and features high-definition resolution. The PS3 was originally offered with either a 20 GB or 60 GB hard drive, but over the years models with capacities of up to 500 GB became available. The PlayStation 3 has sold over 80 million consoles worldwide as of November 2013.[35]
42
+
43
+ Like its predecessors, the PlayStation 3 was re-released in 2009 as a "slim" model. The redesigned model is 33% smaller, 36% lighter, and consumes 34% to 45% less power than previous models.[36][37] In addition, it features a redesigned cooling system and a smaller Cell processor, which was moved to a 45 nm manufacturing process.[38] It sold in excess of a million units within its first 3 weeks on sale.[39] The redesign also features support for CEC (more commonly referred to by its manufacturer brandings of BraviaSync, VIERA Link, EasyLink and others), which allows control of the console over HDMI using the television's remote control. The PS3 Slim also runs quieter and cooler than previous models due to its 45 nm Cell. Unlike the previous PS3 models, and similar to the PlayStation 2 Slim, the PS3 Slim no longer has a "main power" switch at the back of the console.[36] It was officially released on September 1, 2009 in North America and Europe and on September 3, 2009 in Japan, Australia and New Zealand.[36][40][41]
44
+
45
+ In 2012, Sony revealed a new "Super Slim" PlayStation 3. The new console, with a completely redesigned case that has a sliding door covering the disc drive (which has been moved to the top of the console), weighs 4.3 pounds, almost three pounds less than the previous "slim" model. The console comes with either 12 GB of flash memory or a 250 GB or 500 GB hard drive. Several bundles which include a Super Slim PS3 and a selection of games are available.
46
+
47
+ The PlayStation 4 (PS4) is the latest video game console from Sony Computer Entertainment announced at a press conference on February 20, 2013. In the meeting, Sony revealed some hardware specifications of the new console.[42][43] The eighth-generation system, launched in the fourth quarter of 2013, introduced the x86 architecture to the PlayStation series. According to lead system architect, Mark Cerny, development on the PlayStation 4 began as early as 2008.[44] PlayStation Europe CEO Jim Ryan emphasized in 2011 that Sony wanted to avoid launching the next-generation console behind the competition.[45]
48
+
49
+ Among the new applications and services, Sony introduced the PlayStation App, allowing PS4 owners to turn smartphones and tablets into a second screen to enhance gameplay.[46] The company also planned to debut PlayStation Now game streaming service, powered by technology from Gaikai.[47][48] By incorporating a share button on the new controller and making it possible to view in-game content being streamed live from friends, Sony planned to place more focus on social gameplay as well.[46] The PlayStation 4 was first released in North America on November 15, 2013.
50
+
51
+ PlayStation 4 Slim (officially marketed simply as PlayStation 4 or PS4) was unveiled on September 7, 2016. It is a revision of the original PS4 hardware with a streamlined form factor. The new casing is 40% smaller and carries a rounded body with a matte finish on the top of the console rather than a two-tone finish. The two USB ports on the front have a larger gap between them, and the optical audio port was also removed.[168] It ships with a minor update to the DualShock 4 controller, with the light bar visible through the top of the touchpad and dark matte grey coloured exterior instead of a partially shiny black. The PS4 Slim was released on September 15, 2016, with a 500 GB model at the same price point as the original PS4 model.[169] Its model number is CUH-2000.[170]
52
+
53
+ PlayStation 4 Pro or PS4 Pro for short (originally announced under the codename Neo)[35] was unveiled on September 7, 2016. Its model number is CUH-7000.[170] It is an updated version of the PlayStation 4 with improved hardware, including an upgraded GPU with 4.2 teraflops of processing power and a higher CPU clock rate. It is designed primarily to enable selected games to be playable at 4K resolution and to improve quality for PlayStation VR. All games are backwards and forward compatible between PS4 and PS4 Pro, but games with optimizations will have improved graphics performance on PS4 Pro. Although capable of streaming 4K video from online sources, PS4 Pro does not support Ultra HD Blu-ray.[171][172][173] Additionally, the PS4 Pro is the only PS4 model that can use Remote Play at 1080p; the other models are limited to 720p.[174]
54
+
55
+ The first news of the PlayStation 5 (PS5)[49] came from Mark Cerny in an interview with Wired in April 2019.[50] Sony intends for the PlayStation 5 to be its next-generation console and to ship worldwide by the end of 2020.[51] In early 2019, Sony's financial report for the quarter ending March 31, 2019, affirmed that new next-generation hardware was in development but would ship no earlier than April 2020.[52]
56
+
57
+ The current specifications were released in October 2019.[53] The console is slated to use an 8-core, 16-thread CPU based on AMD's Zen 2 microarchitecture, manufactured on the 7 nanometer process node. The graphics processor is a custom variant of AMD's Navi family using the RDNA microarchitecture, which includes support for hardware acceleration of ray-tracing rendering, enabling real-time ray-traced graphics.[53] The new console will ship with custom SSD storage, as Cerny emphasized the need for fast loading times and larger bandwidth to make games more immersive, as well as to support the required content streaming from disc for 8K resolution.[50] In a second interview with Wired in October 2019, further details of the new hardware were revealed: the console's integrated Blu-ray drive would support 100GB Blu-ray discs[51] and Ultra HD Blu-ray;[citation needed] whilst game installation from a disc is mandatory so as to take advantage of the SSD, the user will have some fine-grained control over how much they want to have installed, such as only installing the multiplayer components of a game.[51] Sony is developing an improved suspended gameplay state for the PlayStation 5 to consume less energy than the PlayStation 4.[54]
58
+
59
+ The system's new controller will have adaptive triggers that can change the resistance to the player as necessary, such as changing the resistance during the action of pulling an arrow back in a bow in-game.[51] The controller will also have strong haptic feedback through voice coil actuators, which together with an improved controller speaker is intended to give better in-game feedback.[51] USB-C connectivity, together with a higher rated battery are other improvements to the new controller.[51]
60
+
61
+ The PlayStation 5 will feature a completely revamped user interface.[49] The PlayStation 5 is set to be backwards-compatible with PlayStation 4 and PlayStation VR games, with Cerny stating that the transition to the new console is meant to be a soft one.[50][53] However, in later interviews Sony was unwilling to commit to backward-compatibility.[55] At CES 2020, Sony unveiled the official logo for the platform.[56]
62
+
63
+
64
+
65
+ Top: PS
66
+
67
+ Bottom: PS One
68
+
69
+ Left: PS2
70
+
71
+ Right: PS2 Slim
72
+
73
+
74
+
75
+ Top: PS3 (left) and PS3 Slim (right)
76
+
77
+ Bottom: PS3 Super Slim
78
+
79
+
80
+
81
+ Top: PS4
82
+
83
+ Bottom: PS4 Pro (PS4 slim not shown)
84
+
85
+ Launch prices:
+ PlayStation: ¥39,800[1] / US$299[57] / £299[58]
+ PS One: unknown
+ PlayStation 2: ¥39,800[1] / US$299[57] / £299[58]
+ PS2 Slim: unknown
+ PlayStation 3: ¥49,980 (20 GB)[1] / US$499 (20 GB) / US$599 (60 GB)[57] / £425 (60 GB)[59] / €599 (60 GB)[58]
+ PS3 Slim: unknown
+ PS3 Super Slim: unknown
+ PlayStation 4: ¥38,980 (500 GB) / US$399 (500 GB) / €399 (500 GB) / £349 (500 GB)
+ PS4 Slim: US$299 (500 GB) / US$349 (1 TB) / €299 (500 GB) / €349 (1 TB)
+ PS4 Pro: US$399 (1 TB) / €399 (1 TB)
126
+
127
+ Original PlayStation GPU: resolutions from 256x224 to 640x480; sprite/background drawing; adjustable frame buffer; no line restriction; unlimited CLUTs (Color Look-Up Tables); 4,000 8x8-pixel sprites with individual scaling and rotation; simultaneous backgrounds (parallax scrolling); 620,000 polygons/sec.
+
+ PlayStation 2 Graphics Synthesizer: capable of multi-pass rendering; connected to VU1 on the CPU (a vector unit rated at 3.2 GFLOPS) to deliver enhanced shader-style and other graphics effects.
+
+ PlayStation 2 media playback: DVD playback.
+
+ PlayStation 3 media features: audio file playback (ATRAC3, AAC, MP3, WAV, WMA); video file playback (MPEG1, MPEG2, MPEG4, H.264-AVC, DivX); image editing and slideshows (JPEG, GIF, PNG, TIFF, BMP); mouse and keyboard support; Folding@Home client with visualizations from the RSX.
+
+ PlayStation 4 media features: DVD playback; audio playback from an inserted USB flash drive.
151
+
152
+ The PocketStation was a miniature game console created by SCE as a peripheral for the original PlayStation.[81] Released exclusively in Japan on December 23, 1999,[82] it featured a monochrome LCD, a speaker, a real-time clock and infrared communication capability. It could also be used as a standard PlayStation memory card by connecting it to a PlayStation memory card slot.[81] It was extremely popular in Japan and Sony originally had plans to release it in the United States but the plan was ultimately scrapped due to various manufacturing and supply-and-demand problems.[83][84]
153
+
154
+ The PlayStation Portable (PSP) was Sony's first handheld game console, designed to compete with Nintendo's DS. The original model (PSP-1000) was released in December 2004 and March 2005.[85] The console was the first to utilize a new proprietary optical storage medium known as the Universal Media Disc (UMD), which can store both games and movies.[86][87] It contains 32 MB of internal flash memory storage, expandable via Memory Stick PRO Duo cards.[88] It has a control layout similar to the PS3's, with a PlayStation logo button and the Triangle, Circle, Cross and Square buttons in their white-colored forms.
155
+
156
+ The PSP-2000 (also known as the Slim & Lite in PAL territories) was the first major hardware revision of the PlayStation Portable, released in September 2007. The 2000 series was 33% lighter and 19% slimmer than the original PlayStation Portable.[89][90] The capacity of the battery was also reduced by ⅓ but the run time remained the same as the previous model due to lower power consumption. Older model batteries will still work and they extend the amount of playing time.[91] The PSP Slim & Lite has a new gloss finish. Its serial port was also modified in order to accommodate a new video-out feature (while rendering older PSP remote controls incompatible). On a PSP-2000, PSP games will only output to external monitors or TVs in progressive scan mode, so that televisions incapable of supporting progressive scan will not display PSP games; non-game video will output in either progressive or interlaced mode. USB charging was also made possible.[92] Buttons are also reportedly more responsive on the PSP-2000.[93] In 2008, Sony released a second hardware revision called the PSP-3000 which included several features that were not present in the PSP-2000, such as a built-in microphone and upgraded screen, as well as the ability to output PSP games in interlaced mode.
157
+
158
+ Released in October 2009, the PSP Go is the biggest redesign of the PlayStation Portable to date. Unlike previous PSP models, the PSP Go does not feature a UMD drive but instead has 16 GB of internal flash memory to store games, videos and other media.[94] This can be extended by up to 32GB with the use of a Memory Stick Micro (M2) flash card. Also unlike previous PSP models, the PSP Go's rechargeable battery is not removable or replaceable by the user. The unit is 43% lighter and 56% smaller than the original PSP-1000,[95] and 16% lighter and 35% smaller than the PSP-3000.[96] It has a 3.8" 480 × 272 LCD[97] (compared to the larger 4.3" 480 × 272 pixel LCD on previous PSP models).[98] The screen slides up to reveal the main controls. The overall shape and sliding mechanism are similar to that of Sony's mylo COM-2 internet device.[99] The PSP Go was produced and sold concurrently with its predecessor the PSP-3000 although it did not replace it.[95] All games on the PSP Go must be purchased and downloaded from the PlayStation Store as the handheld is not compatible with the original PSP's physical media, the Universal Media Disc. The handheld also features connectivity with the PlayStation 3's controllers the Sixaxis and DualShock 3 via Bluetooth connection.[96]
159
+
160
+ The PSP-E1000 is a budget-focused PSP model which, unlike previous PSP models, does not feature Wi-Fi or stereo speakers (replaced by a single mono speaker)[100] and has a matte "charcoal black" finish similar to the slim PlayStation 3.[101] The E1000 was announced at Gamescom 2011 and available across the PAL region for an RRP of €99.99.[101]
161
+
162
+ Released in Japan on December 17, 2011 and North America on February 22, 2012,[102] the PlayStation Vita[103] was previously codenamed Next Generation Portable (NGP). It was officially unveiled by Sony on January 27, 2011 at the PlayStation Meeting 2011.[104] The original model of the handheld, the PCH-1000 series features a 5-inch OLED touchscreen,[105] two analog sticks, a rear touchpad, Sixaxis motion sensing and a 4 core ARM Cortex-A9 MPCore processor.
163
+
164
+ The new PCH-2000 series system is a lighter redesign of the device that was announced at the SCEJA Press Conference in September 2013 prior to the Tokyo Game Show. This model is 20% thinner and 15% lighter compared to the original model, has an additional hour of battery life, an LCD instead of OLED, includes a micro USB Type B port, 1GB of internal storage memory. It was released in Japan on October 10, 2013 in six colors: white, black, pink, yellow, blue, and olive green, and in North America on May 6, 2014.[106]
165
+
166
+ The Vita was discontinued in March 2019. SIE president Jim Ryan said that while the Vita was a great device, they have moved away from portable consoles, "clearly it's a business that we're no longer in now".[23]
167
+
168
+ Released solely in Japan in 2003, the Sony PSX was a fully integrated DVR and PlayStation 2 video game console. It was the first Sony product to utilize the XrossMediaBar (XMB)[107] and can be linked with a PlayStation Portable to transfer videos and music via USB.[108] It also features software for video, photo and audio editing.[107] PSX supports online game compatibility using an internal broadband adapter. Games that utilize the PS2 HDD (for example, Final Fantasy XI) are supported as well.[109] It was the first product released by Sony under the PlayStation brand that did not include a controller with the device itself.[110]
169
+
170
+ Released in 2010, the Sony BRAVIA KDL22PX300 is a 22-inch 720p television which incorporates a PlayStation 2 console, along with 4 HDMI ports.[111]
171
+
172
+ A 24-inch 1080p PlayStation branded 3D television, officially called the PlayStation 3D Display, was released in late 2011. A feature of this 3D television is SimulView. During multiplayer games, each player will only see their respective screen (in full HD) appear on the television through their respective 3D glasses, instead of seeing a split screen (e.g. player 1 will only see player 1's screen displayed through their 3D glasses).
173
+
174
+ The Xperia Play is an Android-powered smartphone developed by Sony Ericsson and aimed at gamers, featuring a slide-up gamepad resembling the PSP Go; it was the first device to be PlayStation Certified.
175
+
176
+ Sony Tablets are PlayStation Certified Android tablets, released in 2011, 2012, and 2013. They offer connectivity with PlayStation 3 controllers and integrate with the PlayStation network using a proprietary application. The following models were released between 2011 and 2013: Sony Tablet S, Sony Tablet P, Xperia Tablet S and Xperia Tablet Z.
177
+
178
+ PlayStation TV, known in Asia as PlayStation Vita TV, is a microconsole and a non-portable variant of the PlayStation Vita handheld. It was announced on September 9, 2013 at a Sony Computer Entertainment Japan presentation. Instead of featuring a display screen, the console connects to a television via HDMI. Users can play using a DualShock 3 controller, although due to the difference in features between the controller and the handheld, certain games are not compatible with PS TV, such as those that are dependent on the system's touch-screen, rear touchpad, microphone or camera. The device is said to be compatible with over 100 Vita games, as well as various digital PlayStation Portable, PlayStation and PC Engine titles. The system supports Remote Play compatibility with the PlayStation 4, allowing players to stream games from the PS4 to a separate TV connected to PS TV, and also allows users to stream content from video services such as Hulu and Niconico, as well as access the PlayStation Store. The system was released in Japan on November 14, 2013, in North America on October 14, 2014, and in Europe and Australasia on November 14, 2014.[112]
179
+
180
+ PlayStation VR is a virtual reality device that is produced by Sony Computer Entertainment. It features a 5.7 inch 1920x1080 resolution OLED display, and operates at 120 Hz which can eliminate blur and produce a smooth image; the device also has a low latency of less than 18ms.[113] Additionally, it produces two sets of images, one being visible on a TV and one for the headset, and includes 3D audio technology so the player can hear from all angles. The PlayStation VR was released in October 2016.[114]
181
+
182
+ The PlayStation Classic is a miniature version of the original 1994 Model SCPH-1001 PlayStation console that comes preloaded with 20 games and two original-style controllers. It was launched on the 24th anniversary of the original console on December 3, 2018.[115]
183
+
184
+ Each console has a variety of games. The PlayStation 2, PSX and PlayStation 3 exhibit backwards compatibility and can play most of the games released on the original PlayStation. Some of these games can also be played on the PlayStation Portable, but they must be purchased and downloaded from a list of PS one Classics on the PlayStation Store. Games released on the PlayStation 2 can currently only be played on the original console as well as the PSX and the early models of the PlayStation 3 which are backwards compatible. The PlayStation 3 has two types of games: those released on Blu-ray Discs and downloadable games from the PlayStation Store. The PlayStation Portable has numerous games available both on its physical medium, the Universal Media Disc, and as digital downloads from the PlayStation Store; however, some games are only available on the UMD while others are only available on the PlayStation Store. The PlayStation Vita likewise has games available both on its physical medium, the PlayStation Vita card, and as digital downloads from the PlayStation Store.
185
+
186
+ Sony Computer Entertainment Worldwide Studios is a group of video game developers owned by Sony Computer Entertainment. It is dedicated to developing video games exclusively for the PlayStation series of consoles. The series has produced several best-selling franchises such as the Gran Turismo series of racing video games as well as critically acclaimed titles such as the Uncharted series. Other notable franchises include God of War, Twisted Metal and more recently, LittleBigPlanet (series), InFAMOUS, and MotorStorm.
187
+
188
+ Greatest Hits (North America), Platinum Range (PAL territories) and The Best (Japan and Asia) are video games for the Sony PlayStation, PlayStation 2, PlayStation 3, and PlayStation Portable consoles that have been officially re-released at a lower price by Sony. Each region has its own qualifications to enter the re-release program. Initially, during the PlayStation era, a game had to sell at least 150,000 copies (later 250,000)[116] and be on the market for at least a year[117] to enter the Greatest Hits range. During the PlayStation 2 era, the requirements increased with the minimum number of copies sold increasing to 400,000 and the game had to be on the market for at least 9 months.[116] For the PlayStation Portable, games had to be on the market for at least 9 months with 250,000 copies or more sold.[118] Currently, a PlayStation 3 game must be on the market for 10 months and sell at least 500,000 copies to meet the Greatest Hits criteria.[119] PS one Classics were games that were released originally on the PlayStation and have been re-released on the PlayStation Store for the PlayStation 3 and PlayStation Portable. Classics HD are compilations of PlayStation 2 games that have been remastered for the PlayStation 3 on a single disc with additional features such as upscaled graphics, PlayStation Move support, 3D support and PlayStation Network trophies. PlayStation Mobile (formerly PlayStation Suite) is a cross-platform, cross-device software framework aimed at providing PlayStation content, currently original PlayStation games, across several devices including PlayStation Certified Android devices as well as the PlayStation Vita.
189
+
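+ As a rough illustration (not an official Sony tool), the platform-specific thresholds quoted above can be expressed as a simple eligibility check; the Python sketch below simply encodes the figures cited in the preceding paragraph.
+
+ # Illustrative sketch: Greatest Hits eligibility thresholds as quoted above.
+ THRESHOLDS = {
+     "PlayStation": {"copies": 250_000, "months": 12},   # later PlayStation-era requirement
+     "PlayStation 2": {"copies": 400_000, "months": 9},
+     "PSP": {"copies": 250_000, "months": 9},
+     "PlayStation 3": {"copies": 500_000, "months": 10},
+ }
+
+ def qualifies_for_greatest_hits(platform, copies_sold, months_on_market):
+     t = THRESHOLDS[platform]
+     return copies_sold >= t["copies"] and months_on_market >= t["months"]
+
+ # Example: a PlayStation 3 title with 600,000 copies sold after 11 months qualifies.
+ assert qualifies_for_greatest_hits("PlayStation 3", 600_000, 11)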
190
+ Sony has generally supported indie game development since incorporating the digital distribution storefront in the PlayStation 3, though it initially required developers to complete multiple steps to get an indie game certified on the platform. Sony improved and simplified the process in transitioning to the PlayStation 4.[120]
191
+
192
+ As Sony prepared to transition from the PlayStation 4 to PlayStation 5, they introduced a new PlayStation Indies program led by Shuhei Yoshida in July 2020. The program's goals are to spotlight new and upcoming indie titles for the PlayStation 4 and 5, focusing on those that are more innovative and novel, akin to past titles such as PaRappa the Rapper, Katamari Damacy, LittleBigPlanet, and Journey. Sony also anticipates bringing more indie titles to the PlayStation Now service as part of this program.[121]
193
+
194
+ Online gaming on PlayStation consoles first started in July 2001 with the release of PlayStation 2's unnamed online service in Japan. It was released in North America in August 2002, followed by Europe in June 2003. This service was shut down on March 31, 2016.
195
+
196
+ Released in 2006, the PlayStation Network is an online service[122] focusing on online multiplayer gaming and digital media delivery. The service is provided and run by Sony Computer Entertainment for use with the PlayStation 3, and was later implemented on the PlayStation Portable, PlayStation Vita and PlayStation 4 video game consoles.[123] The service has over 103 million active users monthly (as of December 2019).[12] The Sony Entertainment Network provides other features for users like PlayStation Home, PlayStation Store, and Trophies.
197
+
198
+ The PlayStation Store is an online virtual market available to users of the PlayStation 3, PlayStation 4 and PlayStation Portable game consoles via the PlayStation Network. The store uses both physical currency and PlayStation Network Cards. The PlayStation Store's gaming content is updated every Tuesday and offers a range of downloadable content both for purchase and available free of charge. Available content includes full games, add-on content, playable demos, themes and game and movie trailers. The service is accessible through an icon on the XMB on the PS3 and PSP. The PS3 store can also be accessed on the PSP via a Remote Play connection to the PS3. The PSP store is also available via the PC application, Media Go. As of September 24, 2009, there have been more than 600 million downloads from the PlayStation Store worldwide.[124]
199
+
200
+ Video content such as films and television shows are also available from the PlayStation Store on the PlayStation 3 and PSP and will be made available on some new Sony BRAVIA televisions, VAIO laptop computers and Sony Blu-ray Disc players from February 2010.[125]
201
+
202
+ Life with PlayStation was a Folding@home application available for PlayStation 3 which connected to Stanford University’s Folding@home distributed computer network and allowed the user to donate their console's spare processing cycles to the project.[126] Folding@home is supported by Stanford University and volunteers make a contribution to society by donating computing power to this project. Research made by the project may eventually contribute to the creation of vital cures. The Folding@home client was developed by Sony Computer Entertainment in collaboration with Stanford University.[127] Life with PlayStation also consisted of a 3D virtual view of the Earth and contained current weather and news information of various cities and countries from around the world, as well as a World Heritage channel which offered information about historical sites, and the United Village channel which is a project designed to share information about communities and cultures worldwide.[128][129] As of PlayStation 3 system software update version 4.30 on October 24, 2012, the Life With PlayStation project has ended.
203
+
204
+ PlayStation Plus, a subscription-based service on the PlayStation Network, complements the standard PSN services.[130] It enables an auto-download feature for game patches and system software updates. Subscribers gain early or exclusive access to some betas, game demos, premium downloadable content (such as full game trials of retail games like Infamous, and LittleBigPlanet) and other PlayStation Store items, as well as a free subscription to Qore. Other downloadable items include PlayStation Store discounts and free PlayStation Network games, PS one Classics, PlayStation Minis, themes and avatars.[131] It offers a 14-day free trial.
205
+
206
+ PlayStation Blog (stylized as PlayStation.Blog) is an online PlayStation-focused gaming blog, part of the PlayStation Network. It was launched on June 11, 2007[132] and has featured numerous interviews with third-party companies such as Square Enix.[133] It also has posts from high-ranking Sony Computer Entertainment executives such as Jack Tretton, former President and Chief Executive Officer of Sony Computer Entertainment, and Shawn Layden, current President, SIEA, and Chairman, SIE Worldwide Studios.[134][135] A sub-site of the blog called PlayStation Blog Share was launched on March 17, 2010 and allowed readers of the blog as well as users of the PlayStation Blog to submit ideas to the PlayStation team about anything PlayStation-related and vote on the ideas of other submissions.[136][137]
207
+
208
+ The PlayStation App is an application that was released on January 11, 2011 in several European countries for iOS (version 4 and above) and for Android (version 1.6 and above),[138] and has been installed more than 3.6 million times as of March 2, 2014.[139] It allows users to view their trophies, see which of their PSN friends are online and read up to date information about PlayStation.[138] It does not feature any gaming functionality.[138]
209
+
210
+ The PlayStation Mobile (formerly PlayStation Suite) is a software framework that will be used to provide downloadable PlayStation content to devices running Android 2.3 and above as well as the PlayStation Vita. The framework will be cross-platform and cross-device, which is what Sony calls "hardware-neutral". It was set to release before the end of calendar year 2011. In addition, Android devices that have been certified to be able to playback PlayStation Suite content smoothly will be certified with the PlayStation Certified certification.[15]
211
+
212
+ PlayStation Now (PS Now) is a Gaikai-based video game streaming service used to provide PlayStation gaming content to PlayStation 3 (PS3), PlayStation 4 (PS4), PlayStation Vita, PlayStation TV and BRAVIA televisions.[140] The service currently allows users to pay for access to a selection of original PlayStation 3 titles on either a per-game basis or via a subscription. PlayStation Now was announced on January 7, 2014 at the 2014 Consumer Electronics Show. At CES, Sony presented demos of The Last of Us, God of War: Ascension, Puppeteer and Beyond: Two Souls, playable through PS Now on Bravia TVs and PlayStation Vitas. PlayStation Now was launched in Open Beta in the United States and Canada on PS4 on July 31, 2014, on PS3 on September 18, 2014, on PS Vita and PS TV on October 14, 2014, with support for select 2014 Bravia TVs coming later in the year.[141]
213
+
214
+ PlayStation Home is a community-based social gaming networking service for the PlayStation 3 on the PlayStation Network (PSN). It is available directly from the PlayStation 3 XrossMediaBar. Membership is free, and only requires a PSN account. Home has been in development since early 2005 and started an open public beta test on December 11, 2008.[142] Home allows users to create a custom avatar, which can be made to suit the user's preference.[143] Users can decorate their avatar's personal apartment ("HomeSpace") with default, bought, or won items. They can travel throughout the Home world (except cross region), which is constantly updated by Sony and partners. Each part of the world is known as a space. Public spaces can just be for display, fun, or for meeting people. Home features many mini-games which can be single player or multiplayer. Users can shop for new items to express themselves more through their avatars or HomeSpace.[144] Home features video screens in many places for advertising, but the main video content is shown at the theatre for entertainment. Home plays host to a variety of special events which range from prize-giving events to entertaining events. Users can also use Home to connect with friends and customize content.[142] Xi, a once notable feature of Home, is the world's first console based Alternate Reality Game that took place in secret areas in Home and was created by nDreams.[145][146]
215
+
216
+ "Room" (officially spelled as R∞M with capital letters and the infinity symbol in place of the "oo") was being beta tested in Japan from October 2009 to April 2010. Development of Room has been halted on April 15, 2010 due to negative feedback from the community.[147] Announced at TGS 2009, it was supposed to be a similar service to the PlayStation Home and was being developed for the PSP.[148] Launching directly from the PlayStation Network section of the XMB was also to be enabled. Just like in Home, PSP owners would have been able to invite other PSP owners into their rooms to "enjoy real time communication."[149] A closed beta test had begun in Q4 2009 in Japan.[150]
217
+
218
+ The XrossMediaBar, originally used on the PSX, is a graphical user interface used for the PlayStation 3 and PlayStation Portable, as well as a variety of other Sony devices. The interface features icons that are spread horizontally across the screen. Navigation moves the icons instead of a cursor. These icons are used as categories to organize the options available to the user. When an icon is selected on the horizontal bar, several more appear vertically, above and below it (selectable by the up and down directions on a directional pad).[151] The XMB can also be accessed in-game, albeit with restrictions; this allows players to access certain areas of the XMB menu from within the game, and is only available on the PlayStation 3.[152] Although the capacity to play users' own music in-game was added with this update, the feature is dependent on game developers who must either enable the feature in their games or update existing games.[153]
219
+
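+ As a rough, hypothetical illustration of the navigation model described above (not Sony's actual implementation), an XMB-style menu can be modelled in Python as a horizontal list of categories, each holding a vertical list of items; left and right move between categories, while up and down move within the selected category.
+
+ # Hypothetical sketch of an XMB-style menu model (illustrative only).
+ class CrossMediaBar:
+     def __init__(self, categories):
+         # categories: list of (category_name, [item, ...]) pairs, laid out horizontally
+         self.categories = categories
+         self.col = 0  # highlighted category on the horizontal bar
+         self.row = 0  # highlighted item in that category's vertical list
+
+     def move(self, direction):
+         # Left/right changes the category (and resets the item); up/down changes the item.
+         if direction == "left" and self.col > 0:
+             self.col -= 1; self.row = 0
+         elif direction == "right" and self.col < len(self.categories) - 1:
+             self.col += 1; self.row = 0
+         elif direction == "up" and self.row > 0:
+             self.row -= 1
+         elif direction == "down" and self.row < len(self.categories[self.col][1]) - 1:
+             self.row += 1
+
+     def selection(self):
+         name, items = self.categories[self.col]
+         return name, items[self.row]
+
+ # Example: xmb = CrossMediaBar([("Game", ["Disc"]), ("Music", ["Track 1", "Track 2"])])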
220
+ LiveArea, designed to be used on the PlayStation Vita, is a graphical user interface set to incorporate various social networking features via the PlayStation Network. It has been designed specifically as a touchscreen user interface for users.[154]
221
+
222
+ In 2002, Sony released the first useful and fully functioning operating system for a video game console, after the Net Yaroze experiment for the original PlayStation. The kit, which included an internal hard disk drive and the necessary software tools, turned the PlayStation 2 into a full-fledged computer system running Linux. Users can utilize a network adapter to connect the PlayStation 2 to the internet, a monitor cable adaptor to connect the PlayStation 2 to computer monitors as well as a USB Keyboard and Mouse which can be used to control Linux on the PlayStation 2.[155][156]
223
+
224
+ The PlayStation 3 (excluding PlayStation 3 Slim) also supports running Linux OS on firmware versions prior to 3.21 without the need to purchase additional hardware. Yellow Dog Linux provides an official distribution that can be downloaded, and other distributions such as Fedora, Gentoo and Ubuntu have been successfully installed and operated on the console.[38] The use of Linux on the PlayStation 3 allowed users to access 6 of the 7 Synergistic Processing Elements; Sony implemented a hypervisor restricting access to the RSX. The feature to install a second operating system on a PlayStation 3 was removed in a firmware update released in 2010.[157]
225
+
226
+ Released in 1994, the PlayStation control pad was the first controller made for the original PlayStation. It featured a basic design with a D-pad, four main buttons ('Green Triangle', 'Red Circle/Red O', 'Blue Cross/Blue X' and 'Pink Square'), and Start and Select buttons on the face. 'Shoulder buttons' are also featured on the top [L1, L2, R1, R2] (named by the side [L=Left, R=Right] and 1 and 2 [top and bottom]). In 1996, Sony released the PlayStation Analog Joystick for use with flight simulation games.[158] The original digital controller was then replaced by the Dual Analog in 1997, which added two analog sticks based on the same potentiometer technology as the Analog Joystick.[159] This controller was in turn succeeded by the DualShock controller.
227
+
228
+ Released in 1998, the DualShock controller for the PlayStation succeeded its predecessor, the Dual Analog, and became the longest running series of controllers for the PlayStation brand. In addition to the inputs of the original digital controller (Triangle, Circle, Cross, Square, L1, L2, R1, R2, Start, Select and a D-pad), the DualShock featured two analog sticks in a similar fashion to the previous Dual Analog controller, which can also be depressed to activate the L3 and R3 buttons.[160]
229
+
230
+ The DualShock series consists of four controllers: the DualShock, which was the fourth controller released for the PlayStation; the DualShock 2, the only standard controller released for the PlayStation 2; the DualShock 3, the second and current controller released for the PlayStation 3; and the DualShock 4, which went through a massive redesign and is the default input of the PlayStation 4. Upon release, the DualShock 4 was compatible with the PS3 only via USB; Bluetooth connectivity was later enabled through a firmware update. The Sixaxis was the first official controller for the PlayStation 3, and is based on the same design as the DualShock series (but lacking the vibration motors of the DualShock series of controllers).
231
+
232
+ Like the Dual Analog, the DualShock and DualShock 2 feature an "Analog" button between the analog sticks that toggles the analog sticks on and off (for use with games which support only the digital input of the original controller). On the PlayStation 3 Sixaxis and DualShock 3 controllers, the analog sticks are always enabled. Beginning with the Sixaxis, a 'PlayStation button' (which featured the incorporated PS logo and is similar in function to the Xbox 360 "Guide" button) was included on controllers. The PlayStation button replaces the "Analog" button of the DualShock and DualShock 2 controllers. Pressing the PS button on the PS3 brings up the XMB, while holding it down brings up system options (such as quit the game, change controller settings, turn off the system, and turn off the controller).[161]
233
+
234
+ PlayStation Move is a motion-sensing game controller platform for the PlayStation 3 video game console by Sony Computer Entertainment (SCE). Based on the handheld motion controller wand, PlayStation Move uses the PlayStation Eye webcam to track the wand's position and the inertial sensors in the wand to detect its motion. First revealed on June 2, 2009, PlayStation Move was launched in Q3/Q4 2010. Hardware available at launch included the main PlayStation Move motion controller and an optional PlayStation Move sub-controller.[162]
235
+ Although PlayStation Move is implemented on the existing PlayStation 3 console, Sony states that it is treating Move's debut as its own major "platform launch", planning an aggressive marketing campaign to support it. In addition to selling the controllers individually,[163] Sony also plans to provide several different bundle options for PlayStation Move hardware; including a starter kit with a PS Eye, a Move motion controller, and a demo/sampler disc, priced under US$100;[164] a full console pack with a PS3 console, DualShock 3 gamepad, PS Eye, and Move motion controller; and bundles of a Move motion controller with select games.[163]
236
+
237
+ The PlayStation brand has a wide series of magazines, from across different continents, covering PlayStation related articles and stories. Many of these magazines work closely with Sony and thus often come with demo discs for PlayStation games. Currently there are three magazines still in circulation, namely PlayStation: The Official Magazine,[165] PlayStation Official Magazine,[166] and Official PlayStation Magazine (Australia).[167] However, over the years many PlayStation magazines have been launched while a few have become defunct; these include the Official U.S. PlayStation Magazine,[168] Official UK PlayStation Magazine,[169] and Official UK PlayStation 2 Magazine.[170]
238
+
239
+ PlayStation Underground was a non-traditional magazine that Sony Computer Entertainment America produced and published between Spring 1997 and Spring 2001. Subscribers received two PlayStation CDs, along with a booklet and colorful packaging every quarter.[171] The CDs contained interviews, cheats, programmers moves, game demos and one-of-a-kind Memory Card saves. Several issues showed how a game was created from basic design to final product. Since the CDs could only be run on a PlayStation, it proved a useful marketing tool and spawned a line of PlayStation Underground JamPacks Demo CDs, which contained highlights from recent issues of PlayStation Underground along with as many game demos as could be packed on a single CD. Unlike PlayStation Underground, these were available in most stores for $4.95, were published twice a year in Summer and Winter, and usually spotlighted newly released or coming soon games. By 2001, Sony had decided to phase out Underground to focus on the JamPacks with the release of the PlayStation 2. PlayStation Underground CDs are mainly in the hands of collectors these days.[172]
240
+
241
+ Advertising slogans used for each PlayStation console iteration:
242
+
243
+ The most notable of recent PlayStation commercials is the series of "It Only Does Everything" commercials featuring a fictional character called Kevin Butler who is a Vice President at PlayStation. These commercials usually advertise the PlayStation 3 and its games through a series of comedic answers to "Dear PlayStation" queries.[183] These commercials garnered popularity among gamers, though its debut commercial received criticism from the Nigerian government due to a reference to the common 419 scams originating in Nigeria. Sony issued an apology and a new version of the advert with the offending line changed was produced.[191]
244
+
245
+ A spin-off of the campaign was created for the PlayStation Portable, with similar commercials under the "Step Your Game Up" campaign featuring a fictional teenage character named Marcus Rivers, who acts in a similar fashion to Kevin Butler but answers the "Dear PlayStation" queries about the PSP.[180]
246
+
247
+ In July 2006, an advertising campaign in the Netherlands was released in which a white model dressed entirely in white and a black model dressed entirely in black were used to compare Sony's new Ceramic White PSP and the original Piano Black PSP. This series of ads depicted both models fighting with each other[192] and drew criticism from the media for being racist, though Sony maintains that the ads did not feature any racist message.[193]
248
+
249
+ In November 2006, a marketing company employed by Sony's American division created a website entitled "All I want for Xmas is a PSP", designed to promote the PSP virally. The site contained a blog which was purportedly written by "Charlie", a teenage boy attempting to get his friend Jeremy's parents to buy him a PSP, and providing a "music video" of either Charlie or Jeremy "rapping" about the PSP. Visitors to the website quickly recognized that the website was registered to a marketing company, exposing the campaign on sites such as YouTube and digg. Sony was forced to admit that the site was in fact a marketing campaign and in an interview with next-gen.biz, Sony admitted that the idea was "poorly executed".[194]
250
+
251
+ In 2005, Australian newspaper The Age wrote an article about the PlayStation brand. Among the numerous interviews conducted with various people in the industry was an interview with Dr Jeffrey Brand, associate professor in communication and media at Bond University, who said, "PlayStation re-ignited our imagination with video games". Game designer Yoshiki Okamoto called the brand "revolutionary — PlayStation has changed gaming, distribution, sales, image and more", while Evan Wells of Naughty Dog said "PlayStation is responsible for making playing games cool."[195]
252
+
253
+ In 2009, ViTrue, Inc. listed the PlayStation brand as number 13 on their "The Vitrue 100: Top Social Brands of 2009". The ranking was based on various aspects mainly dealing with popular social media sites in aspects such as Social Networking, Video Sharing, Photo Sharing and Blogs.[196]
254
+
255
+ In 2010, Gizmodo stated that the PlayStation brand was one of the last Sony products to completely stand apart from its competitors, stating that "If you ask the average person on the street what their favorite Sony product is, more often than not you'll hear PlayStation".[197] As of April 2012, the PlayStation brand is the "most followed" brand on the social networking site Facebook, with over 22 million fans and followers in total, which is more than any other brand in the entertainment industry. A study by Greenlight's Entertainment Retail has also shown that the PlayStation brand is the most interactive, making 634 posts and tweets on the social networking sites Facebook and Twitter.[198]
256
+
257
+ In July 2014, Sony boasted in a company release video that the PlayStation 3, PlayStation 4 and PlayStation Vita sold a combined total of 100 million units.[199] It was announced at Tokyo Game Show on September 1, 2014, that PlayStation home game consoles claim 78% market share of all home consoles in Japan.[200]
258
+
259
+ As of 2015[update], PlayStation is the strongest selling console brand worldwide.[201]
260
+
en/4671.html.txt ADDED
@@ -0,0 +1,260 @@
1
+
2
+
3
+
4
+
5
+ PlayStation (Japanese: プレイステーション, Hepburn: Pureisutēshon, officially abbreviated as PS) is a Japanese video game brand that consists of five home video game consoles, as well as a media center, an online service, a line of controllers, two handhelds and a phone, and multiple magazines. The brand is produced by Sony Interactive Entertainment, a division of Sony, with the first console, the original PlayStation, released in Japan in December 1994 and worldwide the following year.[1]
6
+
7
+ The original console in the series was the first console of any type to ship over 100 million units, doing so in under a decade.[2] Its successor, the PlayStation 2, was released in 2000. The PlayStation 2 is the best-selling home console to date, having reached over 155 million units sold by the end of 2012.[3] Sony's next console, the PlayStation 3, was released in 2006, selling over 87.4 million units by March 2017.[4] Sony's latest console, the PlayStation 4, was released in 2013, selling a million units within a day, becoming the fastest selling console in history.[5] The next console in the series, the PlayStation 5, is expected to be released by the end of 2020.[6]
8
+
9
+ The first handheld game console in the series, the PlayStation Portable or PSP, sold a total of 80 million units worldwide by November 2013.[7] Its successor, the PlayStation Vita, launched in Japan in December 2011 and in most other major territories in February 2012, and sold over four million units by January 2013.[8] PlayStation TV is a microconsole and a non-portable variant of the PlayStation Vita handheld game console.[9] Other hardware released as part of the PlayStation series includes the PSX, a digital video recorder which was integrated with the PlayStation and PlayStation 2, though it was short lived due to its high price and was never released outside Japan, as well as a Sony Bravia television set which has an integrated PlayStation 2. The main series of controllers utilized by the PlayStation series is the DualShock, a line of vibration-feedback gamepads which had sold 28 million units by June 2008.[10]
10
+
11
+ The PlayStation Network is an online service with about 110 million registered users[11] (as of June 2013) and over 103 million active users monthly[12] (as of December 2019). It comprises an online virtual market, the PlayStation Store, which allows the purchase and download of games and various forms of multimedia, a subscription-based online service known as PlayStation Plus and a social gaming networking service called PlayStation Home, which had over 41 million users worldwide at the time of its closure in March 2015.[13] PlayStation Mobile (formerly PlayStation Suite) is a software framework that provides PlayStation content on mobile devices. Version 1.xx supports PlayStation Vita, PlayStation TV and certain devices that run the Android operating system, whereas version 2.00, released in 2014, only targeted PlayStation Vita and PlayStation TV.[14] Content set to be released under the framework currently consists only of original PlayStation games.[15]
12
+
13
+ 7th generation PlayStation products also use the XrossMediaBar, which is a Technology & Engineering Emmy Award-winning graphical user interface.[16] A touch screen-based user interface called LiveArea was launched for the PlayStation Vita, which integrates social networking elements into the interface. Additionally, the PlayStation 2 and PlayStation 3 consoles also featured support for Linux-based operating systems; Linux for PlayStation 2 and OtherOS respectively, though this has since been discontinued. The series has also been known for its numerous marketing campaigns, the latest of which are the "Greatness Awaits" commercials in the United States.
14
+
15
+ The series also has a strong line-up of first-party games due to Sony Interactive Entertainment Worldwide Studios, a group of studios owned by Sony Interactive Entertainment that develops games exclusively for PlayStation consoles. In addition, the series features various budget re-releases of games by Sony with different names for each region; these include the Greatest Hits, Platinum, Essentials, and The Best selection of games.
16
+
17
+ PlayStation was the brainchild of Ken Kutaragi, a Sony executive who managed one of the company's hardware engineering divisions and was later dubbed "The Father of the PlayStation".[17][18]
18
+
19
+ The console's origins date back to 1988, when it was originally a joint project between Nintendo and Sony to create a CD-ROM for the Super Famicom.[19] Although Nintendo denied the existence of the Sony deal as late as March 1991,[20] Sony revealed a Super Famicom with a built-in CD-ROM drive that incorporated Green Book technology or CD-i, called "Play Station" (also known as SNES-CD), at the Consumer Electronics Show in June 1991. However, a day after the announcement at CES, Nintendo announced that it would be breaking its partnership with Sony, opting to go with Philips instead but using the same technology.[21] The deal was broken by Nintendo after they were unable to come to an agreement on how revenue would be split between the two companies.[21] The breaking of the partnership infuriated Sony President Norio Ohga, who responded by appointing Kutaragi with the responsibility of developing the PlayStation project to rival Nintendo.[21]
20
+
21
+ At that time, negotiations were still on-going between Nintendo and Sony, with Nintendo offering Sony a "non-gaming role" regarding their new partnership with Philips. This proposal was swiftly rejected by Kutaragi who was facing increasing criticism over his work with regard to entering the video game industry from within Sony. Negotiations officially ended in May 1992 and in order to decide the fate of the PlayStation project, a meeting was held in June 1992, consisting of Sony President Ohga, PlayStation Head Kutaragi and several senior members of Sony's board. At the meeting, Kutaragi unveiled a proprietary CD-ROM-based system he had been working on which involved playing video games with 3D graphics to the board. Eventually, Sony President Ohga decided to retain the project after being reminded by Kutaragi of the humiliation he suffered from Nintendo. Nevertheless, due to strong opposition from a majority present at the meeting as well as widespread internal opposition to the project by the older generation of Sony executives, Kutaragi and his team had to be shifted from Sony's headquarters to Sony Music, a completely separate financial entity owned by Sony, so as to retain the project and maintain relationships with Philips for the MMCD development project (which helped lead to the creation of the DVD).[21]
22
+
23
+ According to SCE's producer Ryoji Akagawa and chairman Shigeo Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite graphics or 3D polygon graphics. Eventually, after witnessing the success of Sega's Virtua Fighter in Japanese arcades, Sony realized that "the direction of the PlayStation became instantly clear" and 3D polygon graphics became the console's primary focus.[22]
24
+
25
+ The PlayStation logo was designed by Manabu Sakamoto. He wanted the logo to capture the 3D support of the console, but instead of just adding apparent depth to the letters "P" and "S", he created an optical illusion that suggested the letters in depth of space. Sakamoto also stuck with four bright principal colors, red, yellow, green, and blue, only having to tune the green color for better harmony across the logo. Sakamoto also designed the black and white logo based on the same design, reserved for times where colors could not be used.[23]
26
+
27
+ At Sony Music Entertainment, Kutaragi worked closely with Shigeo Maruyama, the CEO of Sony Music, and with Akira Sato to form Sony Computer Entertainment Inc. (SCEI) on November 16, 1993.[24] A building block of SCEI was its initial partnership with Sony Music which helped SCEI attract creative talent to the company as well as assist SCEI in manufacturing, marketing and producing discs, something that Sony Music had been doing with Music Discs. The final two key members of SCEI were Terry Tokunaka, the President of SCEI from Sony's headquarters, and Olaf Olafsson. Olafsson was CEO and president of New York-based Sony Interactive Entertainment[25] which was the parent company for the 1994-founded Sony Computer Entertainment of America (SCEA).
28
+
29
+ The PlayStation project, SCEI's first official project, was finally given the green light by Sony executives in 1993 after a few years of development. Also in 1993, Phil Harrison, who later became President of Sony Computer Entertainment Worldwide Studios, was recruited into SCEI to attract developers and publishers to produce games for their new PlayStation platform.[21]
30
+
31
+ Computer Gaming World in March 1994 reported a rumor that the "Sony PS-X" would be released in Japan "before the end of this year and will retail for less than $400".[26] After a demonstration of Sony's distribution plan as well as tech demos of its new console to game publishers and developers in a hotel in Tokyo in 1994, numerous developers began to approach PlayStation. Two who later became major partners were Electronic Arts in the West and Namco in Japan. One of the factors which attracted developers to the platform was the use of a 3D-capable, CD-ROM-based console which was much cheaper and easier to manufacture for in comparison to Nintendo's rival console, which used cartridge systems. The project eventually hit Japanese stores in December 1994 and gained massive sales due to its lower price point than its competitor, the Sega Saturn. The popularity of the console spread after its release worldwide in North America and Europe.[21]
32
+
33
+ The original PlayStation, released in Japan on December 3, 1994, was the first of the ubiquitous PlayStation series of console and hand-held game devices. It has included successor consoles and upgrades including the Net Yaroze (a special black PlayStation with tools and instructions to program PlayStation games and applications), "PS one" (a smaller version of the original) and the PocketStation (a handheld which enhances PlayStation games and also acts as a memory card). It was part of the fifth generation of video game consoles competing against the Sega Saturn and the Nintendo 64. By December 2003, the PlayStation and PS one had shipped a combined total of 102.49 million units,[27] eventually becoming the first video game console to sell 120 million units.[2]
34
+
35
+ Released on July 7, 2000,[28] concurrently with its successor the PlayStation 2, the PS One (stylized as PS one) was a considerably smaller, redesigned version of the original PlayStation video game console.[29] The PS one went on to outsell all other consoles, including its successor, throughout the remainder of the year.[29] It featured two main changes from its predecessor, the first being a cosmetic change to the console and the second being the home menu's Graphical User Interface; a variation of the GUI previously used only on PAL consoles up to that point.
36
+
37
+ Released in 2000, 15 months after the Dreamcast and a year before its other competitors, the Xbox and the Nintendo GameCube, the PlayStation 2 is part of the sixth generation of video game consoles, and is backwards-compatible with most original PlayStation games. Like its predecessor, it has received a slimmer redesign. It is the most successful console in the world,[30] having sold over 155 million units as of December 28, 2012.[3] On November 29, 2005, the PS2 became the fastest game console to reach 100 million units shipped, accomplishing the feat within 5 years and 9 months from its launch. This achievement occurred faster than its predecessor, the PlayStation, which took "9 years and 6 months since launch" to reach the same figure.[2] PlayStation 2 shipments in Japan ended on December 28, 2012.[31] The Guardian reported on January 4, 2013 that PS2 production had ended worldwide, but studies showed that many people all around the world still own one even if it is no longer in use. PlayStation 2 has been ranked as the best selling console of all time as of 2015.[32]
38
+
39
+ Released in 2004, four years after the launch of the original PlayStation 2, the PlayStation 2 Slimline was the first major redesign of the PlayStation 2. Compared to its predecessor, the Slimline was smaller, thinner, quieter and also included a built-in Ethernet port (in some markets it also has an integrated modem). In 2007, Sony began shipping a revision of the Slimline which was lighter than the original Slimline together with a lighter AC adapter.[33] In 2008, Sony released yet another revision of the Slimline which had an overhauled internal design incorporating the power supply into the console itself like the original PlayStation 2 resulting in a further reduced total weight of the console.[34]
40
+
41
+ Released on November 11, 2006 in Japan, the PlayStation 3 is a seventh generation game console from Sony. It competes with the Microsoft Xbox 360 and the Nintendo Wii. The PS3 is the first console in the series to introduce the use of motion-sensing technology through its Sixaxis wireless controller. The console also incorporates a Blu-ray Disc player and features high-definition resolution. The PS3 was originally offered with either a 20 GB or 60 GB hard drive, but over the years its capacity increased in increments available up to 500 GB. The PlayStation 3 has sold over 80 million consoles worldwide as of November 2013.[35]
42
+
43
+ Like its predecessors, the PlayStation 3 was re-released in 2009 as a "slim" model. The redesigned model is 33% smaller, 36% lighter, and consumes 34% to 45% less power than previous models.[36][37] In addition, it features a redesigned cooling system and a smaller Cell processor which was moved to a 45nm manufacturing process.[38] It sold in excess of a million units within its first 3 weeks on sale.[39] The redesign also features support for CEC (more commonly referred to by its manufacturer brandings of BraviaSync, VIERA Link, EasyLink and others) which allows control of the console over HDMI by using the remote control as the controller. The PS3 slim also runs quieter and is cooler than previous models due to its 45 nm Cell. The PS3 Slim no longer has the "main power" switch (similar to PlayStation 2 slim), like the previous PS3 models, which was located at the back of the console.[36] It was officially released on September 1, 2009 in North America and Europe and on September 3, 2009 in Japan, Australia and New Zealand.[36][40][41]
44
+
45
+ In 2012, Sony revealed a new "Super Slim" PlayStation 3. The new console, with a completely redesigned case that has a sliding door covering the disc drive (which has been moved to the top of the console), is 4.3 pounds, almost three pounds lighter than the previous "slim" model. The console comes with either 12 GB of flash memory or a 250 GB or 500 GB hard drive. Several bundles which include a Super Slim PS3 and a selection of games are available.
46
+
47
+ The PlayStation 4 (PS4) is the latest video game console from Sony Computer Entertainment announced at a press conference on February 20, 2013. In the meeting, Sony revealed some hardware specifications of the new console.[42][43] The eighth-generation system, launched in the fourth quarter of 2013, introduced the x86 architecture to the PlayStation series. According to lead system architect, Mark Cerny, development on the PlayStation 4 began as early as 2008.[44] PlayStation Europe CEO Jim Ryan emphasized in 2011 that Sony wanted to avoid launching the next-generation console behind the competition.[45]
48
+
49
+ Among the new applications and services, Sony introduced the PlayStation App, allowing PS4 owners to turn smartphones and tablets into a second screen to enhance gameplay.[46] The company also planned to debut PlayStation Now game streaming service, powered by technology from Gaikai.[47][48] By incorporating a share button on the new controller and making it possible to view in-game content being streamed live from friends, Sony planned to place more focus on social gameplay as well.[46] The PlayStation 4 was first released in North America on November 15, 2013.
50
+
51
+ PlayStation 4 Slim (officially marketed simply as PlayStation 4 or PS4) was unveiled on September 7, 2016. It is a revision of the original PS4 hardware with a streamlined form factor. The new casing is 40% smaller and carries a rounded body with a matte finish on the top of the console rather than a two-tone finish. The two USB ports on the front have a larger gap between them, and the optical audio port was also removed.[168] It ships with a minor update to the DualShock 4 controller, with the light bar visible through the top of the touchpad and dark matte grey coloured exterior instead of a partially shiny black. The PS4 Slim was released on September 15, 2016, with a 500 GB model at the same price point as the original PS4 model.[169] Its model number is CUH-2000.[170]
52
+
53
+ PlayStation 4 Pro or PS4 Pro for short (originally announced under the codename Neo)[35] was unveiled on September 7, 2016. Its model number is CUH-7000.[170] It is an updated version of the PlayStation 4 with improved hardware, including an upgraded GPU with 4.2 teraflops of processing power, and higher CPU clock. It is designed primarily to enable selected games to be playable at 4K resolution, and improved quality for PlayStation VR. All games are backwards and forward compatible between PS4 and PS4 Pro, but games with optimizations will have improved graphics performance on PS4 Pro. Although capable of streaming 4K video from online sources, PS4 Pro does not support Ultra HD Blu-ray.[171] [172] [173] Additionally the PS4 Pro is the only PS4 model which can remote play at 1080p. The other models are limited to 720p.[174]
54
+
55
+ The first news of the PlayStation 5 (PS5)[49] came from Mark Cerny in an interview with Wired in April 2019.[50] Sony intends for the PlayStation 5 to be its next-generation console and to ship worldwide by the end of 2020.[51] In early 2019, Sony's financial report for the quarter ending March 31, 2019, affirmed that new next-generation hardware was in development but would ship no earlier than April 2020.[52]
56
+
57
+ The current specifications were released in October 2019.[53] The console is slated to use an 8-core, 16-thread CPU based on AMD's Zen 2 microarchitecture, manufactured on the 7 nanometer process node. The graphics processor is a custom variant of AMD's Navi family using the RDNA microarchitecture, which includes support for hardware acceleration of ray-tracing rendering, enabling real-time ray-traced graphics.[53] The new console will ship with a custom SSD storage, as Cerny emphasized the need for fast loading times and larger bandwidth to make games more immersive, as well as to support the required content streaming from disc for 8K resolution.[50] In a second interview with Wired in October 2019, further details of the new hardware were revealed: the console's integrated Blu-ray drive would support 100GB Blu-ray discs[51] and Ultra HD Blu-ray;[citation needed] whilst game installation from a disc is mandatory as to take advantage of the SSD, the user will have some fine-grain control of how much they want to have installed, such as only installing the multiplayer components of a game.[51] Sony is developing an improved suspended gameplay state for the PlayStation 5 to consume less energy than the PlayStation 4.[54]
58
+
59
+ The system's new controller will have adaptive triggers that can change the resistance to the player as necessary, such as changing the resistance during the action of pulling an arrow back in a bow in-game.[51] The controller will also have strong haptic feedback through voice coil actuators, which together with an improved controller speaker is intended to give better in-game feedback.[51] USB-C connectivity, together with a higher rated battery are other improvements to the new controller.[51]
60
+
61
+ The PlayStation 5 will feature a completely revamped user interface.[49] The PlayStation 5 is set to be backwards-compatible with PlayStation 4 and PlayStation VR games, with Cerny stating that the transition to the new console is meant to be a soft one.[50][53] However, in later interviews Sony was unwilling to commit to backward-compatibility.[55] At CES 2020, Sony unveiled the official logo for the platform.[56]
62
+
63
+
64
+
65
+ Top: PS
66
+
67
+ Bottom: PS One
68
+
69
+ Left: PS2
70
+
71
+ Right: PS2 Slim
72
+
73
+
74
+
75
+ Top: PS3 (left) and PS3 Slim (right)
76
+
77
+ Bottom: PS3 Super Slim
78
+
79
+
80
+
81
+ Top: PS4
82
+
83
+ Bottom: PS4 Pro (PS4 slim not shown)
84
+
85
+ ¥39,800[1]US$299[57]£299[58]
86
+
87
+ PS One
88
+
89
+ unknown
90
+
91
+ ¥39,800[1]US$299[57]£299[58]
92
+
93
+ PS2 Slim
94
+
95
+ unknown
96
+
97
+ ¥49,980 (20 GB)[1]US$499 (20 GB)
98
+
99
+ US$599 (60 GB)[57]£425 (60 GB)[59]€599 (60 GB)[58]
100
+
101
+ PS3 Slim
102
+
103
+ unknown
104
+
105
+ PS3 Super Slim
106
+
107
+ unknown
108
+
109
+ ¥38,980 (500 GB)US$399 (500 GB)€399 (500 GB)£349 (500 GB)
110
+
111
+ PS4 Slim
112
+
113
+ US$299 (500 GB)
114
+
115
+ US$349 (1 TB)
116
+
117
+ €299 (500 GB)
118
+
119
+ €349 (1 TB)
120
+
121
+ PS4 Pro
122
+
123
+ US$399 (1 TB)
124
+
125
+ €399 (1 TB)
126
+
127
+ Resolution: 256x224 – 640x480
128
+ Sprite/BG drawing
129
+ Adjustable frame buffer
130
+ No line restriction
131
+ Unlimited CLUTs (Color Look-Up Tables)
132
+ 4,000 8x8 pixel sprites with individual scaling and rotation
133
+ Simultaneous backgrounds (Parallax scrolling)
134
+ 620,000 polygons/sec
135
+
136
+ Capable of multi-pass rendering;
137
+
138
+ Connected to VU1 on the CPU (a vector unit dedicated to graphics processing, rated at 3.2 GFLOPS) to deliver enhanced shader-style effects and other enhanced graphics
139
+
140
+ DVD Playback
141
+
142
+ Audio file playback (ATRAC3, AAC, MP3, WAV, WMA)
143
+ Video file playback (MPEG1, MPEG2, MPEG4, H.264-AVC, DivX)
144
+
145
+ Image editing and slideshows (JPEG, GIF, PNG, TIFF, BMP)
146
+ Mouse and keyboard support
147
+ Folding@Home client with visualizations from the RSX
148
+
149
+ DVD playback
150
+ Audio playback from inserted USB flash drive
151
+
152
+ The PocketStation was a miniature game console created by SCE as a peripheral for the original PlayStation.[81] Released exclusively in Japan on December 23, 1999,[82] it featured a monochrome LCD, a speaker, a real-time clock and infrared communication capability. It could also be used as a standard PlayStation memory card by connecting it to a PlayStation memory card slot.[81] It was extremely popular in Japan and Sony originally had plans to release it in the United States but the plan was ultimately scrapped due to various manufacturing and supply-and-demand problems.[83][84]
153
+
154
+ The PlayStation Portable (PSP) was Sony's first handheld console, competing with Nintendo's DS. The original model (PSP-1000) was released in December 2004 and March 2005.[85] The console was the first to utilize a new proprietary optical storage medium known as Universal Media Disc (UMD), which can store both games and movies.[86][87] It contains 32 MB of internal flash memory storage, expandable via Memory Stick PRO Duo cards.[88] It has a similar control layout to the PS3, with its PlayStation logo button and its 'Triangle', 'Circle/O', 'Cross/X' and 'Square' buttons in their white-colored forms.
155
+
156
+ The PSP-2000 (also known as the Slim & Lite in PAL territories) was the first major hardware revision of the PlayStation Portable, released in September 2007. The 2000 series was 33% lighter and 19% slimmer than the original PlayStation Portable.[89][90] The capacity of the battery was also reduced by ⅓ but the run time remained the same as the previous model due to lower power consumption. Older model batteries will still work and they extend the amount of playing time.[91] The PSP Slim & Lite has a new gloss finish. Its serial port was also modified in order to accommodate a new video-out feature (while rendering older PSP remote controls incompatible). On a PSP-2000, PSP games will only output to external monitors or TVs in progressive scan mode, so that televisions incapable of supporting progressive scan will not display PSP games; non-game video will output in either progressive or interlaced mode. USB charging was also made possible.[92] Buttons are also reportedly more responsive on the PSP-2000.[93] In 2008, Sony released a second hardware revision called the PSP-3000 which included several features that were not present in the PSP-2000, such as a built-in microphone and upgraded screen, as well as the ability to output PSP games in interlaced mode.
157
+
158
+ Released in October 2009, the PSP Go is the biggest redesign of the PlayStation Portable to date. Unlike previous PSP models, the PSP Go does not feature a UMD drive but instead has 16 GB of internal flash memory to store games, videos and other media.[94] This can be extended by up to 32GB with the use of a Memory Stick Micro (M2) flash card. Also unlike previous PSP models, the PSP Go's rechargeable battery is not removable or replaceable by the user. The unit is 43% lighter and 56% smaller than the original PSP-1000,[95] and 16% lighter and 35% smaller than the PSP-3000.[96] It has a 3.8" 480 × 272 LCD[97] (compared to the larger 4.3" 480 × 272 pixel LCD on previous PSP models).[98] The screen slides up to reveal the main controls. The overall shape and sliding mechanism are similar to that of Sony's mylo COM-2 internet device.[99] The PSP Go was produced and sold concurrently with its predecessor the PSP-3000 although it did not replace it.[95] All games on the PSP Go must be purchased and downloaded from the PlayStation Store as the handheld is not compatible with the original PSP's physical media, the Universal Media Disc. The handheld also features connectivity with the PlayStation 3's controllers the Sixaxis and DualShock 3 via Bluetooth connection.[96]
159
+
160
+ The PSP-E1000 is a budget-focused PSP model which, unlike previous PSP models, does not feature Wi-Fi or stereo speakers (replaced by a single mono speaker)[100] and has a matte "charcoal black" finish similar to the slim PlayStation 3.[101] The E1000 was announced at Gamescom 2011 and available across the PAL region for an RRP of €99.99.[101]
161
+
162
+ Released in Japan on December 17, 2011 and in North America on February 22, 2012,[102] the PlayStation Vita[103] was previously codenamed Next Generation Portable (NGP). It was officially unveiled by Sony on January 27, 2011 at the PlayStation Meeting 2011.[104] The original model of the handheld, the PCH-1000 series, features a 5-inch OLED touchscreen,[105] two analog sticks, a rear touchpad, Sixaxis motion sensing and a quad-core ARM Cortex-A9 MPCore processor.
163
+
164
+ The PCH-2000 series is a lighter redesign of the device that was announced at the SCEJA Press Conference in September 2013, prior to the Tokyo Game Show. This model is 20% thinner and 15% lighter than the original, has an additional hour of battery life, uses an LCD instead of an OLED screen, and includes a micro USB Type B port and 1 GB of internal storage. It was released in Japan on October 10, 2013 in six colors: white, black, pink, yellow, blue, and olive green, and in North America on May 6, 2014.[106]
165
+
166
+ The Vita was discontinued in March 2019. SIE president Jim Ryan said that while the Vita was a great device, the company had moved away from portable consoles: "clearly it's a business that we're no longer in now".[23]
167
+
168
+ Released solely in Japan in 2003, the Sony PSX was a fully integrated DVR and PlayStation 2 video game console. It was the first Sony product to utilize the XrossMediaBar (XMB)[107] and can be linked with a PlayStation Portable to transfer videos and music via USB.[108] It also features software for video, photo and audio editing.[107] PSX supports online game compatibility using an internal broadband adapter. Games that utilize the PS2 HDD (for example, Final Fantasy XI) are supported as well.[109] It was the first product released by Sony under the PlayStation brand that did not include a controller with the device itself.[110]
169
+
170
+ Released in 2010, the Sony BRAVIA KDL22PX300 is a 22-inch 720p television which incorporates a PlayStation 2 console, along with 4 HDMI ports.[111]
171
+
172
+ A 24-inch 1080p PlayStation-branded 3D television, officially called the PlayStation 3D Display, was released in late 2011. A feature of this 3D television is SimulView: during multiplayer games, each player sees only their own screen in full HD through their respective 3D glasses, rather than a split screen (e.g. player 1 sees only player 1's view).
173
+
174
+ The Xperia Play, developed by Sony Ericsson and aimed at gamers, is an Android-powered smartphone with a slide-out gamepad resembling the PSP Go. It was the first device to be PlayStation Certified.
175
+
176
+ Sony Tablets are PlayStation Certified Android tablets released between 2011 and 2013. They offer connectivity with PlayStation 3 controllers and integrate with the PlayStation Network using a proprietary application. The models released in that period were the Sony Tablet S, Sony Tablet P, Xperia Tablet S and Xperia Tablet Z.
177
+
178
+ PlayStation TV, known in Asia as PlayStation Vita TV, is a microconsole and a non-portable variant of the PlayStation Vita handheld. It was announced on September 9, 2013 at a Sony Computer Entertainment Japan presentation. Instead of featuring a display screen, the console connects to a television via HDMI. Users can play using a DualShock 3 controller, although due to the difference in features between the controller and the handheld, certain games are not compatible with PS TV, such as those that are dependent on the system's touch-screen, rear touchpad, microphone or camera. The device is said to be compatible with over 100 Vita games, as well as various digital PlayStation Portable, PlayStation and PC Engine titles. The system supports Remote Play compatibility with the PlayStation 4, allowing players to stream games from the PS4 to a separate TV connected to PS TV, and also allows users to stream content from video services such as Hulu and Niconico, as well as access the PlayStation Store. The system was released in Japan on November 14, 2013, in North America on October 14, 2014, and in Europe and Australasia on November 14, 2014.[112]
179
+
180
+ PlayStation VR is a virtual reality device that is produced by Sony Computer Entertainment. It features a 5.7 inch 1920x1080 resolution OLED display, and operates at 120 Hz which can eliminate blur and produce a smooth image; the device also has a low latency of less than 18ms.[113] Additionally, it produces two sets of images, one being visible on a TV and one for the headset, and includes 3D audio technology so the player can hear from all angles. The PlayStation VR was released in October 2016.[114]
181
+
182
+ The PlayStation Classic is a miniature version of the original 1994 model SCPH-1001 PlayStation console that comes preloaded with 20 games and two original-style controllers. It was launched on December 3, 2018, the 24th anniversary of the original console's launch.[115]
183
+
184
+ Each console has a variety of games. The PlayStation 2, PSX and PlayStation 3 are backwards compatible and can play most of the games released on the original PlayStation. Some of these games can also be played on the PlayStation Portable, but they must be purchased and downloaded from the list of PS one Classics on the PlayStation Store. Games released on the PlayStation 2 can currently only be played on the original console, the PSX and the early, backwards-compatible models of the PlayStation 3. The PlayStation 3 has two types of games: those released on Blu-ray Disc and those downloadable from the PlayStation Store. PlayStation Portable games are available both on the console's physical medium, the Universal Media Disc, and as digital downloads from the PlayStation Store; however, some games are only available on UMD while others are only available from the store. PlayStation Vita games are likewise available both on the console's physical medium, the PlayStation Vita card, and as digital downloads from the PlayStation Store.
185
+
186
+ Sony Computer Entertainment Worldwide Studios is a group of video game developers owned by Sony Computer Entertainment. It is dedicated to developing video games exclusively for the PlayStation series of consoles. The series has produced several best-selling franchises such as the Gran Turismo series of racing video games as well as critically acclaimed titles such as the Uncharted series. Other notable franchises include God of War, Twisted Metal and, more recently, LittleBigPlanet, InFAMOUS, and MotorStorm.
187
+
188
+ Greatest Hits (North America), Platinum Range (PAL territories) and The Best (Japan and Asia) are video games for the Sony PlayStation, PlayStation 2, PlayStation 3, and PlayStation Portable consoles that have been officially re-released at a lower price by Sony. Each region has its own qualifications to enter the re-release program. Initially, during the PlayStation era, a game had to sell at least 150,000 copies (later 250,000)[116] and be on the market for at least a year[117] to enter the Greatest Hits range. During the PlayStation 2 era, the requirements increased with the minimum number of copies sold increasing to 400,000 and the game had to be on the market for at least 9 months.[116] For the PlayStation Portable, games had to be on the market for at least 9 months with 250,000 copies or more sold.[118] Currently, a PlayStation 3 game must be on the market for 10 months and sell at least 500,000 copies to meet the Greatest Hits criteria.[119] PS one Classics were games that were released originally on the PlayStation and have been re-released on the PlayStation Store for the PlayStation 3 and PlayStation Portable. Classics HD are compilations of PlayStation 2 games that have been remastered for the PlayStation 3 on a single disc with additional features such as upscaled graphics, PlayStation Move support, 3D support and PlayStation Network trophies. PlayStation Mobile (formerly PlayStation Suite) is a cross-platform, cross-device software framework aimed at providing PlayStation content, currently original PlayStation games, across several devices including PlayStation Certified Android devices as well as the PlayStation Vita.
189
+
190
+ Sony has generally supported indie game development since incorporating the digital distribution storefront in the PlayStation 3, though initially required developers to complete multiple steps to get an indie game certified on the platform. Sony improved and simplified the process in transitioning to the PlayStation 4.[120]
191
+
192
+ As Sony prepared to transition from the PlayStation 4 to the PlayStation 5, it introduced a new PlayStation Indies program led by Shuhei Yoshida in July 2020. The program's goals are to spotlight new and upcoming indie titles for the PlayStation 4 and 5, focusing on those that are more innovative and novel, akin to past titles such as PaRappa the Rapper, Katamari Damacy, LittleBigPlanet, and Journey. Sony also anticipates bringing more indie titles to the PlayStation Now service as part of this program.[121]
193
+
194
+ Online gaming on PlayStation consoles first started in July 2001 with the release of the PlayStation 2's unnamed online service in Japan. It was released in North America in August 2002, followed by the European release in June 2003. The service was shut down on March 31, 2016.
195
+
196
+ Released in 2006, the PlayStation Network is an online service[122] focusing on online multiplayer gaming and digital media delivery. The service is provided and run by Sony Computer Entertainment for use with the PlayStation 3, and was later implemented on the PlayStation Portable, PlayStation Vita and PlayStation 4 video game consoles.[123] The service has over 103 million active users monthly (as of December 2019).[12] The Sony Entertainment Network provides other features for users like PlayStation Home, PlayStation Store, and Trophies.
197
+
198
+ The PlayStation Store is an online virtual market available to users of the PlayStation 3, PlayStation 4 and PlayStation Portable game consoles via the PlayStation Network. The store uses both physical currency and PlayStation Network Cards. The PlayStation Store's gaming content is updated every Tuesday and offers a range of downloadable content both for purchase and available free of charge. Available content includes full games, add-on content, playable demos, themes and game and movie trailers. The service is accessible through an icon on the XMB on the PS3 and PSP. The PS3 store can also be accessed on the PSP via a Remote Play connection to the PS3. The PSP store is also available via the PC application, Media Go. As of September 24, 2009, there have been more than 600 million downloads from the PlayStation Store worldwide.[124]
199
+
200
+ Video content such as films and television shows is also available from the PlayStation Store on the PlayStation 3 and PSP and will be made available on some new Sony BRAVIA televisions, VAIO laptop computers and Sony Blu-ray Disc players from February 2010.[125]
201
+
202
+ Life with PlayStation was a Folding@home application for the PlayStation 3 which connected to Stanford University's Folding@home distributed computing network and allowed users to donate their console's spare processing cycles to the project.[126] Folding@home is run by Stanford University; volunteers contribute by donating computing power, and research produced by the project may eventually contribute to the development of treatments for disease. The Folding@home client was developed by Sony Computer Entertainment in collaboration with Stanford University.[127] Life with PlayStation also offered a 3D virtual view of the Earth with current weather and news information for various cities and countries around the world, a World Heritage channel with information about historical sites, and the United Village channel, a project designed to share information about communities and cultures worldwide.[128][129] The Life with PlayStation service ended with PlayStation 3 system software update version 4.30 on October 24, 2012.
203
+
204
+ PlayStation Plus, a subscription-based service on the PlayStation Network, complements the standard PSN services.[130] It enables an auto-download feature for game patches and system software updates. Subscribers gain early or exclusive access to some betas, game demos, premium downloadable content (such as full game trials of retail games like Infamous, and LittleBigPlanet) and other PlayStation Store items, as well as a free subscription to Qore. Other downloadable items include PlayStation Store discounts and free PlayStation Network games, PS one Classics, PlayStation Minis, themes and avatars.[131] It offers a 14-day free trial.
205
+
206
+ PlayStation Blog (stylized as PlayStation.Blog) is an online PlayStation-focused gaming blog, part of the PlayStation Network. It was launched on June 11, 2007[132] and has featured numerous interviews with third-party companies such as Square Enix.[133] It also has posts from high-ranking Sony Computer Entertainment executives such as Jack Tretton, former President and Chief Executive Officer of Sony Computer Entertainment, and Shawn Layden, current President, SIEA, and Chairman, SIE Worldwide Studios.[134][135] A sub-site of the blog called PlayStation Blog Share was launched on March 17, 2010; it allowed readers of the blog to submit ideas to the PlayStation team about anything PlayStation-related and to vote on other submissions.[136][137]
207
+
208
+ The PlayStation App is an application that was released on January 11, 2011 in several European countries for iOS (version 4 and above) and for Android (version 1.6 and above),[138] and has been installed more than 3.6 million times as of March 2, 2014.[139] It allows users to view their trophies, see which of their PSN friends are online and read up to date information about PlayStation.[138] It does not feature any gaming functionality.[138]
209
+
210
+ The PlayStation Mobile (formerly PlayStation Suite) is a software framework that will be used to provide downloadable PlayStation content to devices running Android 2.3 and above as well as the PlayStation Vita. The framework will be cross-platform and cross-device, which is what Sony calls "hardware-neutral". It was set to release before the end of calendar year 2011. In addition, Android devices that have been certified to be able to playback PlayStation Suite content smoothly will be certified with the PlayStation Certified certification.[15]
211
+
212
+ PlayStation Now (PS Now) is a Gaikai-based video game streaming service used to provide PlayStation gaming content to PlayStation 3 (PS3), PlayStation 4 (PS4), PlayStation Vita, PlayStation TV and BRAVIA televisions.[140] The service currently allows users to pay for access to a selection of original PlayStation 3 titles on either a per-game basis or via a subscription. PlayStation Now was announced on January 7, 2014 at the 2014 Consumer Electronic Show. At CES, Sony presented demos of The Last of Us, God of War: Ascension, Puppeteer and Beyond: Two Souls, playable through PS Now on Bravia TVs and PlayStation Vitas. PlayStation Now was launched in Open Beta in the United States and Canada on PS4 on July 31, 2014, on PS3 on September 18, 2014, on PS Vita and PS TV on October 14, 2014, with support for select 2014 Bravia TVs coming later in the year.[141]
213
+
214
+ PlayStation Home is a community-based social gaming networking service for the PlayStation 3 on the PlayStation Network (PSN). It is available directly from the PlayStation 3 XrossMediaBar. Membership is free, and only requires a PSN account. Home has been in development since early 2005 and started an open public beta test on December 11, 2008.[142] Home allows users to create a custom avatar, which can be made to suit the user's preference.[143] Users can decorate their avatar's personal apartment ("HomeSpace") with default, bought, or won items. They can travel throughout the Home world (except cross region), which is constantly updated by Sony and partners. Each part of the world is known as a space. Public spaces can just be for display, fun, or for meeting people. Home features many mini-games which can be single player or multiplayer. Users can shop for new items to express themselves more through their avatars or HomeSpace.[144] Home features video screens in many places for advertising, but the main video content is shown at the theatre for entertainment. Home plays host to a variety of special events which range from prize-giving events to entertaining events. Users can also use Home to connect with friends and customize content.[142] Xi, a once notable feature of Home, is the world's first console based Alternate Reality Game that took place in secret areas in Home and was created by nDreams.[145][146]
215
+
216
+ "Room" (officially spelled R∞M, with capital letters and the infinity symbol in place of the "oo") was beta tested in Japan from October 2009 to April 2010. Development of Room was halted on April 15, 2010 due to negative feedback from the community.[147] Announced at TGS 2009, it was intended to be a service similar to PlayStation Home, developed for the PSP.[148] Launching directly from the PlayStation Network section of the XMB was also to be enabled. Just like in Home, PSP owners would have been able to invite other PSP owners into their rooms to "enjoy real time communication."[149] A closed beta test had begun in Q4 2009 in Japan.[150]
217
+
218
+ The XrossMediaBar, originally used on the PSX, is a graphical user interface used for the PlayStation 3 and PlayStation Portable, as well as a variety of other Sony devices. The interface features icons that are spread horizontally across the screen. Navigation moves the icons instead of a cursor. These icons are used as categories to organize the options available to the user. When an icon is selected on the horizontal bar, several more appear vertically, above and below it (selectable by the up and down directions on a directional pad).[151] The XMB can also be accessed in-game, albeit with restrictions; this allows players to access certain areas of the XMB menu from within a game and is only available on the PlayStation 3.[152] Although the ability to play users' own music in-game was added with the same update, the feature is dependent on game developers, who must either enable it in their games or update existing games.[153]
219
+
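+ To make that navigation model concrete, here is a minimal, purely illustrative Python sketch of XMB-style navigation (the category names, class name and method are invented for this example and are not taken from any Sony software): left/right moves along the horizontal bar of category icons, while up/down moves through the items listed vertically under the selected category.
+
+ # Toy model of XMB-style navigation (illustrative only; not Sony code).
+ CATEGORIES = {
+     "Users": ["User 1", "User 2"],
+     "Settings": ["System Settings", "Display Settings"],
+     "Game": ["Game Data Utility", "Disc"],
+     "Network": ["Internet Browser", "Remote Play"],
+ }
+
+ class XmbCursor:
+     def __init__(self, categories):
+         self.names = list(categories)   # the horizontal bar of category icons
+         self.items = categories         # the vertical item list per category
+         self.col = 0
+         self.row = 0
+
+     def move(self, direction):
+         if direction in ("left", "right"):
+             self.col = (self.col + (1 if direction == "right" else -1)) % len(self.names)
+             self.row = 0                # a newly selected category starts at its first item
+         else:
+             items = self.items[self.names[self.col]]
+             self.row = (self.row + (1 if direction == "down" else -1)) % len(items)
+         return self.names[self.col], self.items[self.names[self.col]][self.row]
+
+ cursor = XmbCursor(CATEGORIES)
+ print(cursor.move("right"))  # ('Settings', 'System Settings')
+ print(cursor.move("down"))   # ('Settings', 'Display Settings')
+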
220
+ LiveArea, designed to be used on the PlayStation Vita, is a graphical user interface set to incorporate various social networking features via the PlayStation Network. It has been designed specifically as a touchscreen user interface.[154]
221
+
222
+ In 2002, following the Net Yaroze experiment for the original PlayStation, Sony released an official Linux kit for the PlayStation 2, one of the first fully functioning operating system packages for a video game console. The kit, which included an internal hard disk drive and the necessary software tools, turned the PlayStation 2 into a full-fledged computer system running Linux. Users can utilize a network adapter to connect the PlayStation 2 to the internet, a monitor cable adaptor to connect it to computer monitors, and a USB keyboard and mouse to control Linux on the PlayStation 2.[155][156]
223
+
224
+ The PlayStation 3 (excluding the PlayStation 3 Slim) also supports running a Linux OS on firmware versions prior to 3.21, without the need to purchase additional hardware. Yellow Dog Linux provides an official distribution that can be downloaded, and other distributions such as Fedora, Gentoo and Ubuntu have been successfully installed and operated on the console.[38] Linux on the PlayStation 3 allowed users to access 6 of the 7 Synergistic Processing Elements; Sony implemented a hypervisor restricting access to the RSX. The ability to install a second operating system on a PlayStation 3 was removed in a firmware update released in 2010.[157]
225
+
226
+ Released in 1994, the PlayStation control pad was the first controller made for the original PlayStation. It featured a basic design with a D-pad, four main face buttons (the green Triangle, red Circle, blue Cross and pink Square), and Start and Select buttons on the face. Shoulder buttons (L1, L2, R1, R2) are featured on the top, named by side (L = left, R = right) and position (1 = top, 2 = bottom). In 1996, Sony released the PlayStation Analog Joystick for use with flight simulation games.[158] The original digital controller was then replaced by the Dual Analog in 1997, which added two analog sticks based on the same potentiometer technology as the Analog Joystick.[159] This controller was in turn succeeded by the DualShock controller.
227
+
228
+ Released in 1998, the DualShock controller for the PlayStation succeeded the Dual Analog and became the longest-running series of controllers for the PlayStation brand. In addition to the inputs of the original digital controller (Triangle, Circle, Cross, Square, L1, L2, R1, R2, Start, Select and a D-pad), the DualShock featured two analog sticks in a similar fashion to the previous Dual Analog controller, which can also be depressed to activate the L3 and R3 buttons.[160]
229
+
230
+ The DualShock series consists of four controllers: the DualShock, the fourth controller released for the PlayStation; the DualShock 2, the only standard controller released for the PlayStation 2; the DualShock 3, the second controller released for the PlayStation 3; and the DualShock 4, which went through a major redesign and is the default input of the PlayStation 4. Upon release, the DualShock 4 was compatible with the PS3 only via USB; Bluetooth connectivity was eventually enabled through a firmware update. The Sixaxis was the first official controller for the PlayStation 3 and is based on the same design as the DualShock series, but lacks the vibration motors of the DualShock controllers.
231
+
232
+ Like the Dual Analog, the DualShock and DualShock 2 feature an "Analog" button between the analog sticks that toggles the analog sticks on and off (for use with games which support only the digital input of the original controller). On the PlayStation 3 Sixaxis and DualShock 3 controllers, the analog sticks are always enabled. Beginning with the Sixaxis, a 'PlayStation button' (which featured the incorporated PS logo and is similar in function to the Xbox 360 "Guide" button) was included on controllers. The PlayStation button replaces the "Analog" button of the DualShock and DualShock 2 controllers. Pressing the PS button on the PS3 brings up the XMB, while holding it down brings up system options (such as quit the game, change controller settings, turn off the system, and turn off the controller).[161]
233
+
234
+ PlayStation Move is a motion-sensing game controller platform for the PlayStation 3 video game console by Sony Computer Entertainment (SCE). Based on the handheld motion controller wand, PlayStation Move uses the PlayStation Eye webcam to track the wand's position and the inertial sensors in the wand to detect its motion. First revealed on June 2, 2009, PlayStation Move was launched in Q3/Q4 2010. Hardware available at launch included the main PlayStation Move motion controller and an optional PlayStation Move sub-controller.[162]
235
+ Although PlayStation Move is implemented on the existing PlayStation 3 console, Sony states that it is treating Move's debut as its own major "platform launch", planning an aggressive marketing campaign to support it. In addition to selling the controllers individually,[163] Sony also plans to provide several different bundle options for PlayStation Move hardware; including a starter kit with a PS Eye, a Move motion controller, and a demo/sampler disc, priced under US$100;[164] a full console pack with a PS3 console, DualShock 3 gamepad, PS Eye, and Move motion controller; and bundles of a Move motion controller with select games.[163]
236
+
237
+ The PlayStation brand has a wide series of magazines across different continents covering PlayStation-related articles and stories. Many of these magazines work closely with Sony and thus often come with demo discs for PlayStation games. Currently three magazines are still in circulation: PlayStation: The Official Magazine,[165] PlayStation Official Magazine,[166] and Official PlayStation Magazine (Australia).[167] Over the years many PlayStation magazines have been launched and a few have become defunct; these include the Official U.S. PlayStation Magazine,[168] Official UK PlayStation Magazine,[169] and Official UK PlayStation 2 Magazine.[170]
238
+
239
+ PlayStation Underground was a non-traditional magazine that Sony Computer Entertainment America produced and published between Spring 1997 and Spring 2001. Subscribers received two PlayStation CDs every quarter, along with a booklet and colorful packaging.[171] The CDs contained interviews, cheats, programmers' moves, game demos and one-of-a-kind memory card saves. Several issues showed how a game was created from basic design to final product. Since the CDs could only be run on a PlayStation, the magazine proved a useful marketing tool and spawned a line of PlayStation Underground JamPack demo CDs, which contained highlights from recent issues along with as many game demos as could be packed onto a single CD. Unlike PlayStation Underground, these were available in most stores for $4.95, were published twice a year in summer and winter, and usually spotlighted newly released or upcoming games. By 2001, Sony had decided to phase out Underground to focus on the JamPacks with the release of the PlayStation 2. PlayStation Underground CDs are mainly in the hands of collectors these days.[172]
240
+
241
+ Advertising slogans used for each PlayStation console iteration:
242
+
243
+ The most notable of recent PlayStation commercials is the series of "It Only Does Everything" commercials featuring a fictional character called Kevin Butler who is a Vice President at PlayStation. These commercials usually advertise the PlayStation 3 and its games through a series of comedic answers to "Dear PlayStation" queries.[183] These commercials garnered popularity among gamers, though its debut commercial received criticism from the Nigerian government due to a reference to the common 419 scams originating in Nigeria. Sony issued an apology and a new version of the advert with the offending line changed was produced.[191]
244
+
245
+ A spin-off of the campaign has been created for the PlayStation Portable which features similar campaign commercials called the "Step Your Game Up" campaign featuring a fictional teenage character named Marcus Rivers acting in a similar fashion to Kevin Butler but answering the "Dear PlayStation" queries about the PSP.[180]
246
+
247
+ In July 2006, an advertising campaign was released in the Netherlands in which a white model dressed entirely in white and a black model dressed entirely in black were used to compare Sony's new Ceramic White PSP and the original Piano Black PSP. This series of ads depicted the two models fighting with each other[192] and drew criticism from the media for being racist, though Sony maintained that the ads did not carry any racist message.[193]
248
+
249
+ In November 2006, a marketing company employed by Sony's American division created a website entitled "All I want for Xmas is a PSP", designed to promote the PSP virally. The site contained a blog which was purportedly written by "Charlie", a teenage boy attempting to get his friend Jeremy's parents to buy him a PSP, and providing a "music video" of either Charlie or Jeremy "rapping" about the PSP. Visitors to the website quickly recognized that the website was registered to a marketing company, exposing the campaign on sites such as YouTube and digg. Sony was forced to admit that the site was in fact a marketing campaign and in an interview with next-gen.biz, Sony admitted that the idea was "poorly executed".[194]
250
+
251
+ In 2005, Australian newspaper The Age wrote an article about the PlayStation brand. Among the numerous interviews conducted with various people in the industry was an interview with Dr Jeffrey Brand, associate professor in communication and media at Bond University, who said, "PlayStation re-ignited our imagination with video games". Game designer Yoshiki Okamoto called the brand "revolutionary — PlayStation has changed gaming, distribution, sales, image and more", while Evan Wells of Naughty Dog said "PlayStation is responsible for making playing games cool."[195]
252
+
253
+ In 2009, ViTrue, Inc. listed the PlayStation brand at number 13 on its list "The Vitrue 100: Top Social Brands of 2009". The ranking was based largely on activity across popular social media categories such as social networking, video sharing, photo sharing and blogs.[196]
254
+
255
+ In 2010, Gizmodo stated that the PlayStation brand was one of the last Sony products to completely stand apart from its competitors, stating that "If you ask the average person on the street what their favorite Sony product is, more often than not you'll hear PlayStation".[197] As of April 2012, the PlayStation brand is the "most followed" brand on social networking site, Facebook, with over 22 million fans and followers in total which is more than any other brand in the entertainment industry. A study by Greenlight's Entertainment Retail has also shown that the PlayStation brand is the most interactive making 634 posts and tweets on social networking sites Facebook and Twitter.[198]
256
+
257
+ In July 2014, Sony boasted in a company release video that the PlayStation 3, PlayStation 4 and PlayStation Vita sold a combined total of 100 million units.[199] It was announced at Tokyo Game Show on September 1, 2014, that PlayStation home game consoles claim 78% market share of all home consoles in Japan.[200]
258
+
259
+ As of 2015[update], PlayStation is the strongest selling console brand worldwide.[201]
260
+
en/4672.html.txt ADDED
@@ -0,0 +1,205 @@
1
+
2
+
3
+ Lead (/ˈlɛd/) is a chemical element with the symbol Pb (from the Latin plumbum) and atomic number 82. It is a heavy metal that is denser than most common materials. Lead is soft and malleable, and also has a relatively low melting point. When freshly cut, lead is silvery with a hint of blue; it tarnishes to a dull gray color when exposed to air. Lead has the highest atomic number of any stable element and three of its isotopes are endpoints of major nuclear decay chains of heavier elements.
4
+
5
+ Lead is a relatively unreactive post-transition metal. Its weak metallic character is illustrated by its amphoteric nature; lead and lead oxides react with acids and bases, and it tends to form covalent bonds. Compounds of lead are usually found in the +2 oxidation state rather than the +4 state common with lighter members of the carbon group. Exceptions are mostly limited to organolead compounds. Like the lighter members of the group, lead tends to bond with itself; it can form chains and polyhedral structures.
6
+
7
+ Lead is easily extracted from its ores; prehistoric people in Western Asia knew of it. Galena is a principal ore of lead which often bears silver. Interest in silver helped initiate widespread extraction and use of lead in ancient Rome. Lead production declined after the fall of Rome and did not reach comparable levels until the Industrial Revolution. In 2014, the annual global production of lead was about ten million tonnes, over half of which was from recycling. Lead's high density, low melting point, ductility and relative inertness to oxidation make it useful. These properties, combined with its relative abundance and low cost, resulted in its extensive use in construction, plumbing, batteries, bullets and shot, weights, solders, pewters, fusible alloys, white paints, leaded gasoline, and radiation shielding.
8
+
9
+ In the late 19th century, lead's toxicity was recognized, and its use has since been phased out of many applications. However, many countries still allow the sale of products that expose humans to lead, including some types of paints and bullets. Lead is a neurotoxin that accumulates in soft tissues and bones; it damages the nervous system and interferes with the function of biological enzymes, causing neurological disorders, such as brain damage and behavioral problems.
10
+
11
+ A lead atom has 82 electrons, arranged in an electron configuration of [Xe] 4f^14 5d^10 6s^2 6p^2. The sum of lead's first and second ionization energies—the total energy required to remove the two 6p electrons—is close to that of tin, lead's upper neighbor in the carbon group. This is unusual; ionization energies generally fall going down a group, as an element's outer electrons become more distant from the nucleus, and more shielded by smaller orbitals. The similarity of ionization energies is caused by the lanthanide contraction—the decrease in element radii from lanthanum (atomic number 57) to lutetium (71), and the relatively small radii of the elements from hafnium (72) onwards. This is due to poor shielding of the nucleus by the lanthanide 4f electrons. The sum of the first four ionization energies of lead exceeds that of tin,[3] contrary to what periodic trends would predict. Relativistic effects, which become significant in heavier atoms, contribute to this behavior.[a] One such effect is the inert pair effect: the 6s electrons of lead become reluctant to participate in bonding, making the distance between nearest atoms in crystalline lead unusually long.[5]
12
+
13
+ Lead's lighter carbon group congeners form stable or metastable allotropes with the tetrahedrally coordinated and covalently bonded diamond cubic structure. The energy levels of their outer s- and p-orbitals are close enough to allow mixing into four hybrid sp3 orbitals. In lead, the inert pair effect increases the separation between its s- and p-orbitals, and the gap cannot be overcome by the energy that would be released by extra bonds following hybridization.[6] Rather than having a diamond cubic structure, lead forms metallic bonds in which only the p-electrons are delocalized and shared between the Pb2+ ions. Lead consequently has a face-centered cubic structure[7] like the similarly sized[8] divalent metals calcium and strontium.[9][b][c][d]
14
+
15
+ Pure lead has a bright, silvery appearance with a hint of blue.[14] It tarnishes on contact with moist air and takes on a dull appearance, the hue of which depends on the prevailing conditions. Characteristic properties of lead include high density, malleability, ductility, and high resistance to corrosion due to passivation.[15]
16
+
17
+ Lead's close-packed face-centered cubic structure and high atomic weight result in a density[16] of 11.34 g/cm3, which is greater than that of common metals such as iron (7.87 g/cm3), copper (8.93 g/cm3), and zinc (7.14 g/cm3).[17] This density is the origin of the idiom to go over like a lead balloon.[18][19][e] Some rarer metals are denser: tungsten and gold are both at 19.3 g/cm3, and osmium—the densest metal known—has a density of 22.59 g/cm3, almost twice that of lead.[20]
18
+
19
+ Lead is a very soft metal with a Mohs hardness of 1.5; it can be scratched with a fingernail.[21] It is quite malleable and somewhat ductile.[22][f] The bulk modulus of lead—a measure of its ease of compressibility—is 45.8 GPa. In comparison, that of aluminium is 75.2 GPa; copper 137.8 GPa; and mild steel 160–169 GPa.[23] Lead's tensile strength, at 12–17 MPa, is low (that of aluminium is 6 times higher, copper 10 times, and mild steel 15 times higher); it can be strengthened by adding small amounts of copper or antimony.[24]
20
+
21
+ The melting point of lead—at 327.5 °C (621.5 °F)[25]—is very low compared to most metals.[16][g] Its boiling point of 1749 °C (3180 °F)[25] is the lowest among the carbon group elements. The electrical resistivity of lead at 20 °C is 192 nanoohm-meters, almost an order of magnitude higher than those of other industrial metals (copper at 15.43 nΩ·m; gold 20.51 nΩ·m; and aluminium at 24.15 nΩ·m).[27] Lead is a superconductor at temperatures lower than 7.19 K;[28] this is the highest critical temperature of all type-I superconductors and the third highest of the elemental superconductors.[29]
22
+
23
+ Natural lead consists of four stable isotopes with mass numbers of 204, 206, 207, and 208,[30] and traces of five short-lived radioisotopes.[31] The high number of isotopes is consistent with lead's atomic number being even.[h] Lead has a magic number of protons (82), for which the nuclear shell model accurately predicts an especially stable nucleus.[32] Lead-208 has 126 neutrons, another magic number, which may explain why lead-208 is extraordinarily stable.[32]
24
+
25
+ With its high atomic number, lead is the heaviest element whose natural isotopes are regarded as stable; lead-208 is the heaviest stable nucleus. (This distinction formerly fell to bismuth, with an atomic number of 83, until its only primordial isotope, bismuth-209, was found in 2003 to decay very slowly.)[i] The four stable isotopes of lead could theoretically undergo alpha decay to isotopes of mercury with a release of energy, but this has not been observed for any of them; their predicted half-lives range from 10^35 to 10^189 years[35] (at least 10^25 times the current age of the universe).
26
+
27
+ Three of the stable isotopes are found in three of the four major decay chains: lead-206, lead-207, and lead-208 are the final decay products of uranium-238, uranium-235, and thorium-232, respectively.[36] These decay chains are called the uranium chain, the actinium chain, and the thorium chain.[37] Their isotopic concentrations in a natural rock sample depend greatly on the presence of these three parent uranium and thorium isotopes. For example, the relative abundance of lead-208 can range from 52% in normal samples to 90% in thorium ores;[38] for this reason, the standard atomic weight of lead is given to only one decimal place.[39] As time passes, the ratio of lead-206 and lead-207 to lead-204 increases, since the former two are supplemented by radioactive decay of heavier elements while the latter is not; this allows for lead–lead dating. As uranium decays into lead, their relative amounts change; this is the basis for uranium–lead dating.[40] Lead-207 exhibits nuclear magnetic resonance, a property that has been used to study its compounds in solution and solid state,[41][42] including in the human body.[43]
28
+
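+ As a rough illustration of the arithmetic behind uranium–lead dating (a sketch only: the function name and example ratio are invented here, and the uranium-238 half-life of about 4.47 billion years is a standard literature value rather than something stated in this article), if all of the lead-206 in a mineral grew in from uranium-238 decay, the measured 206Pb/238U atom ratio fixes the age t through N_Pb/N_U = e^(λt) − 1:
+
+ import math
+
+ U238_HALF_LIFE_YEARS = 4.468e9                      # standard literature value, in years
+ LAMBDA_U238 = math.log(2) / U238_HALF_LIFE_YEARS    # decay constant, per year
+
+ def age_from_pb_u_ratio(pb206_per_u238):
+     """Age in years implied by a measured 206Pb/238U atom ratio (closed system assumed)."""
+     return math.log(1.0 + pb206_per_u238) / LAMBDA_U238
+
+ print(f"{age_from_pb_u_ratio(0.5):.2e} years")      # about 2.6 billion years
+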
29
+ Apart from the stable isotopes, which make up almost all lead that exists naturally, there are trace quantities of a few radioactive isotopes. One of them is lead-210; although it has a half-life of only 22.3 years,[30] small quantities occur in nature because lead-210 is produced by a long decay series that starts with uranium-238 (that has been present for billions of years on Earth). Lead-211, -212, and -214 are present in the decay chains of uranium-235, thorium-232, and uranium-238, respectively, so traces of all three of these lead isotopes are found naturally. Minute traces of lead-209 arise from the very rare cluster decay of radium-223, one of the daughter products of natural uranium-235, and the decay chain of neptunium-237, traces of which are produced by neutron capture in uranium ores. Lead-210 is particularly useful for helping to identify the ages of samples by measuring its ratio to lead-206 (both isotopes are present in a single decay chain).[44]
30
+
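+ For a sense of what the 22.3-year half-life of lead-210 means in practice, here is a small illustrative calculation (a sketch only; the function name is invented, and simple exponential decay is assumed):
+
+ def pb210_fraction_remaining(years, half_life=22.3):
+     """Fraction of an initial lead-210 inventory left after the given number of years."""
+     return 0.5 ** (years / half_life)
+
+ print(f"{pb210_fraction_remaining(100):.3f}")   # roughly 0.045 after a century
+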
31
+ In total, 43 lead isotopes have been synthesized, with mass numbers 178–220.[30] Lead-205 is the most stable radioisotope, with a half-life of around 1.73×10^7 years.[j] The second-most stable is lead-202, which has a half-life of about 52,500 years, longer than any of the natural trace radioisotopes.[30]
32
+
33
+ Bulk lead exposed to moist air forms a protective layer of varying composition. Lead(II) carbonate is a common constituent;[46][47][48] the sulfate or chloride may also be present in urban or maritime settings.[49] This layer makes bulk lead effectively chemically inert in the air.[49] Finely powdered lead, as with many metals, is pyrophoric,[50] and burns with a bluish-white flame.[51]
34
+
35
+ Fluorine reacts with lead at room temperature, forming lead(II) fluoride. The reaction with chlorine is similar but requires heating, as the resulting chloride layer diminishes the reactivity of the elements.[49] Molten lead reacts with the chalcogens to give lead(II) chalcogenides.[52]
36
+
37
+ Lead metal resists sulfuric and phosphoric acid but not hydrochloric or nitric acid; the outcome depends on insolubility and subsequent passivation of the product salt.[53] Organic acids, such as acetic acid, dissolve lead in the presence of oxygen.[49] Concentrated alkalis will dissolve lead and form plumbites.[54]
38
+
39
+ Lead shows two main oxidation states: +4 and +2. The tetravalent state is common for the carbon group. The divalent state is rare for carbon and silicon, minor for germanium, important (but not prevailing) for tin, and is the more important of the two oxidation states for lead.[49] This is attributable to relativistic effects, specifically the inert pair effect, which manifests itself when there is a large difference in electronegativity between lead and oxide, halide, or nitride anions, leading to a significant partial positive charge on lead. The result is a stronger contraction of the lead 6s orbital than is the case for the 6p orbital, making it rather inert in ionic compounds. The inert pair effect is less applicable to compounds in which lead forms covalent bonds with elements of similar electronegativity, such as carbon in organolead compounds. In these, the 6s and 6p orbitals remain similarly sized and sp3 hybridization is still energetically favorable. Lead, like carbon, is predominantly tetravalent in such compounds.[55]
40
+
41
+ There is a relatively large difference in the electronegativity of lead(II) at 1.87 and lead(IV) at 2.33. This difference marks the reversal in the trend of increasing stability of the +4 oxidation state going down the carbon group; tin, by comparison, has values of 1.80 in the +2 oxidation state and 1.96 in the +4 state.[56]
42
+
43
+ Lead(II) compounds are characteristic of the inorganic chemistry of lead. Even strong oxidizing agents like fluorine and chlorine react with lead to give only PbF2 and PbCl2.[49] Lead(II) ions are usually colorless in solution,[57] and partially hydrolyze to form Pb(OH)+ and finally [Pb4(OH)4]4+ (in which the hydroxyl ions act as bridging ligands),[58][59] but are not reducing agents as tin(II) ions are. Techniques for identifying the presence of the Pb2+ ion in water generally rely on the precipitation of lead(II) chloride using dilute hydrochloric acid. As the chloride salt is sparingly soluble in water, in very dilute solutions the precipitation of lead(II) sulfide is achieved by bubbling hydrogen sulfide through the solution.[60]
44
+
45
+ Lead monoxide exists in two polymorphs, litharge α-PbO (red) and massicot β-PbO (yellow), the latter being stable only above around 488 °C. Litharge is the most commonly used inorganic compound of lead.[61] There is no lead(II) hydroxide; increasing the pH of solutions of lead(II) salts leads to hydrolysis and condensation.[62]
46
+ Lead commonly reacts with heavier chalcogens. Lead sulfide is a semiconductor, a photoconductor, and an extremely sensitive infrared radiation detector. The other two chalcogenides, lead selenide and lead telluride, are likewise photoconducting. They are unusual in that their color becomes lighter going down the group.[63]
47
+
48
+ Lead dihalides are well-characterized; this includes the diastatide[64] and mixed halides, such as PbFCl. The relative insolubility of the latter forms a useful basis for the gravimetric determination of fluorine. The difluoride was the first solid ionically conducting compound to be discovered (in 1834, by Michael Faraday).[65] The other dihalides decompose on exposure to ultraviolet or visible light, especially the diiodide.[66] Many lead(II) pseudohalides are known, such as the cyanide, cyanate, and thiocyanate.[63][67] Lead(II) forms an extensive variety of halide coordination complexes, such as [PbCl4]^2−, [PbCl6]^4−, and the [Pb2Cl9]n^(5n−) chain anion.[66]
49
+
50
+ Lead(II) sulfate is insoluble in water, like the sulfates of other heavy divalent cations. Lead(II) nitrate and lead(II) acetate are very soluble, and this is exploited in the synthesis of other lead compounds.[68]
51
+
52
+ Few inorganic lead(IV) compounds are known. They are only formed in highly oxidizing solutions and do not normally exist under standard conditions.[69] Lead(II) oxide gives a mixed oxide on further oxidation, Pb3O4. It is described as lead(II,IV) oxide, or structurally 2PbO·PbO2, and is the best-known mixed valence lead compound. Lead dioxide is a strong oxidizing agent, capable of oxidizing hydrochloric acid to chlorine gas.[70] This is because the expected PbCl4 that would be produced is unstable and spontaneously decomposes to PbCl2 and Cl2.[71] Analogously to lead monoxide, lead dioxide is capable of forming plumbate anions. Lead disulfide[72] and lead diselenide[73] are only stable at high pressures. Lead tetrafluoride, a yellow crystalline powder, is stable, but less so than the difluoride. Lead tetrachloride (a yellow oil) decomposes at room temperature, lead tetrabromide is less stable still, and the existence of lead tetraiodide is questionable.[74]
53
+
54
+ Some lead compounds exist in formal oxidation states other than +4 or +2. Lead(III) may be obtained, as an intermediate between lead(II) and lead(IV), in larger organolead complexes; this oxidation state is not stable, as both the lead(III) ion and the larger complexes containing it are radicals.[76][77][78] The same applies for lead(I), which can be found in such radical species.[79]
55
+
56
+ Numerous mixed lead(II,IV) oxides are known. When PbO2 is heated in air, it becomes Pb12O19 at 293 °C, Pb12O17 at 351 °C, Pb3O4 at 374 °C, and finally PbO at 605 °C. A further sesquioxide, Pb2O3, can be obtained at high pressure, along with several non-stoichiometric phases. Many of them show defective fluorite structures in which some oxygen atoms are replaced by vacancies: PbO can be considered as having such a structure, with every alternate layer of oxygen atoms absent.[80]
57
+
58
+ Negative oxidation states can occur as Zintl phases, as either free lead anions, as in Ba2Pb, with lead formally being lead(−IV),[81] or in oxygen-sensitive ring-shaped or polyhedral cluster ions such as the trigonal bipyramidal Pb5^2− ion, where two lead atoms are lead(−I) and three are lead(0).[82] In such anions, each atom is at a polyhedral vertex and contributes two electrons to each covalent bond along an edge from their sp3 hybrid orbitals, the other two being an external lone pair.[58] They may be made in liquid ammonia via the reduction of lead by sodium.[83]
59
+
60
+ Lead can form multiply-bonded chains, a property it shares with its lighter homologs in the carbon group. Its capacity to do so is much less because the Pb–Pb bond energy is over three and a half times lower than that of the C–C bond.[52] With itself, lead can build metal–metal bonds of an order up to three.[84] With carbon, lead forms organolead compounds similar to, but generally less stable than, typical organic compounds[85] (due to the Pb–C bond being rather weak).[58] This makes the organometallic chemistry of lead far less wide-ranging than that of tin.[86] Lead predominantly forms organolead(IV) compounds, even when starting with inorganic lead(II) reactants; very few organolead(II) compounds are known. The most well-characterized exceptions are Pb[CH(SiMe3)2]2 and Pb(η5-C5H5)2.[86]
61
+
62
+ The lead analog of the simplest organic compound, methane, is plumbane. Plumbane may be obtained in a reaction between metallic lead and atomic hydrogen.[87] Two simple derivatives, tetramethyllead and tetraethyllead, are the best-known organolead compounds. These compounds are relatively stable: tetraethyllead only starts to decompose if heated[88] or if exposed to sunlight or ultraviolet light.[89][k] With sodium metal, lead readily forms an equimolar alloy that reacts with alkyl halides to form organometallic compounds such as tetraethyllead.[90] The oxidizing nature of many organolead compounds is usefully exploited: lead tetraacetate is an important laboratory reagent for oxidation in organic synthesis.[91] Tetraethyllead, once added to gasoline, was produced in larger quantities than any other organometallic compound.[86] Other organolead compounds are less chemically stable.[85] For many organic compounds, a lead analog does not exist.[87]
63
+
64
+ Lead's per-particle abundance in the Solar System is 0.121 ppb (parts per billion).[92][l] This figure is two and a half times higher than that of platinum, eight times more than mercury, and seventeen times more than gold.[92] The amount of lead in the universe is slowly increasing[93] as most heavier atoms (all of which are unstable) gradually decay to lead.[94] The abundance of lead in the Solar System since its formation 4.5 billion years ago has increased by about 0.75%.[95] The solar system abundances table shows that lead, despite its relatively high atomic number, is more prevalent than most other elements with atomic numbers greater than 40.[92]
65
+
66
+ Primordial lead—which comprises the isotopes lead-204, lead-206, lead-207, and lead-208—was mostly created as a result of repetitive neutron capture processes occurring in stars. The two main modes of capture are the s- and r-processes.[96]
67
+
68
+ In the s-process (s is for "slow"), captures are separated by years or decades, allowing less stable nuclei to undergo beta decay.[97] A stable thallium-203 nucleus can capture a neutron and become thallium-204; this undergoes beta decay to give stable lead-204; on capturing another neutron, it becomes lead-205, which has a half-life of around 15 million years. Further captures result in lead-206, lead-207, and lead-208. On capturing another neutron, lead-208 becomes lead-209, which quickly decays into bismuth-209. On capturing another neutron, bismuth-209 becomes bismuth-210, and this beta decays to polonium-210, which alpha decays to lead-206. The cycle hence ends at lead-206, lead-207, lead-208, and bismuth-209.[98]
69
+
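+ The capture-and-decay sequence described above can be summarized step by step; the following toy listing (illustrative only, with the steps transcribed from this paragraph rather than from a nuclear data library) prints the chain from thallium-203 to the closed lead-206/207/208 and bismuth-209 cycle:
+
+ # Each entry: (starting nucleus, what happens, resulting nucleus)
+ S_PROCESS_STEPS = [
+     ("Tl-203", "neutron capture, then beta decay of Tl-204", "Pb-204"),
+     ("Pb-204", "neutron capture", "Pb-205"),
+     ("Pb-205", "neutron capture", "Pb-206"),
+     ("Pb-206", "neutron capture", "Pb-207"),
+     ("Pb-207", "neutron capture", "Pb-208"),
+     ("Pb-208", "neutron capture, then beta decay of Pb-209", "Bi-209"),
+     ("Bi-209", "neutron capture, beta decay to Po-210, alpha decay", "Pb-206"),
+ ]
+
+ for start, step, end in S_PROCESS_STEPS:
+     print(f"{start:>7} --[{step}]--> {end}")
+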
70
+ In the r-process (r is for "rapid"), captures happen faster than nuclei can decay.[99] This occurs in environments with a high neutron density, such as a supernova or the merger of two neutron stars. The neutron flux involved may be on the order of 10^22 neutrons per square centimeter per second.[100] The r-process does not form as much lead as the s-process.[101] It tends to stop once neutron-rich nuclei reach 126 neutrons.[102] At this point, the neutrons are arranged in complete shells in the atomic nucleus, and it becomes harder to energetically accommodate more of them.[103] When the neutron flux subsides, these nuclei beta decay into stable isotopes of osmium, iridium, and platinum.[104]
71
+
72
+ Lead is classified as a chalcophile under the Goldschmidt classification, meaning it is generally found combined with sulfur.[105] It rarely occurs in its native, metallic form.[106] Many lead minerals are relatively light and, over the course of the Earth's history, have remained in the crust instead of sinking deeper into the Earth's interior. This accounts for lead's relatively high crustal abundance of 14 ppm; it is the 38th most abundant element in the crust.[107][m]
73
+
74
+ The main lead-bearing mineral is galena (PbS), which is mostly found with zinc ores.[109] Most other lead minerals are related to galena in some way; boulangerite, Pb5Sb4S11, is a mixed sulfide derived from galena; anglesite, PbSO4, is a product of galena oxidation; and cerussite or white lead ore, PbCO3, is a decomposition product of galena. Arsenic, tin, antimony, silver, gold, copper, and bismuth are common impurities in lead minerals.[109]
75
+
76
+ World lead resources exceed two billion tons. Significant deposits are located in Australia, China, Ireland, Mexico, Peru, Portugal, Russia, and the United States. Global reserves—resources that are economically feasible to extract—totaled 88 million tons in 2016, of which Australia had 35 million, China 17 million, and Russia 6.4 million.[110]
77
+
78
+ Typical background concentrations of lead do not exceed 0.1 μg/m3 in the atmosphere; 100 mg/kg in soil; and 5 μg/L in freshwater and seawater.[111]
79
+
80
+ The modern English word "lead" is of Germanic origin; it comes from the Middle English leed and Old English lēad (with the macron above the "e" signifying that the vowel sound of that letter is long).[112] The Old English word is derived from the hypothetical reconstructed Proto-Germanic *lauda- ("lead").[113] According to linguistic theory, this word bore descendants in multiple Germanic languages of exactly the same meaning.[113]
81
+
82
+ There is no consensus on the origin of the Proto-Germanic *lauda-. One hypothesis suggests it is derived from Proto-Indo-European *lAudh- ("lead"; capitalization of the vowel is equivalent to the macron).[114] Another hypothesis suggests it is borrowed from Proto-Celtic *ɸloud-io- ("lead"). This word is related to the Latin plumbum, which gave the element its chemical symbol Pb. The word *ɸloud-io- is thought to be the origin of Proto-Germanic *bliwa- (which also means "lead"), from which stemmed the German Blei.[115]
83
+
84
+ The name of the chemical element is not related to the verb of the same spelling, which is derived from Proto-Germanic *laidijan- ("to lead").[116]
85
+
86
+ Metallic lead beads dating back to 7000–6500 BCE have been found in Asia Minor and may represent the first example of metal smelting.[118] At that time lead had few (if any) applications due to its softness and dull appearance.[118] The major reason for the spread of lead production was its association with silver, which may be obtained by burning galena (a common lead mineral).[119] The Ancient Egyptians were the first to use lead minerals in cosmetics, an application that spread to Ancient Greece and beyond;[120] the Egyptians may have used lead for sinkers in fishing nets, glazes, glasses, enamels, and for ornaments.[119] Various civilizations of the Fertile Crescent used lead as a writing material, as currency, and as a construction material.[119] Lead was used in the Ancient Chinese royal court as a stimulant,[119] as currency,[121] and as a contraceptive;[122] the Indus Valley civilization and the Mesoamericans[119] used it for making amulets; and the eastern and southern African peoples used lead in wire drawing.[123]
87
+
88
+ Because silver was extensively used as a decorative material and an exchange medium, lead deposits came to be worked in Asia Minor from 3000 BCE; later, lead deposits were developed in the Aegean and Laurion. These three regions collectively dominated production of mined lead until c. 1200 BCE.[124] Beginning circa 2000 BCE, the Phoenicians worked deposits in the Iberian peninsula; by 1600 BCE, lead mining existed in Cyprus, Greece, and Sardinia.[125]
89
+
90
+ Rome's territorial expansion in Europe and across the Mediterranean, and its development of mining, led to it becoming the greatest producer of lead during the classical era, with an estimated annual output peaking at 80,000 tonnes. Like their predecessors, the Romans obtained lead mostly as a by-product of silver smelting.[117][127] Lead mining occurred in Central Europe, Britain, the Balkans, Greece, Anatolia, and Hispania, the latter accounting for 40% of world production.[117]
91
+
92
+ Lead tablets were commonly used as a material for letters.[128] Lead coffins, cast in flat sand forms, with interchangeable motifs to suit the faith of the deceased, were used in ancient Judea.[129] Lead was used to make sling bullets from the 5th century BCE. In Roman times, lead sling bullets were widely used, and were effective at a distance of between 100 and 150 meters. The Balearic slingers, used as mercenaries in Carthaginian and Roman armies, were famous for their shooting distance and accuracy.[130]
93
+
94
+ Lead was used for making water pipes in the Roman Empire; the Latin word for the metal, plumbum, is the origin of the English word "plumbing". Its ease of working and resistance to corrosion[131] ensured its widespread use in other applications, including pharmaceuticals, roofing, currency, and warfare.[132][133][134] Writers of the time, such as Cato the Elder, Columella, and Pliny the Elder, recommended lead (or lead-coated) vessels for the preparation of sweeteners and preservatives added to wine and food. The lead conferred an agreeable taste due to the formation of "sugar of lead" (lead(II) acetate), whereas copper or bronze vessels could impart a bitter flavor through verdigris formation.[135]
95
+
96
+ Heinz Eschnauer and Markus Stoeppler, "Wine—An enological specimen bank", 1992[136]
97
+
98
+ The Roman author Vitruvius reported the health dangers of lead[137] and modern writers have suggested that lead poisoning played a major role in the decline of the Roman Empire.[138][139][n] Other researchers have criticized such claims, pointing out, for instance, that not all abdominal pain is caused by lead poisoning.[141][142] According to archaeological research, Roman lead pipes increased lead levels in tap water but such an effect was "unlikely to have been truly harmful".[143][144] When lead poisoning did occur, victims were called "saturnine", dark and cynical, after the ghoulish father of the gods, Saturn. By association, lead was considered the father of all metals.[145] Its status in Roman society was low as it was readily available[146] and cheap.[147]
99
+
100
+ During the classical era (and even up to the 17th century), tin was often not distinguished from lead: Romans called lead plumbum nigrum ("black lead"), and tin plumbum candidum ("bright lead"). The association of lead and tin can be seen in other languages: the word olovo in Czech translates to "lead", but in Russian, its cognate олово (olovo) means "tin".[148] To add to the confusion, lead bore a close relation to antimony: both elements commonly occur as sulfides (galena and stibnite), often together. Pliny incorrectly wrote that stibnite would give lead on heating, instead of antimony.[149] In countries such as Turkey and India, the originally Persian name surma came to refer to either antimony sulfide or lead sulfide,[150] and in some languages, such as Russian, gave its name to antimony (сурьма).[151]
101
+
102
+ Lead mining in Western Europe declined after the fall of the Western Roman Empire, with Arabian Iberia being the only region having a significant output.[152][153] The largest production of lead occurred in South and East Asia, especially China and India, where lead mining grew rapidly.[153]
103
+
104
+ In Europe, lead production began to increase in the 11th and 12th centuries, when it was again used for roofing and piping. Starting in the 13th century, lead was used to create stained glass.[155] In the European and Arabian traditions of alchemy, lead (whose symbol in the European tradition was that of Saturn, ♄)[156] was considered an impure base metal which, by the separation, purification and balancing of its constituent essences, could be transformed to pure and incorruptible gold.[157] During the period, lead was used increasingly for adulterating wine. Such wine was forbidden for use in Christian rites by a papal bull in 1498, but it continued to be imbibed and resulted in mass poisonings up to the late 18th century.[152][158] Lead was a key material in parts of the printing press, which was invented around 1440; lead dust was commonly inhaled by print workers, causing lead poisoning.[159] Firearms were invented at around the same time, and lead, despite being more expensive than iron, became the chief material for making bullets. It was less damaging to iron gun barrels, had a higher density (which allowed for better retention of velocity), and its lower melting point made the production of bullets easier as they could be made using a wood fire.[160] Lead, in the form of Venetian ceruse, was extensively used in cosmetics by Western European aristocracy as whitened faces were regarded as a sign of modesty.[161][162] This practice later expanded to white wigs and eyeliners, and only faded out with the French Revolution in the late 18th century. A similar fashion appeared in Japan in the 18th century with the emergence of the geishas, a practice that continued long into the 20th century. The white faces of women "came to represent their feminine virtue as Japanese women",[163] with lead commonly used in the whitener.[164]
105
+
106
+ In the New World, lead production was recorded soon after the arrival of European settlers. The earliest record dates to 1621 in the English Colony of Virginia, fourteen years after its foundation.[165] In Australia, the first mine opened by colonists on the continent was a lead mine, in 1841.[166] In Africa, lead mining and smelting were known in the Benue Trough[167] and the lower Congo Basin, where lead was used for trade with Europeans, and as a currency by the 17th century,[168] well before the scramble for Africa.
107
+
108
+ In the second half of the 18th century, Britain, and later continental Europe and the United States, experienced the Industrial Revolution. This was the first time during which lead production rates exceeded those of Rome.[117] Britain was the leading producer, losing this status by the mid-19th century with the depletion of its mines and the development of lead mining in Germany, Spain, and the United States.[169] By 1900, the United States was the leader in global lead production, and other non-European nations—Canada, Mexico, and Australia—had begun significant production; production outside Europe exceeded that within.[170] A great share of the demand for lead came from plumbing and painting—lead paints were in regular use.[171] At this time, more (working class) people were exposed to the metal and lead poisoning cases escalated. This led to research into the effects of lead intake. Lead was proven to be more dangerous in its fume form than as a solid metal. Lead poisoning and gout were linked; British physician Alfred Baring Garrod noted a third of his gout patients were plumbers and painters. The effects of chronic ingestion of lead, including mental disorders, were also studied in the 19th century. The first laws aimed at decreasing lead poisoning in factories were enacted during the 1870s and 1880s in the United Kingdom.[171]
109
+
110
+ Further evidence of the threat that lead posed to humans was discovered in the late 19th and early 20th centuries. Mechanisms of harm were better understood, lead blindness was documented, and the element was phased out of public use in the United States and Europe. The United Kingdom introduced mandatory factory inspections in 1878 and appointed the first Medical Inspector of Factories in 1898; as a result, a 25-fold decrease in lead poisoning incidents from 1900 to 1944 was reported.[172] Most European countries banned lead paint—commonly used because of its opacity and water resistance[173]—for interiors by 1930.[174]
111
+
112
+ The last major human exposure to lead was the addition of tetraethyllead to gasoline as an antiknock agent, a practice that originated in the United States in 1921. It was phased out in the United States and the European Union by 2000.[171]
113
+
114
+ In the 1970s, the United States and Western European countries introduced legislation to reduce lead air pollution.[175][176] The impact was significant: while a study conducted by the Centers for Disease Control and Prevention in the United States in 1976–1980 showed that 77.8% of the population had elevated blood lead levels, in 1991–1994, a study by the same institute showed the share of people with such high levels dropped to 2.2%.[177] The main product made of lead by the end of the 20th century was the lead–acid battery.[178]
115
+
116
+ From 1960 to 1990, lead output in the Western Bloc grew by about 31%.[179] The share of the world's lead production by the Eastern Bloc increased from 10% to 30%, from 1950 to 1990, with the Soviet Union being the world's largest producer during the mid-1970s and the 1980s, and China starting major lead production in the late 20th century.[180] Unlike the European communist countries, China was largely unindustrialized by the mid-20th century; in 2004, China surpassed Australia as the largest producer of lead.[181] As was the case during European industrialization, lead has had a negative effect on health in China.[182]
117
+
118
+ As of 2014, production of lead is increasing worldwide due to its use in lead–acid batteries.[183] There are two major categories of production: primary from mined ores, and secondary from scrap. In 2014, 4.58 million metric tons came from primary production and 5.64 million from secondary production. The top three producers of mined lead concentrate in that year were China, Australia, and the United States.[110] The top three producers of refined lead were China, the United States, and India.[184] According to the International Resource Panel's Metal Stocks in Society report of 2010, the total amount of lead in use, stockpiled, discarded, or dissipated into the environment, on a global basis, is 8 kg per capita. Much of this is in more developed countries (20–150 kg per capita) rather than less developed ones (1–4 kg per capita).[185]
119
+
120
+ The primary and secondary lead production processes are similar. Some primary production plants now supplement their operations with scrap lead, and this trend is likely to increase in the future. Given adequate techniques, lead obtained via secondary processes is indistinguishable from lead obtained via primary processes. Scrap lead from the building trade is usually fairly clean and is re-melted without the need for smelting, though refining is sometimes needed. Secondary lead production is therefore cheaper, in terms of energy requirements, than is primary production, often by 50% or more.[186]
121
+
122
+ Most lead ores contain a low percentage of lead (rich ores have a typical content of 3–8%) which must be concentrated for extraction.[187] During initial processing, ores typically undergo crushing, dense-medium separation, grinding, froth flotation, and drying. The resulting concentrate, which has a lead content of 30–80% by mass (regularly 50–60%),[187] is then turned into (impure) lead metal.
123
+
124
+ There are two main ways of doing this: a two-stage process involving roasting followed by blast furnace extraction, carried out in separate vessels; or a direct process in which the extraction of the concentrate occurs in a single vessel. The latter has become the most common route, though the former is still significant.[188]
125
+
126
+ First, the sulfide concentrate is roasted in air to oxidize the lead sulfide:[189]
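+ 2 PbS + 3 O2 → 2 PbO + 2 SO2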
127
+
128
+ As the original concentrate was not pure lead sulfide, roasting yields not only the desired lead(II) oxide, but a mixture of oxides, sulfates, and silicates of lead and of the other metals contained in the ore.[190] This impure lead oxide is reduced in a coke-fired blast furnace to the (again, impure) metal:[191]
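+ 2 PbO + C → 2 Pb + CO2 (in practice the coke burns to carbon monoxide, which carries out much of the reduction: PbO + CO → Pb + CO2)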
129
+
130
+ Impurities are mostly arsenic, antimony, bismuth, zinc, copper, silver, and gold. Typically they are removed in a series of pyrometallurgical processes. The melt is treated in a reverberatory furnace with air, steam, and sulfur, which oxidizes the impurities except for silver, gold, and bismuth. Oxidized contaminants float to the top of the melt and are skimmed off.[192][193] Metallic silver and gold are removed and recovered economically by means of the Parkes process, in which zinc is added to lead. Zinc, which is immiscible in lead, dissolves the silver and gold. The zinc solution can be separated from the lead, and the silver and gold retrieved.[193][194] De-silvered lead is freed of bismuth by the Betterton–Kroll process, treating it with metallic calcium and magnesium. The resulting bismuth dross can be skimmed off.[193]
131
+
132
+ Alternatively to the pyrometallurgical processes, very pure lead can be obtained by processing smelted lead electrolytically using the Betts process. Anodes of impure lead and cathodes of pure lead are placed in an electrolyte of lead fluorosilicate (PbSiF6). Once electrical potential is applied, impure lead at the anode dissolves and plates onto the cathode, leaving the majority of the impurities in solution.[193][195] This is a high-cost process and thus mostly reserved for refining bullion containing high percentages of impurities.[196]
133
+
134
+ In this process, lead bullion and slag are obtained directly from lead concentrates. The lead sulfide concentrate is melted in a furnace and oxidized, forming lead monoxide. Carbon (as coke or coal gas[p]) is added to the molten charge along with fluxing agents. The lead monoxide is thereby reduced to metallic lead, in the midst of a slag rich in lead monoxide.[188]
135
+
136
+ If the input is rich in lead, as much as 80% of the original lead can be obtained as bullion; the remaining 20% forms a slag rich in lead monoxide. For a low-grade feed, all of the lead can be oxidized to a high-lead slag.[188] Metallic lead is further obtained from the high-lead (25–40%) slags via submerged fuel combustion or injection, reduction assisted by an electric furnace, or a combination of both.[188]
137
+
138
+ Research on a cleaner, less energy-intensive lead extraction process continues; a major drawback is that either too much lead is lost as waste, or the alternatives result in a high sulfur content in the resulting lead metal. Hydrometallurgical extraction, in which anodes of impure lead are immersed into an electrolyte and pure lead is deposited onto a cathode, is a technique that may have potential, but is not currently economical except in cases where electricity is very cheap.[197]
139
+
140
+ Smelting, which is an essential part of the primary production, is often skipped during secondary production. It is only performed when metallic lead has undergone significant oxidation.[186] The process is similar to that of primary production in either a blast furnace or a rotary furnace, with the essential difference being the greater variability of yields: blast furnaces produce hard lead (10% antimony) while reverberatory and rotary kiln furnaces produce semisoft lead (3–4% antimony).[198] The Isasmelt process is a more recent smelting method that may act as an extension to primary production; battery paste from spent lead–acid batteries (containing lead sulfate and lead oxides) has its sulfate removed by treating it with alkali, and is then treated in a coal-fueled furnace in the presence of oxygen, which yields impure lead, with antimony the most common impurity.[199] Refining of secondary lead is similar to that of primary lead; some refining processes may be skipped depending on the material recycled and its potential contamination.[199]
141
+
142
+ Of the sources of lead for recycling, lead–acid batteries are the most important; lead pipe, sheet, and cable sheathing are also significant.[186]
143
+
144
+ Contrary to popular belief, pencil leads in wooden pencils have never been made from lead. When the pencil originated as a wrapped graphite writing tool, the particular type of graphite used was named plumbago (literally, act for lead or lead mockup).[201]
145
+
146
+ Lead metal has several useful mechanical properties, including high density, low melting point, ductility, and relative inertness. Many metals are superior to lead in some of these aspects but are generally less common and more difficult to extract from parent ores. Lead's toxicity has led to its phasing out for some uses.[202]
147
+
148
+ Lead has been used for bullets since their invention in the Middle Ages. It is inexpensive; its low melting point means small arms ammunition and shotgun pellets can be cast with minimal technical equipment; and it is denser than other common metals, which allows for better retention of velocity. It remains the main material for bullets, alloyed with other metals as hardeners.[160] Concerns have been raised that lead bullets used for hunting can damage the environment.[q]
149
+
150
+ Lead's high density and resistance to corrosion have been exploited in a number of related applications. It is used as ballast in sailboat keels; its density allows it to take up a small volume and minimize water resistance, thus counterbalancing the heeling effect of wind on the sails.[204] It is used in scuba diving weight belts to counteract the diver's buoyancy.[205] In 1993, the base of the Leaning Tower of Pisa was stabilized with 600 tonnes of lead.[206] Because of its corrosion resistance, lead is used as a protective sheath for underwater cables.[207]
151
+
152
+ Lead has many uses in the construction industry; lead sheets are used as architectural metals in roofing material, cladding, flashing, gutters and gutter joints, and on roof parapets.[208][209] Lead is still used in statues and sculptures,[r] including for armatures.[211] In the past it was often used to balance the wheels of cars; for environmental reasons this use is being phased out in favor of other materials.[110]
153
+
154
+ Lead is added to copper alloys, such as brass and bronze, to improve machinability and for its lubricating qualities. Being practically insoluble in copper the lead forms solid globules in imperfections throughout the alloy, such as grain boundaries. In low concentrations, as well as acting as a lubricant, the globules hinder the formation of swarf as the alloy is worked, thereby improving machinability. Copper alloys with larger concentrations of lead are used in bearings. The lead provides lubrication, and the copper provides the load-bearing support.[212]
155
+
156
+ Lead's high density, atomic number, and formability form the basis for use of lead as a barrier that absorbs sound, vibration, and radiation.[213] Lead has no natural resonance frequencies;[213] as a result, sheet-lead is used as a sound deadening layer in the walls, floors, and ceilings of sound studios.[214] Organ pipes are often made from a lead alloy, mixed with various amounts of tin to control the tone of each pipe.[215][216] Lead is an established shielding material from radiation in nuclear science and in X-ray rooms[217] due to its denseness and high attenuation coefficient.[218] Molten lead has been used as a coolant for lead-cooled fast reactors.[219]
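+ The role of the attenuation coefficient can be made concrete with the exponential attenuation law I/I0 = exp(-mu * x); the short Python sketch below is illustrative only, and the coefficient used in the example is a placeholder rather than a measured value for lead.

import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Fraction of a narrow photon beam passing through a shield,
    from the exponential attenuation law I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

def half_value_layer_cm(mu_per_cm):
    """Shield thickness that halves the beam intensity: ln(2) / mu."""
    return math.log(2.0) / mu_per_cm

# With a hypothetical linear attenuation coefficient of 1.0 cm^-1:
mu = 1.0
print(round(transmitted_fraction(mu, 2.0), 3))  # 0.135 of the beam remains after 2 cm
print(round(half_value_layer_cm(mu), 2))        # 0.69 cm halves the intensity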
157
+
158
+ The largest use of lead in the early 21st century is in lead–acid batteries. The lead in batteries undergoes no direct contact with humans, so there are fewer toxicity concerns.[s] People who work in battery production plants may be exposed to lead dust and inhale it.[221] The reactions in the battery between lead, lead dioxide, and sulfuric acid provide a reliable source of voltage.[t] Supercapacitors incorporating lead–acid batteries have been installed in kilowatt and megawatt scale applications in Australia, Japan, and the United States in frequency regulation, solar smoothing and shifting, wind smoothing, and other applications.[223] These batteries have lower energy density and charge-discharge efficiency than lithium-ion batteries, but are significantly cheaper.[224]
159
+
160
+ Lead is used in high voltage power cables as sheathing material to prevent water diffusion into insulation; this use is decreasing as lead is being phased out.[225] Its use in solder for electronics is also being phased out by some countries to reduce the amount of environmentally hazardous waste.[226] Lead is one of three metals used in the Oddy test for museum materials, helping detect organic acids, aldehydes, and acidic gases.[227][228]
161
+
162
+ In addition to being the main application for lead metal, lead-acid batteries are also the main consumer of lead compounds. The energy storage/release reaction used in these devices involves lead sulfate and lead dioxide:
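+ Pb(s) + PbO2(s) + 2 H2SO4(aq) ⇌ 2 PbSO4(s) + 2 H2O(l), with discharge running to the right and charging reversing it.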
163
+
164
+ Other applications of lead compounds are very specialized and often fading. Lead-based coloring agents are used in ceramic glazes and glass, especially for red and yellow shades.[229] While lead paints are phased out in Europe and North America, they remain in use in less developed countries such as China,[230] India,[231] or Indonesia.[232] Lead tetraacetate and lead dioxide are used as oxidizing agents in organic chemistry. Lead is frequently used in the polyvinyl chloride coating of electrical cords.[233][234] It can be used to treat candle wicks to ensure a longer, more even burn. Because of its toxicity, European and North American manufacturers use alternatives such as zinc.[235][236] Lead glass is composed of 12–28% lead oxide, changing its optical characteristics and reducing the transmission of ionizing radiation.[237] Lead-based semiconductors such as lead telluride and lead selenide are used in photovoltaic cells and infrared detectors.[238]
165
+
166
+ Lead has no confirmed biological role, and there is no confirmed safe level of lead exposure.[240] A 2009 Canadian–American study concluded that even at levels that are considered to pose little to no risk, lead may cause "adverse mental health outcomes".[241] Its prevalence in the human body—at an adult average of 120 mg[u]—is nevertheless exceeded only by zinc (2500 mg) and iron (4000 mg) among the heavy metals.[243] Lead salts are very efficiently absorbed by the body.[244] A small amount of lead (1%) is stored in bones; the rest is excreted in urine and feces within a few weeks of exposure. Only about a third of lead is excreted by a child. Continual exposure may result in the bioaccumulation of lead.[245]
167
+
168
+ Lead is a highly poisonous metal (whether inhaled or swallowed), affecting almost every organ and system in the human body.[246] At airborne levels of 100 mg/m3, it is immediately dangerous to life and health.[247] Most ingested lead is absorbed into the bloodstream.[248] The primary cause of its toxicity is its predilection for interfering with the proper functioning of enzymes. It does so by binding to the sulfhydryl groups found on many enzymes,[249] or mimicking and displacing other metals which act as cofactors in many enzymatic reactions.[250] Among the essential metals that lead interacts with are calcium, iron, and zinc.[251] High levels of calcium and iron tend to provide some protection from lead poisoning; low levels cause increased susceptibility.[244]
169
+
170
+ Lead can cause severe damage to the brain and kidneys and, ultimately, death. By mimicking calcium, lead can cross the blood–brain barrier. It degrades the myelin sheaths of neurons, reduces their numbers, interferes with neurotransmission routes, and decreases neuronal growth.[249] In the human body, lead inhibits porphobilinogen synthase and ferrochelatase, preventing both porphobilinogen formation and the incorporation of iron into protoporphyrin IX, the final step in heme synthesis. This causes ineffective heme synthesis and microcytic anemia.[252]
171
+
172
+ Symptoms of lead poisoning include nephropathy, colic-like abdominal pains, and possibly weakness in the fingers, wrists, or ankles. Small blood pressure increases, particularly in middle-aged and older people, may be apparent and can cause anemia. Several studies, mostly cross-sectional, found an association between increased lead exposure and decreased heart rate variability.[253] In pregnant women, high levels of exposure to lead may cause miscarriage. Chronic, high-level exposure has been shown to reduce fertility in males.[254]
173
+
174
+ In a child's developing brain, lead interferes with synapse formation in the cerebral cortex, neurochemical development (including that of neurotransmitters), and the organization of ion channels.[255] Early childhood exposure has been linked with an increased risk of sleep disturbances and excessive daytime drowsiness in later childhood.[256] High blood levels are associated with delayed puberty in girls.[257] The rise and fall in exposure to airborne lead from the combustion of tetraethyl lead in gasoline during the 20th century has been linked with historical increases and decreases in crime levels, a hypothesis which is not universally accepted.[258]
175
+
176
+ Lead exposure is a global issue since lead mining and smelting, and battery manufacturing/disposal/recycling, are common in many countries. Lead enters the body via inhalation, ingestion, or skin absorption. Almost all inhaled lead is absorbed into the body; for ingestion, the rate is 20–70%, with children absorbing a higher percentage than adults.[259]
177
+
178
+ Poisoning typically results from ingestion of food or water contaminated with lead, and less commonly after accidental ingestion of contaminated soil, dust, or lead-based paint.[260] Seawater products can contain lead if affected by nearby industrial waters.[261] Fruit and vegetables can be contaminated by high levels of lead in the soils they were grown in. Soil can be contaminated through particulate accumulation from lead in pipes, lead paint, and residual emissions from leaded gasoline.[262]
179
+
180
+ The use of lead for water pipes is a problem in areas with soft or acidic water.[263] Hard water forms insoluble layers in the pipes whereas soft and acidic water dissolves the lead pipes.[264] Dissolved carbon dioxide in the carried water may result in the formation of soluble lead bicarbonate; oxygenated water may similarly dissolve lead as lead(II) hydroxide. Drinking such water, over time, can cause health problems due to the toxicity of the dissolved lead. The harder the water the more calcium bicarbonate and sulfate it will contain, and the more the inside of the pipes will be coated with a protective layer of lead carbonate or lead sulfate.[265]
181
+
182
+ Ingestion of applied lead-based paint is the major source of exposure for children:
183
+ a direct source is chewing on old painted window sills. Alternatively, as the applied dry paint deteriorates, it peels, is pulverized into dust and then enters the body through hand-to-mouth contact or contaminated food, water, or alcohol. Ingesting certain home remedies may result in exposure to lead or its compounds.[266]
184
+
185
+ Inhalation is the second major exposure pathway, affecting smokers and especially workers in lead-related occupations.[248] Cigarette smoke contains, among other toxic substances, radioactive lead-210.[267]
186
+
187
+ Skin exposure may be significant for people working with organic lead compounds. The rate of skin absorption is lower for inorganic lead.[268]
188
+
189
+ Treatment for lead poisoning normally involves the administration of dimercaprol and succimer.[269] Acute cases may require the use of disodium calcium edetate, the calcium chelate of the disodium salt of ethylenediaminetetraacetic acid (EDTA). It has a greater affinity for lead than calcium, with the result that lead chelate is formed by exchange and excreted in the urine, leaving behind harmless calcium.[270]
190
+
191
+ The extraction, production, use, and disposal of lead and its products have caused significant contamination of the Earth's soils and waters. Atmospheric emissions of lead were at their peak during the Industrial Revolution, and the leaded gasoline period in the second half of the twentieth century. Lead releases originate from natural sources (i.e., concentration of the naturally occurring lead), industrial production, incineration and recycling, and mobilization of previously buried lead.[271] Elevated concentrations of lead persist in soils and sediments in post-industrial and urban areas; industrial emissions, including those arising from coal burning,[272] continue in many parts of the world, particularly in the developing countries.[273]
192
+
193
+ Lead can accumulate in soils, especially those with a high organic content, where it remains for hundreds to thousands of years. Environmental lead can compete with other metals found in and on plant surfaces, potentially inhibiting photosynthesis and, at high enough concentrations, negatively affecting plant growth and survival. Contamination of soils and plants can allow lead to ascend the food chain, affecting microorganisms and animals. In animals, lead exhibits toxicity in many organs, damaging the nervous, renal, reproductive, hematopoietic, and cardiovascular systems after ingestion, inhalation, or skin absorption.[274] Fish take up lead from both water and sediment;[275] bioaccumulation in the food chain poses a hazard to fish, birds, and sea mammals.[276]
194
+
195
+ Anthropogenic lead includes lead from shot and sinkers. These are among the most potent sources of lead contamination along with lead production sites.[277] Lead was banned for shot and sinkers in the United States in 2017,[278] although that ban was only effective for a month,[279] and a similar ban is being considered in the European Union.[280]
196
+
197
+ Analytical methods for the determination of lead in the environment include spectrophotometry, X-ray fluorescence, atomic spectroscopy and electrochemical methods. A specific ion-selective electrode has been developed based on the ionophore S,S'-methylenebis (N,N-diisobutyldithiocarbamate).[281] An important biomarker assay for lead poisoning is δ-aminolevulinic acid levels in plasma, serum, and urine.[282]
198
+
199
+ By the mid-1980s, there was significant decline in the use of lead in industry. In the United States, environmental regulations reduced or eliminated the use of lead in non-battery products, including gasoline, paints, solders, and water systems. Particulate control devices were installed in coal-fired power plants to capture lead emissions.[272] In 1992, U.S. Congress required the Environmental Protection Agency to reduce the blood lead levels of the country's children.[283] Lead use was further curtailed by the European Union's 2003 Restriction of Hazardous Substances Directive.[284] A large drop in lead deposition occurred in the Netherlands after the 1993 national ban on use of lead shot for hunting and sport shooting: from 230 tonnes in 1990 to 47.5 tonnes in 1995.[285]
200
+
201
+ In the United States, the permissible exposure limit for lead in the workplace, comprising metallic lead, inorganic lead compounds, and lead soaps, was set at 50 μg/m3 over an 8-hour workday, and the blood lead level limit at 5 μg per 100 g of blood in 2012.[286] Lead may still be found in harmful quantities in stoneware,[287] vinyl[288] (such as that used for tubing and the insulation of electrical cords), and Chinese brass.[v] Old houses may still contain lead paint.[288] White lead paint has been withdrawn from sale in industrialized countries, but specialized uses of other pigments such as yellow lead chromate remain.[173] Stripping old paint by sanding produces dust which can be inhaled.[290] Lead abatement programs have been mandated by some authorities in properties where young children live.[291]
202
+
203
+ Lead waste, depending on the jurisdiction and the nature of the waste, may be treated as household waste (in order to facilitate lead abatement activities),[292] or potentially hazardous waste requiring specialized treatment or storage.[293] Lead is released into the environment at shooting sites, and a number of lead management practices, such as stewardship of the environment and reduced public scrutiny, have been developed to counter lead contamination.[294] Lead migration can be enhanced in acidic soils; to counter that, it is advised that soils be treated with lime to neutralize the soils and prevent leaching of lead.[295]
204
+
205
+ Research has been conducted on how to remove lead from biosystems by biological means: Fish bones are being researched for their ability to bioremediate lead in contaminated soil.[296][297] The fungus Aspergillus versicolor is effective at absorbing lead ions from industrial waste before it is released to water bodies.[298] Several bacteria have been researched for their ability to remove lead from the environment, including the sulfate-reducing bacteria Desulfovibrio and Desulfotomaculum, both of which are highly effective in aqueous solutions.[299]
en/4673.html.txt ADDED
@@ -0,0 +1,210 @@
1
+
2
+
3
+ Rain is liquid water in the form of droplets that have condensed from atmospheric water vapor and then become heavy enough to fall under gravity. Rain is a major component of the water cycle and is responsible for depositing most of the fresh water on the Earth. It provides suitable conditions for many types of ecosystems, as well as water for hydroelectric power plants and crop irrigation.
4
+
5
+ The major cause of rain production is moisture moving along three-dimensional zones of temperature and moisture contrasts known as weather fronts. If enough moisture and upward motion is present, precipitation falls from convective clouds (those with strong upward vertical motion) such as cumulonimbus (thunder clouds) which can organize into narrow rainbands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation which forces moist air to condense and fall out as rainfall along the sides of mountains. On the leeward side of mountains, desert climates can exist due to the dry air caused by downslope flow which causes heating and drying of the air mass. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah climes.
6
+
7
+ The urban heat island effect leads to increased rainfall, both in amounts and intensity, downwind of cities. Global warming is also causing changes in the precipitation pattern globally, including wetter conditions across eastern North America and drier conditions in the tropics. Antarctica is the driest continent. The globally averaged annual precipitation over land is 715 mm (28.1 in), but over the whole Earth it is much higher at 990 mm (39 in).[1] Climate classification systems such as the Köppen classification system use average annual rainfall to help differentiate between differing climate regimes. Rainfall is measured using rain gauges. Rainfall amounts can be estimated by weather radar.
8
+
9
+ Rain is also known or suspected on other planets, where it may be composed of methane, neon, sulfuric acid, or even iron rather than water.
10
+
11
+ Air contains water vapor, and the amount of water in a given mass of dry air, known as the mixing ratio, is measured in grams of water per kilogram of dry air (g/kg).[2][3] The amount of moisture in air is also commonly reported as relative humidity, which is the percentage of the total water vapor air can hold at a particular air temperature.[4] How much water vapor a parcel of air can contain before it becomes saturated (100% relative humidity) and forms into a cloud (a group of visible and tiny water and ice particles suspended above the Earth's surface)[5] depends on its temperature. Warmer air can contain more water vapor than cooler air before becoming saturated. Therefore, one way to saturate a parcel of air is to cool it. The dew point is the temperature to which a parcel must be cooled in order to become saturated.[6]
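+ As a rough illustration of the dew point idea, the short Python sketch below estimates the dew point from air temperature and relative humidity; the Magnus formula and its constants are a standard empirical approximation, and the function name is illustrative, neither being taken from the article itself.

import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (deg C) from air temperature (deg C) and relative humidity (%).

    Uses the Magnus formula with the Alduchov-Eskridge constants; good to a few
    tenths of a degree for ordinary surface conditions.
    """
    a, b = 17.625, 243.04  # empirical Magnus constants (dimensionless, deg C)
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Warm, humid air is already close to saturation, so its dew point is near the air
# temperature; drier air must be cooled much further before it saturates.
print(round(dew_point_c(30.0, 80.0), 1))  # about 26.2 deg C
print(round(dew_point_c(30.0, 30.0), 1))  # about 10.5 deg C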
12
+
13
+ There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling. Adiabatic cooling occurs when air rises and expands.[7] The air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface,[8] usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath.[9] Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation.[10]
14
+
15
+ The main ways water vapor is added to the air are: wind convergence into areas of upward motion,[11] precipitation or virga falling from above,[12] daytime heating evaporating water from the surface of oceans, water bodies or wet land,[13] transpiration from plants,[14] cool or dry air moving over warmer water,[15] and lifting air over mountains.[16] Water vapor normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. Elevated portions of weather fronts (which are three-dimensional in nature)[17] force broad areas of upward motion within the Earth's atmosphere which form cloud decks such as altostratus or cirrostratus.[18] Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions.[19]
16
+
17
+ Coalescence occurs when water droplets fuse to create larger water droplets. Air resistance typically causes the water droplets in a cloud to remain stationary. When air turbulence occurs, water droplets collide, producing larger droplets.
18
+
19
+ As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain. Coalescence generally happens most often in clouds above freezing, and is also known as the warm rain process.[20] In clouds below freezing, when ice crystals gain enough mass they begin to fall. This generally requires more mass than coalescence when occurring between the crystal and neighboring water droplets. This process is temperature dependent, as supercooled water droplets only exist in a cloud that is below freezing. In addition, because of the great temperature difference between cloud and ground level, these ice crystals may melt as they fall and become rain.[21]
20
+
21
+ Raindrops have sizes ranging from 0.1 to 9 mm (0.0039 to 0.3543 in) mean diameter but develop a tendency to break up at larger sizes. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Large rain drops become increasingly flattened on the bottom, like hamburger buns; very large ones are shaped like parachutes.[22][23] Contrary to popular belief, their shape does not resemble a teardrop.[24] The biggest raindrops on Earth were recorded over Brazil and the Marshall Islands in 2004 — some of them were as large as 10 mm (0.39 in). The large size is explained by condensation on large smoke particles or by collisions between drops in small regions with particularly high content of liquid water.[25]
22
+
23
+ Rain drops associated with melting hail tend to be larger than other rain drops.[26]
24
+
25
+ Intensity and duration of rainfall are usually inversely related, i.e., high intensity storms are likely to be of short duration and low intensity storms can have a long duration.[27][28]
26
+
27
+ The final droplet size distribution is an exponential distribution. The number of droplets with diameter between {\displaystyle d} and {\displaystyle D+dD} per unit volume of space is {\displaystyle n(d)=n_{0}e^{-d/\langle d\rangle }dD}. This is commonly referred to as the Marshall–Palmer law after the researchers who first characterized it.[23][29] The parameters are somewhat temperature-dependent,[30] and the slope also scales with the rate of rainfall, {\displaystyle \langle d\rangle ^{-1}=41R^{-0.21}} (d in centimeters and R in millimeters per hour).[23]
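+ A minimal Python sketch of this distribution, using the slope relation quoted above, is given below; the intercept value n0 = 0.08 cm^-4 is the classic Marshall–Palmer constant and is an assumption here, since it is not quoted in the text.

import math

def marshall_palmer_n(d_cm, rain_rate_mm_per_h, n0_per_cm4=0.08):
    """Drop number density n(d) in cm^-4 (drops per cm of diameter per cm^3 of air).

    Implements n(d) = n0 * exp(-d / <d>) with the slope 1/<d> = 41 * R**-0.21
    (d in centimeters, R in millimeters per hour), as given in the text; the
    default n0 is the classic Marshall-Palmer intercept (an assumption here).
    """
    slope_per_cm = 41.0 * rain_rate_mm_per_h ** -0.21
    return n0_per_cm4 * math.exp(-slope_per_cm * d_cm)

# In 10 mm/h rain, 1 mm (0.1 cm) drops are roughly two orders of magnitude
# more numerous than 3 mm (0.3 cm) drops.
for d in (0.1, 0.3):
    print(d, marshall_palmer_n(d, 10.0))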
109
+
110
+ Deviations can occur for small droplets and during different rainfall conditions. The distribution tends to fit averaged rainfall, while instantaneous size spectra often deviate and have been modeled as gamma distributions.[31] The distribution has an upper limit due to droplet fragmentation.[23]
111
+
112
+ Raindrops impact at their terminal velocity, which is greater for larger drops due to their larger mass to drag ratio. At sea level and without wind, 0.5 mm (0.020 in) drizzle impacts at 2 m/s (6.6 ft/s) or 7.2 km/h (4.5 mph), while large 5 mm (0.20 in) drops impact at around 9 m/s (30 ft/s) or 32 km/h (20 mph).[32]
113
+
114
+ Rain falling on loosely packed material such as newly fallen ash can produce dimples that can be fossilized, called raindrop impressions.[33] The air density dependence of the maximum raindrop diameter together with fossil raindrop imprints has been used to constrain the density of the air 2.7 billion years ago.[34]
115
+
116
+ The sound of raindrops hitting water is caused by bubbles of air oscillating underwater.[35][36]
117
+
118
+ The METAR code for rain is RA, while the coding for rain showers is SHRA.[37]
119
+
120
+ In certain conditions precipitation may fall from a cloud but then evaporate or sublime before reaching the ground. This is termed virga and is more often seen in hot and dry climates.
121
+
122
+ Stratiform (a broad shield of precipitation with a relatively similar intensity) and dynamic precipitation (convective precipitation which is showery in nature with large changes in intensity over short distances) occur as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as in the vicinity of cold fronts and near and poleward of surface warm fronts. Similar ascent is seen around tropical cyclones outside the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones.[38] A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas.[39] What separates rainfall from other precipitation types, such as ice pellets and snow, is the presence of a thick layer of air aloft which is above the melting point of water, which melts the frozen precipitation well before it reaches the ground. If there is a shallow near surface layer that is below freezing, freezing rain (rain which freezes on contact with surfaces in subfreezing environments) will result.[40] Hail becomes an increasingly infrequent occurrence when the freezing level within the atmosphere exceeds 3,400 m (11,000 ft) above ground level.[41]
123
+
124
+ Convective rain, or showery precipitation, occurs from convective clouds (e.g., cumulonimbus or cumulus congestus). It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs.[38][42] Graupel and hail indicate convection.[43] In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts.[44]
125
+
126
+ Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming, leeward side where a rain shadow is observed.[16]
127
+
128
+ In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall, as it is amongst the places in the world with the highest levels of rainfall, with 9,500 mm (373 in).[45] Systems known as Kona storms affect the state with heavy rains between October and April.[46] Local climates vary considerably on each island due to their topography, divisible into windward (Koʻolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover.[47]
129
+
130
+ In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina.[48] The Sierra Nevada range creates the same effect in North America forming the Great Basin and Mojave Deserts.[49][50]
131
+
132
+ The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls.[51] The term green season is also sometimes used as a euphemism by tourist authorities.[52] Areas with wet seasons are dispersed across portions of the tropics and subtropics.[53] Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year.[54] Some areas with pronounced rainy seasons will see a break in rainfall mid-season when the intertropical convergence zone or monsoon trough move poleward of their location during the middle of the warm season.[27] When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves,[55] freshwater quality improves,[56][57] and vegetation grows significantly.
133
+
134
+ Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (southern hemisphere) or counter clockwise (northern hemisphere).[58] Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions.[59] Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage.[60]
135
+
136
+ The fine particulate matter produced by car exhaust and other human sources of pollution forms cloud condensation nuclei, leads to the production of clouds and increases the likelihood of rain. As commuters and commercial traffic cause pollution to build up over the course of the week, the likelihood of rain increases: it peaks by Saturday, after five days of weekday pollution has been built up. In heavily populated areas that are near the coast, such as the United States' Eastern Seaboard, the effect can be dramatic: there is a 22% higher chance of rain on Saturdays than on Mondays.[61] The urban heat island effect warms cities 0.6 to 5.6 °C (1.1 to 10.1 °F) above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater between 32 to 64 km (20 to 40 mi) downwind of cities, compared with upwind.[62] Some cities induce a total precipitation increase of 51%.[63]
137
+
138
+ Increasing temperatures tend to increase evaporation which can lead to more precipitation. Precipitation generally increased over land north of 30°N from 1900 through 2005 but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter. The Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts—especially in the tropics and subtropics. Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation and/or more evaporation). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1 percent since 1900, with the greatest increases within the East North Central climate region (11.6 percent per century) and the South (11.1 percent). Hawaii was the only region to show a decrease (−9.25 percent).[64]
139
+
140
+ Analysis of 65 years of United States rainfall records shows that the lower 48 states have had an increase in heavy downpours since 1950. The largest increases are in the Northeast and Midwest, which, in the past decade, have seen 31 and 16 percent more heavy downpours compared to the 1950s. Rhode Island is the state with the largest increase, 104%. McAllen, Texas is the city with the largest increase, 700%. Heavy downpours in the analysis are days on which total precipitation exceeded the top one percent of all rain and snow days during the years 1950–2014.[65][66]
141
+
142
+ The most successful attempts at influencing weather involve cloud seeding, which include techniques used to increase winter precipitation over mountains and suppress hail.[67]
143
+
144
+ Rainbands are cloud and precipitation areas which are significantly elongated. Rainbands can be stratiform or convective,[68] and are generated by differences in temperature. When noted on weather radar imagery, this precipitation elongation is referred to as banded structure.[69] Rainbands in advance of warm occluded fronts and warm fronts are associated with weak upward motion,[70] and tend to be wide and stratiform in nature.[71]
145
+
146
+ Rainbands spawned near and ahead of cold fronts can be squall lines which are able to produce tornadoes.[72] Rainbands associated with cold fronts can be warped by mountain barriers perpendicular to the front's orientation due to the formation of a low-level barrier jet.[73] Bands of thunderstorms can form with sea breeze and land breeze boundaries, if enough moisture is present. If sea breeze rainbands become active enough just ahead of a cold front, they can mask the location of the cold front itself.[74]
147
+
148
+ Once a cyclone occludes, an occluded front (a trough of warm air aloft) will be caused by strong southerly winds on its eastern periphery rotating aloft around its northeast, and ultimately northwestern, periphery (also termed the warm conveyor belt), forcing a surface trough to continue into the cold sector on a similar curve to the occluded front. The front creates the portion of an occluded cyclone known as its comma head, due to the comma-like shape of the mid-tropospheric cloudiness that accompanies the feature. It can also be the focus of locally heavy precipitation, with thunderstorms possible if the atmosphere along the front is unstable enough for convection.[75] Banding within the comma head precipitation pattern of an extratropical cyclone can yield significant amounts of rain.[76] Behind extratropical cyclones during fall and winter, rainbands can form downwind of relatively warm bodies of water such as the Great Lakes. Downwind of islands, bands of showers and thunderstorms can develop due to low level wind convergence downwind of the island edges. Offshore California, this has been noted in the wake of cold fronts.[77]
149
+
150
+ Rainbands within tropical cyclones are curved in orientation. Tropical cyclone rainbands contain showers and thunderstorms that, together with the eyewall and the eye, constitute a hurricane or tropical storm. The extent of rainbands around a tropical cyclone can help determine the cyclone's intensity.[78]
151
+
152
+ The phrase acid rain was first used by Scottish chemist Robert Angus Smith in 1852.[79] The pH of rain varies, especially due to its origin. On America's East Coast, rain that is derived from the Atlantic Ocean typically has a pH of 5.0–5.6; rain that comes across the continent from the west has a pH of 3.8–4.8; and local thunderstorms can have a pH as low as 2.0.[80] Rain becomes acidic primarily due to the presence of two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3). Sulfuric acid is derived from natural sources such as volcanoes, and wetlands (sulfate reducing bacteria); and anthropogenic sources such as the combustion of fossil fuels, and mining where H2S is present. Nitric acid is produced by natural sources such as lightning, soil bacteria, and natural fires; while also produced anthropogenically by the combustion of fossil fuels and from power plants. In the past 20 years the concentrations of nitric and sulfuric acid in rainwater have decreased, which may be due to the significant increase in ammonium (most likely as ammonia from livestock production), which acts as a buffer in acid rain and raises the pH.[81]
153
+
154
+ The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E. Specifically, the primary types are A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar. The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert.
155
+
156
+ Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between 1,750 and 2,000 mm (69 and 79 in).[83] A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between 750 and 1,270 mm (30 and 50 in) a year. They are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia.[84] The humid subtropical climate zone is where winter rainfall is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones.[85] Humid subtropical climates lie on the east sides of continents, roughly between latitudes 20° and 40° away from the equator.[86]
157
+
158
+ An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as southeastern Australia, and is accompanied by plentiful precipitation year-round.[87] The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of Western and South Australia, in southwestern South Africa and in parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters.[88] A steppe is a dry grassland.[89] Subarctic climates are cold with continuous permafrost and little precipitation.[90]
159
+
160
+ Rain is measured in units of length per unit time, typically in millimeters per hour,[91] or in countries where imperial units are more common, inches per hour.[92] The "length", or more accurately, "depth" being measured is the depth of rain water that would accumulate on a flat, horizontal and impermeable surface during a given amount of time, typically an hour.[93] One millimeter of rainfall is the equivalent of one liter of water per square meter.[94]
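+
+ As a quick check of that equivalence (plain unit arithmetic, not taken from the cited source): a depth of 1 mm spread over 1 m2 of horizontal surface is
+
+ \[ 1\,\mathrm{mm} \times 1\,\mathrm{m^2} = 10^{-3}\,\mathrm{m} \times 1\,\mathrm{m^2} = 10^{-3}\,\mathrm{m^3} = 1\,\mathrm{L}, \]
+
+ so a report of, say, 20 mm of rain corresponds to roughly 20 liters of water falling on each square meter of ground.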
161
+
162
+ The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in 100-mm (4-in) plastic and 200-mm (8-in) metal varieties.[95] The inner cylinder is filled by 25 mm (0.98 in) of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to 0.25 mm (0.0098 in) resolution, while metal gauges require use of a stick designed with the appropriate 0.25 mm (0.0098 in) markings. After the inner cylinder is filled, the amount inside it is recorded and discarded, and the cylinder is then refilled from the rainfall remaining in the outer cylinder, with each refill added to the overall total, until the outer cylinder is empty.[96] Other types of gauges include the popular wedge gauge (the cheapest rain gauge and most fragile), the tipping bucket rain gauge, and the weighing rain gauge.[97] For those looking to measure rainfall as inexpensively as possible, a cylindrical can with straight sides will act as a rain gauge if left out in the open, but its accuracy will depend on the ruler used to measure the rain. Any of the above rain gauges can be made at home, with enough know-how.[98]
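+
+ The arithmetic of reading such a gauge is simply a running total of the decanted amounts. The sketch below is illustrative only; the function name and the use of the 25 mm inner-cylinder capacity as a sanity check are choices made for this example, not part of any gauge standard.
+
+ def total_rainfall_mm(readings_mm):
+     """Sum the amounts recorded each time the inner cylinder is read.
+
+     readings_mm: inner-cylinder readings in millimeters. No single reading
+     can exceed 25 mm, the capacity of the inner cylinder.
+     """
+     for r in readings_mm:
+         if r < 0 or r > 25:
+             raise ValueError(f"implausible inner-cylinder reading: {r} mm")
+     return sum(readings_mm)
+
+ # Example: the inner cylinder filled twice (25 mm each time) plus a final
+ # partial reading of 7.25 mm decanted from the outer cylinder.
+ print(total_rainfall_mm([25.0, 25.0, 7.25]))  # 57.25 mm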
163
+
164
+ When a precipitation measurement is made, various networks exist across the United States and elsewhere through which rainfall measurements can be submitted over the Internet, such as CoCoRaHS or GLOBE.[99][100] If a network is not available in the area where one lives, the nearest local weather or met office will likely be interested in the measurement.[101]
165
+
166
+ One of the main uses of weather radar is to assess the amount of precipitation that has fallen over large basins for hydrological purposes.[102] For instance, river flood control, sewer management and dam construction are all areas where planners use rainfall accumulation data. Radar-derived rainfall estimates complement surface station data, which can be used for calibration. To produce radar accumulations, rain rates over a point are estimated by using the value of reflectivity data at individual grid points. A reflectivity-to-rain-rate relation is then used, most commonly a power law of the form Z = aR^b, where Z is the radar reflectivity, R is the rainfall rate, and a and b are empirically determined constants.
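+
+ As an illustration of how such a relation is applied in practice (a minimal sketch: the Marshall-Palmer constants a = 200 and b = 1.6 used below are one commonly quoted choice, not the only one, and the function name is arbitrary):
+
+ def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
+     """Convert a radar reflectivity in dBZ to a rain rate in mm/h.
+
+     Assumes a power-law relation Z = a * R**b, so R = (Z / a) ** (1 / b),
+     where Z (in mm^6/m^3) is recovered from dBZ via Z = 10 ** (dBZ / 10).
+     """
+     z = 10.0 ** (dbz / 10.0)
+     return (z / a) ** (1.0 / b)
+
+ # Example: 40 dBZ corresponds to roughly 11-12 mm/h with these constants.
+ print(round(rain_rate_from_dbz(40.0), 1))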
167
+
168
+ Satellite-derived rainfall estimates use passive microwave instruments aboard polar orbiting as well as geostationary weather satellites to indirectly measure rainfall rates.[104] To obtain accumulated rainfall over a time period, the accumulations from each grid box within the images have to be added up over that time.
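+
+ For example, accumulating gridded estimates amounts to a sum over the time dimension of the data. A minimal sketch (the array shape, the random test data and the half-hourly sampling interval are assumptions made purely for illustration):
+
+ import numpy as np
+
+ # rain_rate has shape (time, y, x): one rain-rate field in mm/h per image,
+ # assumed here to be sampled every 30 minutes (0.5 h).
+ rain_rate = np.random.default_rng(0).gamma(shape=0.5, scale=2.0, size=(48, 4, 4))
+ interval_hours = 0.5
+
+ # Accumulated rainfall (mm) in each grid box over the whole period.
+ accumulation = rain_rate.sum(axis=0) * interval_hours
+ print(accumulation.shape)         # (4, 4)
+ print(float(accumulation.max()))  # wettest grid box, in mm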
169
+
170
+ Rainfall intensity is classified according to the rate of precipitation, which depends on the time interval considered.[105] The following categories are used to classify rainfall intensity: light rain, when the precipitation rate is less than 2.5 mm (0.098 in) per hour; moderate rain, when the rate is between 2.5 and 7.6 mm (0.098 and 0.30 in) per hour; heavy rain, when the rate is between 7.6 and 50 mm (0.30 and 2.0 in) per hour; and violent rain, when the rate exceeds 50 mm (2.0 in) per hour.
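+
+ A simple classifier built on those thresholds might look like the following sketch (the threshold values mirror the commonly cited categories above; the function name and the input validation are illustrative choices):
+
+ def classify_rain_intensity(rate_mm_per_hour):
+     """Classify a precipitation rate in mm/h into a rainfall intensity category."""
+     if rate_mm_per_hour < 0:
+         raise ValueError("rain rate cannot be negative")
+     if rate_mm_per_hour < 2.5:
+         return "light"
+     if rate_mm_per_hour < 7.6:
+         return "moderate"
+     if rate_mm_per_hour <= 50.0:
+         return "heavy"
+     return "violent"
+
+ print(classify_rain_intensity(1.0))   # light
+ print(classify_rain_intensity(12.0))  # heavy
+ print(classify_rain_intensity(60.0))  # violent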
171
+
172
+ Euphemisms for a heavy or violent rain include gully washer, trash-mover and toad-strangler.[108]
173
+ The intensity can also be expressed by rainfall erosivity R-factor[109] or in terms of the rainfall time-structure n-index.[105]
174
+
175
+ The likelihood or probability of an event with a specified intensity and duration is called the return period or frequency.[110] The intensity of a storm can be predicted for any return period and storm duration, from charts based on historic data for the location.[111] The term 1 in 10 year storm describes a rainfall event which is unusual and has only about a 10% chance of occurring in any given year. The term 1 in 100 year storm describes a rainfall event which is rare and has only about a 1% chance of occurring in any given year. As with all probabilistic events, it is possible, though improbable, to have multiple "1 in 100 Year Storms" in a single year.[112]
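+
+ Under the usual textbook assumption that years are independent and that a 1 in T year event has an annual exceedance probability of 1/T (a standard relation, not taken from the cited sources), the chance of seeing at least one such event in an n-year window is
+
+ \[ P(\text{at least one event in } n \text{ years}) = 1 - \left(1 - \frac{1}{T}\right)^{n}. \]
+
+ For a 1 in 100 year storm watched over 100 years this gives 1 - 0.99^100, or about 0.63, so there is roughly a 63% chance of at least one occurrence and a sizable chance of none at all, which is consistent with such events not appearing exactly once per century.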
176
+
177
+ The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time period over a specified area.[113] A QPF will be specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200 and 1800 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail.[114] Starting in the mid to late 1990s, QPFs were used within hydrologic forecast models to simulate impact to rivers throughout the United States.[115] Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, or in the lowest levels of the atmosphere, which decreases with height.[116] QPF can be generated on a quantitative basis, forecasting amounts, or on a qualitative basis, forecasting the probability of a specific amount.[117] Radar imagery forecasting techniques show higher skill than model forecasts within 6 to 7 hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast.[118]
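+
+ As an illustration of how such verification can work (a minimal sketch of one common approach, not the specific method of the cited studies: forecasts are scored against gauge observations with a 2x2 contingency table at a chosen rain/no-rain threshold, here 0.2 mm, and the function name is arbitrary):
+
+ def contingency_scores(forecast_mm, observed_mm, threshold_mm=0.2):
+     """Probability of detection (POD) and false alarm ratio (FAR) for a
+     yes/no rain forecast defined by an accumulation threshold."""
+     hits = misses = false_alarms = 0
+     for f, o in zip(forecast_mm, observed_mm):
+         f_yes, o_yes = f >= threshold_mm, o >= threshold_mm
+         if f_yes and o_yes:
+             hits += 1
+         elif o_yes:
+             misses += 1
+         elif f_yes:
+             false_alarms += 1
+     pod = hits / (hits + misses) if hits + misses else float("nan")
+     far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
+     return pod, far
+
+ # Four forecast/observation pairs: one miss, one hit, one false alarm and
+ # one correct "no rain", giving POD = 0.5 and FAR = 0.5.
+ print(contingency_scores([0.0, 1.2, 3.4, 0.1], [0.3, 0.9, 0.0, 0.0]))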
178
+
179
+ Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive; therefore, rain (being the most effective means of watering) is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating to crops. Drought can kill crops and increase erosion,[119] while overly wet weather can cause harmful fungus growth.[120] Plants need varying amounts of rainfall to survive. For example, certain cacti require small amounts of water,[121] while tropical plants may need up to hundreds of inches of rain per year to survive.
180
+
181
+ In areas with wet and dry seasons, soil nutrients diminish and erosion increases during the wet season.[27] Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature.[122] Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season.[123] Rain may be harvested through the use of rainwater tanks and treated for potable use, or used for non-potable purposes indoors or for irrigation.[124] Excessive rain during short periods of time can cause flash floods.[125]
182
+
183
+ Cultural attitudes towards rain differ across the world. In temperate climates, people tend to be more stressed when the weather is unstable or cloudy, with its impact greater on men than women.[126] Rain can also bring joy, as some consider it to be soothing or enjoy the aesthetic appeal of it. In dry places, such as India,[127] or during periods of drought,[128] rain lifts people's moods. In Botswana, the Setswana word for rain, pula, is used as the name of the national currency, in recognition of the economic importance of rain to the country, which has a desert climate.[129] Several cultures have developed means of dealing with rain, including protection devices such as umbrellas and raincoats, and diversion devices such as gutters and storm drains that lead rain to sewers.[130] Many people find the scent during and immediately after rain pleasant or distinctive. The source of this scent is petrichor, an oil produced by plants, then absorbed by rocks and soil, and later released into the air during rainfall.[131]
184
+
185
+ Rain holds an important religious significance in many cultures.[132] The ancient Sumerians believed that rain was the semen of the sky-god An,[133] which fell from the heavens to inseminate his consort, the earth-goddess Ki,[133] causing her to give birth to all the plants of the earth.[133] The Akkadians believed that the clouds were the breasts of Anu's consort Antu[133] and that rain was milk from her breasts.[133] According to Jewish tradition, in the first century BC, the Jewish miracle-worker Honi ha-M'agel ended a three-year drought in Judaea by drawing a circle in the sand and praying for rain, refusing to leave the circle until his prayer was granted.[134] In his Meditations, the Roman emperor Marcus Aurelius preserves a prayer for rain made by the Athenians to the Greek sky-god Zeus.[132] Various Native American tribes are known to have historically conducted rain dances in an effort to encourage rainfall.[132] Rainmaking rituals are also important in many African cultures.[135] In the present-day United States, various state governors have held Days of Prayer for rain, including the Days of Prayer for Rain in the State of Texas in 2011.[132]
186
+
187
+ Approximately 505,000 km3 (121,000 cu mi) of water falls as precipitation each year across the globe with 398,000 km3 (95,000 cu mi) of it over the oceans.[136] Given the Earth's surface area, that means the globally averaged annual precipitation is 990 mm (39 in). Deserts are defined as areas with an average annual precipitation of less than 250 mm (10 in) per year,[137][138] or as areas where more water is lost by evapotranspiration than falls as precipitation.[139]
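+
+ That average follows from dividing the annual volume by the Earth's total surface area of about 5.1 x 10^8 km2 (a standard figure, used here only to make the arithmetic explicit):
+
+ \[ \frac{505{,}000\ \mathrm{km^3}}{5.1 \times 10^{8}\ \mathrm{km^2}} \approx 9.9 \times 10^{-4}\ \mathrm{km} \approx 0.99\ \mathrm{m} \approx 990\ \mathrm{mm}. \]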
188
+
189
+ The northern half of Africa is occupied by the world's most extensive hot, dry region, the Sahara Desert. Deserts also occupy much of southern Africa: the Namib and the Kalahari. Across Asia, a large annual rainfall minimum, composed primarily of deserts, stretches from the Gobi Desert in Mongolia west-southwest through western Pakistan (Balochistan) and Iran into the Arabian Desert in Saudi Arabia. Most of Australia is semi-arid or desert,[140] making it the world's driest inhabited continent. In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina.[48] The drier areas of the United States are regions where the Sonoran Desert overspreads the Desert Southwest, the Great Basin and central Wyoming.[141]
190
+
191
+ Since rain falls only as a liquid, it cannot fall when temperatures are below freezing. As a result, very cold climates see very little rainfall and are often known as polar deserts. A common biome in these regions is the tundra, which has a short summer thaw and a long frozen winter. Ice caps see no rain at all, making Antarctica the world's driest continent.
192
+
193
+ Rainforests are areas of the world with very high rainfall. Both tropical and temperate rainforests exist. Tropical rainforests occupy a large band of the planet mostly along the equator. Most temperate rainforests are located on mountainous west coasts between 45 and 55 degrees latitude, but they are often found in other areas.
194
+
195
+ Around 40–75% of all biotic life is found in rainforests. Rainforests are also responsible for 28% of the world's oxygen turnover.
196
+
197
+ The equatorial region near the Intertropical Convergence Zone (ITCZ), or monsoon trough, is the wettest portion of the world's continents. Annually, the rain belt within the tropics marches northward by August, then moves back southward into the Southern Hemisphere by February and March.[142] Within Asia, rainfall is favored across its southern portion from India east and northeast across the Philippines and southern China into Japan due to the monsoon advecting moisture primarily from the Indian Ocean into the region.[143] The monsoon trough can reach as far north as the 40th parallel in East Asia during August before moving southward thereafter. Its poleward progression is accelerated by the onset of the summer monsoon which is characterized by the development of lower air pressure (a thermal low) over the warmest part of Asia.[144][145] Similar, but weaker, monsoon circulations are present over North America and Australia.[146][147] During the summer, the Southwest monsoon combined with Gulf of California and Gulf of Mexico moisture moving around the subtropical ridge in the Atlantic Ocean bring the promise of afternoon and evening thunderstorms to the southern tier of the United States as well as the Great Plains.[148] The eastern half of the contiguous United States east of the 98th meridian, the mountains of the Pacific Northwest, and the Sierra Nevada range are the wetter portions of the nation, with average rainfall exceeding 760 mm (30 in) per year.[149] Tropical cyclones enhance precipitation across southern sections of the United States,[150] as well as Puerto Rico, the United States Virgin Islands,[151] the Northern Mariana Islands,[152] Guam, and American Samoa.
198
+
199
+ Westerly flow from the mild north Atlantic leads to wetness across western Europe, in particular Ireland and the United Kingdom, where the western coasts can receive between 1,000 mm (39 in) of rain per year at sea level and 2,500 mm (98 in) on the mountains. Bergen, Norway is one of the more famous European rain-cities with its yearly precipitation of 2,250 mm (89 in) on average. During the fall, winter, and spring, Pacific storm systems bring most of Hawaii and the western United States much of their precipitation.[148] Over the top of the ridge, the jet stream brings a summer precipitation maximum to the Great Lakes. Large thunderstorm areas known as mesoscale convective complexes move through the Plains, Midwest, and Great Lakes during the warm season, contributing up to 10% of the annual precipitation to the region.[153]
200
+
201
+ The El Niño-Southern Oscillation affects the precipitation distribution by altering rainfall patterns across the western United States,[154] Midwest,[155][156] the Southeast,[157] and throughout the tropics. There is also evidence that global warming is leading to increased precipitation in the eastern portions of North America, while droughts are becoming more frequent in the tropics and subtropics.
202
+
203
+ Cherrapunji, situated on the southern slopes of the Eastern Himalaya in Shillong, India is the confirmed wettest place on Earth, with an average annual rainfall of 11,430 mm (450 in). The highest recorded rainfall in a single year was 22,987 mm (905.0 in) in 1861. The 38-year average at nearby Mawsynram, Meghalaya, India is 11,873 mm (467.4 in).[158] The wettest spot in Australia is Mount Bellenden Ker in the north-east of the country which records an average of 8,000 mm (310 in) per year, with over 12,200 mm (480.3 in) of rain recorded during 2000.[159] The Big Bog on the island of Maui has the highest average annual rainfall in the Hawaiian Islands, at 10,300 mm (404 in).[160] Mount Waiʻaleʻale on the island of Kauaʻi achieves similar torrential rains, while slightly lower than that of the Big Bog, at 9,500 mm (373 in)[161] of rain per year over the last 32 years, with a record 17,340 mm (683 in) in 1982. Its summit is considered one of the rainiest spots on earth, with a reported 350 days of rain per year.
204
+
205
+ Lloró, a town situated in Chocó, Colombia, is probably the place with the largest rainfall in the world, averaging 13,300 mm (523.6 in) per year.[162] The Department of Chocó is extraordinarily humid. Tutunendaó, a small town situated in the same department, is one of the wettest estimated places on Earth, averaging 11,394 mm (448.6 in) per year; in 1974 the town received 26,303 mm (86 ft 3.6 in), the largest annual rainfall measured in Colombia. Unlike Cherrapunji, which receives most of its rainfall between April and September, Tutunendaó receives rain almost uniformly distributed throughout the year.[163] Quibdó, the capital of Chocó, receives the most rain in the world among cities with over 100,000 inhabitants: 9,000 mm (354 in) per year.[162] Storms in Chocó can drop 500 mm (20 in) of rainfall in a day. This amount is more than what falls in many cities in a year's time.
206
+
207
+ Rainfalls of diamonds have been suggested to occur on the gas giant planets, Jupiter and Saturn,[168] as well as on the ice giant planets, Uranus and Neptune.[169] There is likely to be rain of various compositions in the upper atmospheres of the gas giants, as well as precipitation of liquid neon in the deep atmospheres.[170][171] On Titan, Saturn's largest natural satellite, infrequent methane rain is thought to carve the moon's numerous surface channels.[172] On Venus, sulfuric acid virga evaporates 25 km (16 mi) from the surface.[173] Extrasolar planet OGLE-TR-56b in the constellation Sagittarius is hypothesized to have iron rain.[174] Similarly, research carried out by the European Southern Observatory shows that WASP-76b can produce showers of burning liquid iron droplets as temperatures decrease during the planet's night hours.[175] There is evidence from samples of basalt brought back by the Apollo missions that the Moon has been subject to lava rain.[176]
208
+
209
210
+
en/4674.html.txt ADDED
@@ -0,0 +1,157 @@
1
+
2
+
3
+ Feathers are epidermal growths that form the distinctive outer covering, or plumage, on dinosaurs, both avian and some non-avian, and possibly other archosauromorphs. They are considered the most complex integumentary structures found in vertebrates[1][2] and a premier example of a complex evolutionary novelty.[3] They are among the characteristics that distinguish the extant birds from other living groups.[4]
4
+
5
+ Although feathers cover most of a bird's body, they arise only from certain well-defined tracts on the skin. They aid in flight, thermal insulation, and waterproofing. In addition, coloration helps in communication and protection.[5] Plumology (or plumage science) is the name for the science that is associated with the study of feathers.[6][7]
6
+
7
+ Feathers are among the most complex integumentary appendages found in vertebrates and are formed in tiny follicles in the epidermis, or outer skin layer, that produce keratin proteins. The β-keratins in feathers, beaks and claws — and the claws, scales and shells of reptiles — are composed of protein strands hydrogen-bonded into β-pleated sheets, which are then further twisted and crosslinked by disulfide bridges into structures even tougher than the α-keratins of mammalian hair, horns and hooves.[8][9] The exact signals that induce the growth of feathers on the skin are not known, but it has been found that the transcription factor cDermo-1 induces the growth of feathers on skin and scales on the leg.[10]
8
+
9
+ There are two basic types of feather: vaned feathers which cover the exterior of the body, and down feathers which are underneath the vaned feathers. The pennaceous feathers are vaned feathers. Also called contour feathers, pennaceous feathers arise from tracts and cover the entire body. A third, rarer type of feather, the filoplume, is hairlike and (if present in a bird; filoplumes are entirely absent in ratites[11]) is closely associated with pennaceous feathers and is often entirely hidden by them, with one or two filoplumes attached and sprouting from near the same point of the skin as each pennaceous feather, at least on a bird's head, neck and trunk.[12][13] In some passerines, filoplumes arise exposed beyond the pennaceous feathers on the neck.[1] The remiges, or flight feathers of the wing, and rectrices, or flight feathers of the tail, are the most important feathers for flight. A typical vaned feather features a main shaft, called the rachis. Fused to the rachis are a series of branches, or barbs; the barbs themselves are also branched and form the barbules. These barbules have minute hooks called barbicels for cross-attachment. Down feathers are fluffy because they lack barbicels, so the barbules float free of each other, allowing the down to trap air and provide excellent thermal insulation. At the base of the feather, the rachis expands to form the hollow tubular calamus (or quill) which inserts into a follicle in the skin. The basal part of the calamus is without vanes. This part is embedded within the skin follicle and has an opening at the base (proximal umbilicus) and a small opening on the side (distal umbilicus).[14]
10
+
11
+ Hatchling birds of some species have a special kind of natal down feathers (neossoptiles) which are pushed out when the normal feathers (teleoptiles) emerge.[1]
12
+
13
+ Flight feathers are stiffened so as to work against the air in the downstroke but yield in other directions. It has been observed that the orientation pattern of β-keratin fibers in the feathers of flying birds differs from that in flightless birds: the fibers are better aligned along the shaft axis direction towards the tip,[15][16] and the lateral walls of the rachis region show a structure of crossed fibers.[17][18]
14
+
15
+ Feathers insulate birds from water and cold temperatures. They may also be plucked to line the nest and provide insulation to the eggs and young. The individual feathers in the wings and tail play important roles in controlling flight.[17] Some species have a crest of feathers on their heads. Although feathers are light, a bird's plumage weighs two or three times more than its skeleton, since many bones are hollow and contain air sacs. Color patterns serve as camouflage against predators for birds in their habitats, and serve as camouflage for predators looking for a meal. As with fish, the top and bottom colors may be different, in order to provide camouflage during flight. Striking differences in feather patterns and colors are part of the sexual dimorphism of many bird species and are particularly important in selection of mating pairs. In some cases there are differences in the UV reflectivity of feathers across sexes even though no differences in color are noted in the visible range.[19] The wing feathers of male club-winged manakins Machaeropterus deliciosus have special structures that are used to produce sounds by stridulation.[20]
16
+
17
+ Some birds have a supply of powder down feathers which grow continuously, with small particles regularly breaking off from the ends of the barbules. These particles produce a powder that sifts through the feathers on the bird's body and acts as a waterproofing agent and a feather conditioner. Powder down has evolved independently in several taxa and can be found in down as well as in pennaceous feathers. They may be scattered in plumage as in the pigeons and parrots or in localized patches on the breast, belly, or flanks, as in herons and frogmouths. Herons use their bill to break the powder down feathers and to spread them, while cockatoos may use their head as a powder puff to apply the powder.[21] Waterproofing can be lost by exposure to emulsifying agents due to human pollution. Feathers can then become waterlogged, causing the bird to sink. It is also very difficult to clean and rescue birds whose feathers have been fouled by oil spills. The feathers of cormorants soak up water and help to reduce buoyancy, thereby allowing the birds to swim submerged.[22]
18
+
19
+ Bristles are stiff, tapering feathers with a large rachis but few barbs. Rictal bristles are found around the eyes and bill. They may serve a similar purpose to eyelashes and vibrissae in mammals. Although there is as yet no clear evidence, it has been suggested that rictal bristles have sensory functions and may help insectivorous birds to capture prey.[23] In one study, willow flycatchers (Empidonax traillii) were found to catch insects equally well before and after removal of the rictal bristles.[24]
20
+
21
+ Grebes are peculiar in their habit of ingesting their own feathers and feeding them to their young. Observations on their diet of fish and the frequency of feather eating suggest that ingesting feathers, particularly down from their flanks, aids in forming easily ejectable pellets.[25]
22
+
23
+ Contour feathers are not uniformly distributed on the skin of the bird except in some groups such as the penguins, ratites and screamers.[26] In most birds the feathers grow from specific tracts of skin called pterylae; between the pterylae there are regions which are free of feathers called apterylae (or apteria). Filoplumes and down may arise from the apterylae. The arrangement of these feather tracts, pterylosis or pterylography, varies across bird families and has been used in the past as a means for determining the evolutionary relationships of bird families.[27][28]
24
+
25
+ The colors of feathers are produced by pigments, by microscopic structures that can refract, reflect, or scatter selected wavelengths of light, or by a combination of both.
26
+
27
+ Most feather pigments are melanins (brown and beige pheomelanins, black and grey eumelanins) and carotenoids (red, yellow, orange); other pigments occur only in certain taxa – the yellow to red psittacofulvins[29] (found in some parrots) and the red turacin and green turacoverdin (porphyrin pigments found only in turacos).
28
+
29
+ Structural coloration[5][30][31] is involved in the production of blue colors, iridescence, most ultraviolet reflectance and in the enhancement of pigmentary colors. Structural iridescence has been reported[32] in fossil feathers dating back 40 million years. White feathers lack pigment and scatter light diffusely; albinism in birds is caused by defective pigment production, though structural coloration will not be affected (as can be seen, for example, in blue-and-white budgerigars).
30
+
31
+ The blues and bright greens of many parrots are produced by constructive interference of light reflecting from different layers of structures in feathers. In the case of green plumage, in addition to yellow, the specific feather structure involved is called by some the Dyck texture.[33][34] Melanin is often involved in the absorption of light; in combination with a yellow pigment, it produces a dull olive-green.
32
+
33
+ In some birds, feather colors may be created, or altered, by secretions from the uropygial gland, also called the preen gland. The yellow bill colors of many hornbills are produced by such secretions. It has been suggested that there are other color differences that may be visible only in the ultraviolet region,[21] but studies have failed to find evidence.[35] The oil secretion from the uropygial gland may also have an inhibitory effect on feather bacteria.[36]
34
+
35
+ The reds, orange and yellow colors of many feathers are caused by various carotenoids. Carotenoid-based pigments might be honest signals of fitness because they are derived from special diets and hence might be difficult to obtain,[37][38] and/or because carotenoids are required for immune function and hence sexual displays come at the expense of health.[39]
36
+
37
+ A bird's feathers undergo wear and tear and are replaced periodically during the bird's life through molting. New feathers, known when developing as blood feathers or pin feathers depending on the stage of growth, are formed through the same follicles from which the old ones were fledged. The presence of melanin in feathers increases their resistance to abrasion.[40] One study noted that melanin-based feathers degraded more quickly under bacterial action than unpigmented feathers or those with carotenoid pigments, even when the feathers compared came from the same species.[41] However, another study the same year compared the action of bacteria on pigmentations of two song sparrow species and observed that the darker pigmented feathers were more resistant; the authors cited other research also published in 2004 that stated increased melanin provided greater resistance. They observed that the greater resistance of the darker birds confirmed Gloger's rule.[42]
38
+
39
+ Although sexual selection plays a major role in the development of feathers, in particular their color, it is not the only factor involved. New studies suggest that the distinctive feathers of birds also influence many important aspects of avian behavior, such as the height at which different species build their nests. Since females are the prime caregivers, evolution has helped select females to display duller colored down so that they may blend into the nesting environment; the position of the nest, and whether it has a greater chance of being under predation, has exerted constraints on female birds' plumage.[43] A species of bird that nests on the ground, rather than in the canopy of the trees, will need to have much duller colors in order not to attract attention to the nest. The nest-height study found that birds that nest in the canopies of trees often suffer many more predator attacks due to the brighter color of feathers that the female displays.[43] Another evolutionary influence on why birds' feathers are so colorful and display so many patterns may be that birds developed their bright colors from the vegetation and flowers that thrive around them. Most bird species blend into their environment to some degree through camouflage, so if a species' habitat is full of colors and patterns, the species would eventually evolve to blend in to avoid being eaten. Birds' feathers show a large range of colors, even exceeding the variety of many plants, leaf and flower colors.[44]
40
+
41
+ The feather surface is the home for some ectoparasites, notably feather lice (Phthiraptera) and feather mites. Feather lice typically live on a single host and can move only from parents to chicks, between mating birds, and, occasionally, by phoresy. This life history has resulted in most of the parasite species being specific to the host and coevolving with the host, making them of interest in phylogenetic studies.[45]
42
+
43
+ Feather holes are chewing traces of lice (most probably Brueelia spp. lice) on the wing and tail feathers. They were described on barn swallows, and because they are easy to count, many evolutionary, ecological, and behavioral publications use them to quantify the intensity of infestation.
44
+
45
+ Parasitic cuckoos which grow up in the nests of other species also have host-specific feather lice and these seem to be transmitted only after the young cuckoos leave the host nest.[46]
46
+
47
+ Birds maintain their feather condition by preening and bathing in water or dust. It has been suggested that a peculiar behavior of birds, anting, in which ants are introduced into the plumage, helps to reduce parasites, but no supporting evidence has been found.[47]
48
+
49
+ Feathers have a number of utilitarian, cultural and religious uses.
50
+
51
+ Feathers are both soft and excellent at trapping heat; thus, they are sometimes used in high-class bedding, especially pillows, blankets, and mattresses. They are also used as filling for winter clothing and outdoor bedding, such as quilted coats and sleeping bags. Goose and eider down have great loft, the ability to expand from a compressed, stored state to trap large amounts of compartmentalized, insulating air.[48]
52
+
53
+ Bird feathers have long been used for fletching arrows. Colorful feathers such as those belonging to pheasants have been used to decorate fishing lures.
54
+
55
+ Feathers of large birds (most often geese) have been and are used to make quill pens. The word pen itself is derived from the Latin penna, meaning feather.[49] The French word plume can mean either feather or pen.
56
+
57
+ Feathers are also valuable in aiding the identification of species in forensic studies, particularly in bird strikes to aircraft. The ratios of hydrogen isotopes in feathers help in determining the geographic origins of birds.[50] Feathers may also be useful in the non-destructive sampling of pollutants.[51]
58
+
59
+ The poultry industry produces a large amount of feathers as waste, which, like other forms of keratin, are slow to decompose. Feather waste has been used in a number of industrial applications as a medium for culturing microbes,[52] biodegradable polymers,[53] and production of enzymes.[54] Feather proteins have been tried as an adhesive for wood board.[55]
60
+
61
+ Some groups of Native people in Alaska have used ptarmigan feathers as temper (non-plastic additives) in pottery manufacture since the first millennium BC in order to promote thermal shock resistance and strength.[56]
62
+
63
+ Historically, the hunting of birds for decorative and ornamental feathers (including in Victorian fashion) has endangered some species and helped to contribute to the extinction of others.[57] For instance, South American hummingbird feathers were used in the past to dress some of the miniature birds featured in singing bird boxes.
64
+
65
+ Eagle feathers have great cultural and spiritual value to American Indians in the US and First Nations peoples in Canada as religious objects. In the United States the religious use of eagle and hawk feathers is governed by the eagle feather law, a federal law limiting the possession of eagle feathers to certified and enrolled members of federally recognized Native American tribes.
66
+
67
+ In South America, brews made from the feathers of condors are used in traditional medications.[58] In India, feathers of the Indian peacock have been used in traditional medicine for snakebite, infertility, and coughs.[59][60]
68
+
69
+ During the 18th, 19th, and early 20th centuries, there was a booming international trade in plumes for extravagant women's hats and other headgear. Frank Chapman noted in 1886 that feathers of as many as 40 species of birds were used in about three-fourths of the 700 ladies' hats that he observed in New York City.[61] This trade caused severe losses to bird populations (for example, egrets and whooping cranes). Conservationists led a major campaign against the use of feathers in hats. This contributed to passage of the Lacey Act in 1900, and to changes in fashion. The ornamental feather market then largely collapsed.[62][63]
70
+
71
+ More recently, rooster plumage has become a popular trend as a hairstyle accessory, with feathers formerly used as fishing lures now being used to provide color and style to hair.[64] Today, feathers used in fashion and in military headdresses and clothes are obtained as a waste product of poultry farming, including chickens, geese, turkeys, pheasants, and ostriches. These feathers are dyed and manipulated to enhance their appearance, as poultry feathers are naturally often dull in appearance compared to the feathers of wild birds.
72
+
73
+ Feather products manufacturing in Europe has declined in the last 60 years, mainly due to competition from Asia.
74
+ Feathers have adorned hats at many prestigious events such as weddings and Ladies Day at racecourses (Royal Ascot).
75
+
76
+ The functional view on the evolution of feathers has traditionally focused on insulation, flight and display. Discoveries of non-flying Late Cretaceous feathered dinosaurs in China,[65] however, suggest that flight could not have been the original primary function as the feathers simply would not have been capable of providing any form of lift.[66][67] There have been suggestions that feathers may have had their original function in thermoregulation, waterproofing, or even as sinks for metabolic wastes such as sulphur.[68] Recent discoveries are argued to support a thermoregulatory function, at least in smaller dinosaurs.[69][70] Some researchers even argue that thermoregulation arose from bristles on the face that were used as tactile sensors.[71] While feathers have been suggested as having evolved from reptilian scales, there are numerous objections to that idea, and more recent explanations have arisen from the paradigm of evolutionary developmental biology.[2] Theories of the scale-based origins of feathers suggest that the planar scale structure was modified for development into feathers by splitting to form the webbing; however, that developmental process involves a tubular structure arising from a follicle and the tube splitting longitudinally to form the webbing.[1][2] The number of feathers per unit area of skin is higher in smaller birds than in larger birds, and this trend points to their important role in thermal insulation, since smaller birds lose more heat due to the relatively larger surface area in proportion to their body weight.[5] The miniaturization of birds also played a role in the evolution of powered flight.[72] The coloration of feathers is believed to have evolved primarily in response to sexual selection. In one fossil specimen of the paravian Anchiornis huxleyi, the features are so well preserved that the melanosome (pigment cells) structure can be observed. By comparing the shape of the fossil melanosomes to melanosomes from extant birds, the color and pattern of the feathers on Anchiornis could be determined.[73] Anchiornis was found to have black-and-white-patterned feathers on the forelimbs and hindlimbs, with a reddish-brown crest. This pattern is similar to the coloration of many extant bird species, which use plumage coloration for display and communication, including sexual selection and camouflage. It is likely that non-avian dinosaur species utilized plumage patterns for similar functions as modern birds before the origin of flight. In many cases, the physiological condition of the birds (especially males) is indicated by the quality of their feathers, and this is used (by the females) in mate choice.[74][75] Additionally, when comparing different Ornithomimus edmontonicus specimens, older individuals were found to have a pennibrachium (a wing-like structure consisting of elongate feathers), while younger ones did not. This suggests that the pennibrachium was a secondary sex characteristic and likely had a sexual function.[76]
77
+
78
+ Feathers and scales are made up of two distinct forms of keratin, and it was long thought that each type of keratin was exclusive to each skin structure (feathers and scales). However, a study published in 2006 confirmed the presence of feather keratin in the early stages of development of American alligator scales. This type of keratin, previously thought to be specific to feathers, is suppressed during embryological development of the alligator and so is not present in the scales of mature alligators. The presence of this homologous keratin in both birds and crocodilians indicates that it was inherited from a common ancestor. This may suggest that crocodilian scales, bird and dinosaur feathers, and pterosaur pycnofibres are all developmental expressions of the same primitive archosaur skin structures; suggesting that feathers and pycnofibers could be homologous.[77]
79
+
80
+ Several non-avian dinosaurs had feathers on their limbs that would not have functioned for flight.[65][2] One theory suggests that feathers originally evolved on dinosaurs due to their insulation properties; then, small dinosaur species which grew longer feathers may have found them helpful in gliding, leading to the evolution of proto-birds like Archaeopteryx and Microraptor zhaoianus. Another theory posits that the original adaptive advantage of early feathers was their pigmentation or iridescence, contributing to sexual preference in mate selection.[78] Dinosaurs that had feathers or protofeathers include Pedopenna daohugouensis[79] and Dilong paradoxus, a tyrannosauroid which is 60 to 70 million years older than Tyrannosaurus rex.[80]
81
+
82
+ The majority of dinosaurs known to have had feathers or protofeathers are theropods; however, featherlike "filamentous integumentary structures" are also known from the ornithischian dinosaurs Tianyulong and Psittacosaurus.[81] The exact nature of these structures is still under study. However, it is believed that the stage-1 feathers (see Evolutionary stages section below) such as those seen in these two ornithischians likely functioned in display.[82] In 2014, the ornithischian Kulindadromeus was reported as having structures resembling stage-3 feathers.[83]
83
+
84
+ Since the 1990s, dozens of feathered dinosaurs have been discovered in the clade Maniraptora, which includes the clade Avialae and the recent common ancestors of birds, Oviraptorosauria and Deinonychosauria. In 1998, the discovery of a feathered oviraptorosaurian, Caudipteryx zoui, challenged the notion of feathers as a structure exclusive to Avialae.[84] Buried in the Yixian Formation in Liaoning, China, C. zoui lived during the Early Cretaceous Period. Present on the forelimbs and tails, their integumentary structure has been accepted[by whom?] as pennaceous vaned feathers based on the rachis and herringbone pattern of the barbs. In the clade Deinonychosauria, the continued divergence of feathers is also apparent in the families Troodontidae and Dromaeosauridae. Branched feathers with rachis, barbs, and barbules were discovered in many members including Sinornithosaurus millenii, a dromaeosaurid found in the Yixian formation (124.6 MYA).[85]
85
+
86
+ Previously, a temporal paradox existed in the evolution of feathers—theropods with highly derived bird-like characteristics occurred at a later time than Archaeopteryx—suggesting that the descendants of birds arose before the ancestor. However, the discovery of Anchiornis huxleyi in the Late Jurassic Tiaojishan Formation (160 MYA) in western Liaoning in 2009[86][87]
87
+ resolved this paradox. By predating Archaeopteryx, Anchiornis proves the existence of a modernly feathered theropod ancestor, providing insight into the dinosaur-bird transition. The specimen shows distribution of large pennaceous feathers on the forelimbs and tail, implying that pennaceous feathers spread to the rest of the body at an earlier stage in theropod evolution.[88] The development of pennaceous feathers did not replace earlier filamentous feathers. Filamentous feathers are preserved alongside modern-looking flight feathers — including some with modifications found in the feathers of extant diving birds — in 80 million year old amber from Alberta.[89]
88
+
89
+ Two small wings trapped in amber dating to 100 mya show plumage existed in some bird predecessors. The wings most probably belonged to enantiornithes, a diverse group of avian dinosaurs.[90][91]
90
+
91
+ A large phylogenetic analysis of early dinosaurs by Matthew Baron, David B. Norman and Paul Barrett (2017) found that Theropoda is actually more closely related to Ornithischia, to which it formed the sister group within the clade Ornithoscelida. The study also suggested that if the feather-like structures of theropods and ornithischians are of common evolutionary origin then it would be possible that feathers were restricted to Ornithoscelida. If so, then the origin of feathers would have likely occurred as early as the Middle Triassic.[92]
92
+
93
+ Several studies of feather development in the embryos of modern birds, coupled with the distribution of feather types among various prehistoric bird precursors, have allowed scientists to attempt a reconstruction of the sequence in which feathers first evolved and developed into the types found on modern birds.
94
+
95
+ Feather evolution was broken down into the following stages by Xu and Guo in 2009:[82]
96
+
97
+ However, Foth (2011) showed that some of these purported stages (stages 2 and 5 in particular) are likely simply artifacts of preservation caused by the way fossil feathers are crushed and the feather remains or imprints are preserved. Foth re-interpreted stage 2 feathers as crushed or misidentified feathers of at least stage 3, and stage 5 feathers as crushed stage 6 feathers.[93]
98
+
99
+ The following simplified diagram of dinosaur relationships follows these results, and shows the likely distribution of plumaceous (downy) and pennaceous (vaned) feathers among dinosaurs and prehistoric birds. The diagram follows one presented by Xu and Guo (2009)[82] modified with the findings of Foth (2011).[93] The numbers accompanying each name refer to the presence of specific feather stages. Note that 's' indicates the known presence of scales on the body.
100
+
101
+ Heterodontosauridae (1)
102
+
103
+ Thyreophora (s)
104
+
105
+ Ornithopoda (s)
106
+
107
+ Psittacosauridae (s, 1)
108
+
109
+ Ceratopsidae (s)
110
+
111
+ Sauropodomorpha (s)
112
+
113
+ Aucasaurus (s)
114
+
115
+ Carnotaurus (s)
116
+
117
+ Ceratosaurus (s)
118
+
119
+ Dilong (3?)
120
+
121
+ Other tyrannosauroids (s, 1)
122
+
123
+ Juravenator (s, 3?)
124
+
125
+ Sinosauropteryx (3+)
126
+
127
+ Therizinosauria (1, 3+)
128
+
129
+ Alvarezsauridae (3?)
130
+
131
+ Oviraptorosauria (4, 6)
132
+
133
+ Troodontidae (3+, 6)
134
+
135
+ Other dromaeosaurids
136
+
137
+ Sinornithosaurus (3+, 6)
138
+
139
+ Microraptor (3+, 6, 7)
140
+
141
+ Scansoriopterygidae (3+, 6, 8)
142
+
143
+ Archaeopterygidae (3+, 6, 7)
144
+
145
+ Jeholornis (6, 7)
146
+
147
+ Confuciusornis (4, 6, 7, 8)
148
+
149
+ Enantiornithes (4, 6, 7, 8)
150
+
151
+ Neornithes (4, 6, 7, 8)
152
+
153
+ Pterosaurs were long known to have filamentous fur-like structures covering their body known as pycnofibres, which were generally considered distinct from the "true feathers" of birds and their dinosaur kin. However, a 2018 study of two small, well-preserved pterosaur fossils from the Jurassic of Inner Mongolia, China indicated that pterosaurs were covered in an array of differently-structured pycnofibres (rather than just filamentous ones), with several of these structures displaying diagnostic features of feathers, such as non-veined grouped filaments and bilaterally branched filaments, both of which were originally thought to be exclusive to birds and other maniraptoran dinosaurs. Given these findings, it is possible that feathers have deep evolutionary origins in ancestral archosaurs, though there is also a possibility that these structures independently evolved to resemble bird feathers via convergent evolution. Mike Benton, the study's senior author, lent credence to the former theory, stating "We couldn’t find any anatomical evidence that the four pycnofiber types are in any way different from the feathers of birds and dinosaurs. Therefore, because they are the same, they must share an evolutionary origin, and that was about 250 million years ago, long before the origin of birds." [94][95][96][97]
154
+
155
en/4675.html.txt ADDED
@@ -0,0 +1,157 @@
1
+
2
+
3
+ Feathers are epidermal growths that form the distinctive outer covering, or plumage, on dinosaurs, both avian and some non-avian, and possibly other archosauromorphs. They are considered the most complex integumentary structures found in vertebrates[1][2] and a premier example of a complex evolutionary novelty.[3] They are among the characteristics that distinguish the extant birds from other living groups.[4]
4
+
5
+ Although feathers cover most of a bird's body, they arise only from certain well-defined tracts on the skin. They aid in flight, thermal insulation, and waterproofing. In addition, coloration helps in communication and protection.[5] Plumology (or plumage science) is the name for the science that is associated with the study of feathers.[6][7]
6
+
7
+ Feathers are among the most complex integumentary appendages found in vertebrates and are formed in tiny follicles in the epidermis, or outer skin layer, that produce keratin proteins. The β-keratins in feathers, beaks and claws — and the claws, scales and shells of reptiles — are composed of protein strands hydrogen-bonded into β-pleated sheets, which are then further twisted and crosslinked by disulfide bridges into structures even tougher than the α-keratins of mammalian hair, horns and hooves.[8][9] The exact signals that induce the growth of feathers on the skin are not known, but it has been found that the transcription factor cDermo-1 induces the growth of feathers on skin and scales on the leg.[10]
8
+
9
+ There are two basic types of feather: vaned feathers which cover the exterior of the body, and down feathers which are underneath the vaned feathers. The pennaceous feathers are vaned feathers. Also called contour feathers, pennaceous feathers arise from tracts and cover the entire body. A third, rarer type of feather, the filoplume, is hairlike and (if present in a bird; filoplumes are entirely absent in ratites[11]) is closely associated with pennaceous feathers and is often entirely hidden by them, with one or two filoplumes attached and sprouting from near the same point of the skin as each pennaceous feather, at least on a bird's head, neck and trunk.[12][13] In some passerines, filoplumes arise exposed beyond the pennaceous feathers on the neck.[1] The remiges, or flight feathers of the wing, and rectrices, or flight feathers of the tail, are the most important feathers for flight. A typical vaned feather features a main shaft, called the rachis. Fused to the rachis are a series of branches, or barbs; the barbs themselves are also branched and form the barbules. These barbules have minute hooks called barbicels for cross-attachment. Down feathers are fluffy because they lack barbicels, so the barbules float free of each other, allowing the down to trap air and provide excellent thermal insulation. At the base of the feather, the rachis expands to form the hollow tubular calamus (or quill) which inserts into a follicle in the skin. The basal part of the calamus is without vanes. This part is embedded within the skin follicle and has an opening at the base (proximal umbilicus) and a small opening on the side (distal umbilicus).[14]
10
+
11
+ Hatchling birds of some species have a special kind of natal down feathers (neossoptiles) which are pushed out when the normal feathers (teleoptiles) emerge.[1]
12
+
13
+ Flight feathers are stiffened so as to work against the air in the downstroke but yield in other directions. It has been observed that the orientation pattern of β-keratin fibers in the feathers of flying birds differs from that in flightless birds: the fibers are better aligned along the shaft axis direction towards the tip,[15][16] and the lateral walls of the rachis region show a structure of crossed fibers.[17][18]
14
+
15
+ Feathers insulate birds from water and cold temperatures. They may also be plucked to line the nest and provide insulation to the eggs and young. The individual feathers in the wings and tail play important roles in controlling flight.[17] Some species have a crest of feathers on their heads. Although feathers are light, a bird's plumage weighs two or three times more than its skeleton, since many bones are hollow and contain air sacs. Color patterns serve as camouflage against predators for birds in their habitats, and serve as camouflage for predators looking for a meal. As with fish, the top and bottom colors may be different, in order to provide camouflage during flight. Striking differences in feather patterns and colors are part of the sexual dimorphism of many bird species and are particularly important in selection of mating pairs. In some cases there are differences in the UV reflectivity of feathers across sexes even though no differences in color are noted in the visible range.[19] The wing feathers of male club-winged manakins Machaeropterus deliciosus have special structures that are used to produce sounds by stridulation.[20]
16
+
17
+ Some birds have a supply of powder down feathers which grow continuously, with small particles regularly breaking off from the ends of the barbules. These particles produce a powder that sifts through the feathers on the bird's body and acts as a waterproofing agent and a feather conditioner. Powder down has evolved independently in several taxa and can be found in down as well as in pennaceous feathers. They may be scattered in plumage as in the pigeons and parrots or in localized patches on the breast, belly, or flanks, as in herons and frogmouths. Herons use their bill to break the powder down feathers and to spread them, while cockatoos may use their head as a powder puff to apply the powder.[21] Waterproofing can be lost by exposure to emulsifying agents due to human pollution. Feathers can then become waterlogged, causing the bird to sink. It is also very difficult to clean and rescue birds whose feathers have been fouled by oil spills. The feathers of cormorants soak up water and help to reduce buoyancy, thereby allowing the birds to swim submerged.[22]
18
+
19
+ Bristles are stiff, tapering feathers with a large rachis but few barbs. Rictal bristles are found around the eyes and bill. They may serve a similar purpose to eyelashes and vibrissae in mammals. Although there is as yet no clear evidence, it has been suggested that rictal bristles have sensory functions and may help insectivorous birds to capture prey.[23] In one study, willow flycatchers (Empidonax traillii) were found to catch insects equally well before and after removal of the rictal bristles.[24]
20
+
21
+ Grebes are peculiar in their habit of ingesting their own feathers and feeding them to their young. Observations on their diet of fish and the frequency of feather eating suggest that ingesting feathers, particularly down from their flanks, aids in forming easily ejectable pellets.[25]
22
+
23
+ Contour feathers are not uniformly distributed on the skin of the bird except in some groups such as the penguins, ratites and screamers.[26] In most birds the feathers grow from specific tracts of skin called pterylae; between the pterylae there are regions which are free of feathers called apterylae (or apteria). Filoplumes and down may arise from the apterylae. The arrangement of these feather tracts, pterylosis or pterylography, varies across bird families and has been used in the past as a means for determining the evolutionary relationships of bird families.[27][28]
24
+
25
+ The colors of feathers are produced by pigments, by microscopic structures that can refract, reflect, or scatter selected wavelengths of light, or by a combination of both.
26
+
27
+ Most feather pigments are melanins (brown and beige pheomelanins, black and grey eumelanins) and carotenoids (red, yellow, orange); other pigments occur only in certain taxa – the yellow to red psittacofulvins[29] (found in some parrots) and the red turacin and green turacoverdin (porphyrin pigments found only in turacos).
28
+
29
+ Structural coloration[5][30][31] is involved in the production of blue colors, iridescence, most ultraviolet reflectance and in the enhancement of pigmentary colors. Structural iridescence has been reported[32] in fossil feathers dating back 40 million years. White feathers lack pigment and scatter light diffusely; albinism in birds is caused by defective pigment production, though structural coloration will not be affected (as can be seen, for example, in blue-and-white budgerigars).
30
+
31
+ The blues and bright greens of many parrots are produced by constructive interference of light reflecting from different layers of structures in feathers. In the case of green plumage, in addition to yellow, the specific feather structure involved is called by some the Dyck texture.[33][34] Melanin is often involved in the absorption of light; in combination with a yellow pigment, it produces a dull olive-green.
32
+
33
+ In some birds, feather colors may be created, or altered, by secretions from the uropygial gland, also called the preen gland. The yellow bill colors of many hornbills are produced by such secretions. It has been suggested that there are other color differences that may be visible only in the ultraviolet region,[21] but studies have failed to find evidence.[35] The oil secretion from the uropygial gland may also have an inhibitory effect on feather bacteria.[36]
34
+
35
+ The red, orange and yellow colors of many feathers are caused by various carotenoids. Carotenoid-based pigments might be honest signals of fitness because they are derived from special diets and hence might be difficult to obtain,[37][38] and/or because carotenoids are required for immune function and hence sexual displays come at the expense of health.[39]
36
+
37
+ A bird's feathers undergo wear and tear and are replaced periodically during the bird's life through molting. New feathers, known while developing as blood feathers or pin feathers depending on the stage of growth, are formed through the same follicles from which the old ones were shed. The presence of melanin in feathers increases their resistance to abrasion.[40] One study found that melanin-based feathers degraded more quickly under bacterial action than unpigmented or carotenoid-pigmented feathers, even when the unpigmented feathers came from the same species.[41] However, another study the same year compared the action of bacteria on the pigmentation of two song sparrow species and observed that the darker-pigmented feathers were more resistant; the authors cited other research, also published in 2004, stating that increased melanin provided greater resistance. They observed that the greater resistance of the darker birds confirmed Gloger's rule.[42]
38
+
39
+ Although sexual selection plays a major role in the development of feathers, in particular their color, it is not the only factor at work. New studies suggest that the feathers of birds also influence many important aspects of avian behavior, such as the height at which different species build their nests. Because females are the primary caregivers in many species, selection has favored duller-colored plumage in females so that they can blend into the nesting environment; the position of the nest, and how likely it is to come under predation, have exerted constraints on female birds' plumage.[43] A species that nests on the ground, rather than in the tree canopy, needs much duller colors in order not to attract attention to the nest. The nest-height study found that birds nesting in tree canopies suffered many more predator attacks because of the brighter colors displayed by the females.[43] Another evolutionary influence on why birds' feathers are so colorful and patterned may be the vegetation and flowers that thrive around them. Because most bird species rely on some degree of camouflage to blend into their environment, a species living in a habitat full of colors and patterns would tend to evolve coloration that blends in, so as to avoid being eaten. Birds' feathers show a large range of colors, even exceeding the variety of many plant, leaf and flower colors.[44]
40
+
41
+ The feather surface is the home for some ectoparasites, notably feather lice (Phthiraptera) and feather mites. Feather lice typically live on a single host and can move only from parents to chicks, between mating birds, and, occasionally, by phoresy. This life history has resulted in most of the parasite species being specific to the host and coevolving with the host, making them of interest in phylogenetic studies.[45]
42
+
43
+ Feather holes are chewing traces of lice (most probably Brueelia spp.) on the wing and tail feathers. They were described on barn swallows, and because they are easy to count, many evolutionary, ecological, and behavioral publications use them to quantify the intensity of infestation.
44
+
45
+ Parasitic cuckoos which grow up in the nests of other species also have host-specific feather lice and these seem to be transmitted only after the young cuckoos leave the host nest.[46]
46
+
47
+ Birds maintain their feather condition by preening and bathing in water or dust. It has been suggested that a peculiar behavior of birds, anting, in which ants are introduced into the plumage, helps to reduce parasites, but no supporting evidence has been found.[47]
48
+
49
+ Feathers have a number of utilitarian, cultural and religious uses.
50
+
51
+ Feathers are both soft and excellent at trapping heat; thus, they are sometimes used in high-class bedding, especially pillows, blankets, and mattresses. They are also used as filling for winter clothing and outdoor bedding, such as quilted coats and sleeping bags. Goose and eider down have great loft, the ability to expand from a compressed, stored state to trap large amounts of compartmentalized, insulating air.[48]
52
+
53
+ Bird feathers have long been used for fletching arrows. Colorful feathers such as those belonging to pheasants have been used to decorate fishing lures.
54
+
55
+ Feathers of large birds (most often geese) have been and are used to make quill pens. The word pen itself is derived from the Latin penna, meaning feather.[49] The French word plume can mean either feather or pen.
56
+
57
+ Feathers are also valuable in aiding the identification of species in forensic studies, particularly in bird strikes to aircraft. The ratios of hydrogen isotopes in feathers help in determining the geographic origins of birds.[50] Feathers may also be useful in the non-destructive sampling of pollutants.[51]
58
+
59
+ The poultry industry produces a large amount of feathers as waste, which, like other forms of keratin, are slow to decompose. Feather waste has been used in a number of industrial applications, as a medium for culturing microbes,[52] in biodegradable polymers,[53] and in the production of enzymes.[54] Feather proteins have been tried as an adhesive for wood board.[55]
60
+
61
+ Some groups of Native people in Alaska have used ptarmigan feathers as temper (non-plastic additives) in pottery manufacture since the first millennium BC in order to promote thermal shock resistance and strength.[56]
62
+
63
+ Historically, the hunting of birds for decorative and ornamental feathers (including in Victorian fashion) has endangered some species and contributed to the extinction of others.[57] For instance, South American hummingbird feathers were used in the past to dress some of the miniature birds featured in singing bird boxes.
64
+
65
+ Eagle feathers have great cultural and spiritual value to American Indians in the US and First Nations peoples in Canada as religious objects. In the United States the religious use of eagle and hawk feathers is governed by the eagle feather law, a federal law limiting the possession of eagle feathers to certified and enrolled members of federally recognized Native American tribes.
66
+
67
+ In South America, brews made from the feathers of condors are used in traditional medications.[58] In India, feathers of the Indian peacock have been used in traditional medicine for snakebite, infertility, and coughs.[59][60]
68
+
69
+ During the 18th, 19th, and early 20th centuries, there was a booming international trade in plumes for extravagant women's hats and other headgear. Frank Chapman noted in 1886 that feathers of as many as 40 species of birds were used in about three-fourths of the 700 ladies' hats that he observed in New York City.[61] This trade caused severe losses to bird populations (for example, egrets and whooping cranes). Conservationists led a major campaign against the use of feathers in hats. This contributed to passage of the Lacey Act in 1900, and to changes in fashion. The ornamental feather market then largely collapsed.[62][63]
70
+
71
+ More recently, rooster plumage has become a popular trend as a hairstyle accessory, with feathers formerly used as fishing lures now being used to provide color and style to hair.[64] Today, feathers used in fashion and in military headdresses and clothes are obtained as a waste product of poultry farming, including chickens, geese, turkeys, pheasants, and ostriches. These feathers are dyed and manipulated to enhance their appearance, as poultry feathers are naturally often dull in appearance compared to the feathers of wild birds.
72
+
73
+ Feather products manufacturing in Europe has declined in the last 60 years, mainly due to competition from Asia.
74
+ Feathers have adorned hats at many prestigious events such as weddings and Ladies Day at racecourses (Royal Ascot).
75
+
76
+ The functional view on the evolution of feathers has traditionally focused on insulation, flight and display. Discoveries of non-flying Late Cretaceous feathered dinosaurs in China,[65] however, suggest that flight could not have been the original primary function as the feathers simply would not have been capable of providing any form of lift.[66][67] There have been suggestions that feathers may have had their original function in thermoregulation, waterproofing, or even as sinks for metabolic wastes such as sulphur.[68] Recent discoveries are argued to support a thermoregulatory function, at least in smaller dinosaurs.[69][70] Some researchers even argue that thermoregulation arose from bristles on the face that were used as tactile sensors.[71] While feathers have been suggested as having evolved from reptilian scales, there are numerous objections to that idea, and more recent explanations have arisen from the paradigm of evolutionary developmental biology.[2] Theories of the scale-based origins of feathers suggest that the planar scale structure was modified for development into feathers by splitting to form the webbing; however, the actual developmental process involves a tubular structure arising from a follicle, with the tube splitting longitudinally to form the webbing.[1][2] The number of feathers per unit area of skin is higher in smaller birds than in larger birds, and this trend points to their important role in thermal insulation, since smaller birds lose more heat due to the relatively larger surface area in proportion to their body weight.[5] The miniaturization of birds also played a role in the evolution of powered flight.[72] The coloration of feathers is believed to have evolved primarily in response to sexual selection. In one fossil specimen of the paravian Anchiornis huxleyi, the feathers are so well preserved that the melanosome (pigment organelle) structure can be observed. By comparing the shape of the fossil melanosomes to melanosomes from extant birds, the color and pattern of the feathers on Anchiornis could be determined.[73] Anchiornis was found to have black-and-white-patterned feathers on the forelimbs and hindlimbs, with a reddish-brown crest. This pattern is similar to the coloration of many extant bird species, which use plumage coloration for display and communication, including sexual selection and camouflage. It is likely that non-avian dinosaur species utilized plumage patterns for functions similar to those of modern birds before the origin of flight. In many cases, the physiological condition of the birds (especially males) is indicated by the quality of their feathers, and this is used (by the females) in mate choice.[74][75] Additionally, when comparing different Ornithomimus edmontonicus specimens, older individuals were found to have a pennibrachium (a wing-like structure consisting of elongate feathers), while younger ones did not. This suggests that the pennibrachium was a secondary sex characteristic and likely had a sexual function.[76]
77
+
78
+ Feathers and scales are made up of two distinct forms of keratin, and it was long thought that each type of keratin was exclusive to each skin structure (feathers and scales). However, a study published in 2006 confirmed the presence of feather keratin in the early stages of development of American alligator scales. This type of keratin, previously thought to be specific to feathers, is suppressed during embryological development of the alligator and so is not present in the scales of mature alligators. The presence of this homologous keratin in both birds and crocodilians indicates that it was inherited from a common ancestor. This may suggest that crocodilian scales, bird and dinosaur feathers, and pterosaur pycnofibres are all developmental expressions of the same primitive archosaur skin structures, and therefore that feathers and pycnofibres could be homologous.[77]
79
+
80
+ Several non-avian dinosaurs had feathers on their limbs that would not have functioned for flight.[65][2] One theory suggests that feathers originally evolved on dinosaurs due to their insulation properties; then, small dinosaur species which grew longer feathers may have found them helpful in gliding, leading to the evolution of proto-birds like Archaeopteryx and Microraptor zhaoianus. Another theory posits that the original adaptive advantage of early feathers was their pigmentation or iridescence, contributing to sexual preference in mate selection.[78] Dinosaurs that had feathers or protofeathers include Pedopenna daohugouensis[79] and Dilong paradoxus, a tyrannosauroid which is 60 to 70 million years older than Tyrannosaurus rex.[80]
81
+
82
+ The majority of dinosaurs known to have had feathers or protofeathers are theropods; however, featherlike "filamentous integumentary structures" are also known from the ornithischian dinosaurs Tianyulong and Psittacosaurus.[81] The exact nature of these structures is still under study. However, stage-1 feathers (see Evolutionary stages section below) such as those seen in these two ornithischians are thought to have functioned in display.[82] In 2014, the ornithischian Kulindadromeus was reported as having structures resembling stage-3 feathers.[83]
83
+
84
+ Since the 1990s, dozens of feathered dinosaurs have been discovered in the clade Maniraptora, which includes the clade Avialae and the closest relatives of birds, Oviraptorosauria and Deinonychosauria. In 1998, the discovery of a feathered oviraptorosaurian, Caudipteryx zoui, challenged the notion of feathers as a structure exclusive to Avialae.[84] Buried in the Yixian Formation in Liaoning, China, C. zoui lived during the Early Cretaceous Period. Its integumentary structures, present on the forelimbs and tail, have been accepted as pennaceous vaned feathers based on the rachis and herringbone pattern of the barbs. In the clade Deinonychosauria, the continued divergence of feathers is also apparent in the families Troodontidae and Dromaeosauridae. Branched feathers with rachis, barbs, and barbules were discovered in many members including Sinornithosaurus millenii, a dromaeosaurid found in the Yixian formation (124.6 MYA).[85]
85
+
86
+ Previously, a temporal paradox existed in the evolution of feathers: theropods with highly derived bird-like characteristics occurred at a later time than Archaeopteryx, suggesting that birds arose before their presumed ancestors. However, the discovery of Anchiornis huxleyi in the Late Jurassic Tiaojishan Formation (160 MYA) in western Liaoning in 2009[86][87]
87
+ resolved this paradox. By predating Archaeopteryx, Anchiornis demonstrates that theropods with modern-looking feathers existed before the earliest known birds, providing insight into the dinosaur-bird transition. The specimen shows distribution of large pennaceous feathers on the forelimbs and tail, implying that pennaceous feathers spread to the rest of the body at an earlier stage in theropod evolution.[88] The development of pennaceous feathers did not replace earlier filamentous feathers. Filamentous feathers are preserved alongside modern-looking flight feathers, including some with modifications found in the feathers of extant diving birds, in 80-million-year-old amber from Alberta.[89]
88
+
89
+ Two small wings trapped in amber dating to 100 mya show that plumage existed in some bird predecessors. The wings most probably belonged to enantiornithines, a diverse group of avian dinosaurs.[90][91]
90
+
91
+ A large phylogenetic analysis of early dinosaurs by Matthew Baron, David B. Norman and Paul Barrett (2017) found that Theropoda is more closely related to Ornithischia than to Sauropodomorpha, the two forming sister groups within the clade Ornithoscelida. The study also suggested that if the feather-like structures of theropods and ornithischians are of common evolutionary origin, then it is possible that feathers were restricted to Ornithoscelida. If so, the origin of feathers would likely have occurred as early as the Middle Triassic.[92]
92
+
93
+ Several studies of feather development in the embryos of modern birds, coupled with the distribution of feather types among various prehistoric bird precursors, have allowed scientists to attempt a reconstruction of the sequence in which feathers first evolved and developed into the types found on modern birds.
94
+
95
+ Feather evolution was broken down into a series of stages by Xu and Guo in 2009.[82]
96
+
97
+ However, Foth (2011) showed that some of these purported stages (stages 2 and 5 in particular) are likely simply artifacts of preservation caused by the way fossil feathers are crushed and the feather remains or imprints are preserved. Foth re-interpreted stage 2 feathers as crushed or misidentified feathers of at least stage 3, and stage 5 feathers as crushed stage 6 feathers.[93]
98
+
99
+ The following simplified diagram of dinosaur relationships follows these results, and shows the likely distribution of plumaceous (downy) and pennaceous (vaned) feathers among dinosaurs and prehistoric birds. The diagram follows one presented by Xu and Guo (2009)[82] modified with the findings of Foth (2011).[93] The numbers accompanying each name refer to the presence of specific feather stages. Note that 's' indicates the known presence of scales on the body.
100
+
101
+ Heterodontosauridae (1)
102
+
103
+ Thyreophora (s)
104
+
105
+ Ornithopoda (s)
106
+
107
+ Psittacosauridae (s, 1)
108
+
109
+ Ceratopsidae (s)
110
+
111
+ Sauropodomorpha (s)
112
+
113
+ Aucasaurus (s)
114
+
115
+ Carnotaurus (s)
116
+
117
+ Ceratosaurus (s)
118
+
119
+ Dilong (3?)
120
+
121
+ Other tyrannosauroids (s, 1)
122
+
123
+ Juravenator (s, 3?)
124
+
125
+ Sinosauropteryx (3+)
126
+
127
+ Therizinosauria (1, 3+)
128
+
129
+ Alvarezsauridae (3?)
130
+
131
+ Oviraptorosauria (4, 6)
132
+
133
+ Troodontidae (3+, 6)
134
+
135
+ Other dromaeosaurids
136
+
137
+ Sinornithosaurus (3+, 6)
138
+
139
+ Microraptor (3+, 6, 7)
140
+
141
+ Scansoriopterygidae (3+, 6, 8)
142
+
143
+ Archaeopterygidae (3+, 6, 7)
144
+
145
+ Jeholornis (6, 7)
146
+
147
+ Confuciusornis (4, 6, 7, 8)
148
+
149
+ Enantiornithes (4, 6, 7, 8)
150
+
151
+ Neornithes (4, 6, 7, 8)
152
+
153
+ Pterosaurs were long known to have filamentous fur-like structures covering their body known as pycnofibres, which were generally considered distinct from the "true feathers" of birds and their dinosaur kin. However, a 2018 study of two small, well-preserved pterosaur fossils from the Jurassic of Inner Mongolia, China indicated that pterosaurs were covered in an array of differently-structured pycnofibres (rather than just filamentous ones), with several of these structures displaying diagnostic features of feathers, such as non-veined grouped filaments and bilaterally branched filaments, both of which were originally thought to be exclusive to birds and other maniraptoran dinosaurs. Given these findings, it is possible that feathers have deep evolutionary origins in ancestral archosaurs, though there is also a possibility that these structures independently evolved to resemble bird feathers via convergent evolution. Mike Benton, the study's senior author, lent credence to the former theory, stating "We couldn’t find any anatomical evidence that the four pycnofiber types are in any way different from the feathers of birds and dinosaurs. Therefore, because they are the same, they must share an evolutionary origin, and that was about 250 million years ago, long before the origin of birds." [94][95][96][97]
154
+
en/4677.html.txt ADDED
@@ -0,0 +1,174 @@
1
+
2
+
3
+ Pluto (Latin: Plūtō; Greek: Πλούτων, Ploútōn) was the ruler of the underworld in classical mythology. The earlier name for the god was Hades, which became more common as the name of the underworld itself. In ancient Greek religion and mythology, Pluto represents a more positive concept of the god who presides over the afterlife. Ploutōn was frequently conflated with Ploutos, the Greek god of wealth, because mineral wealth was found underground, and because as a chthonic god Pluto ruled the deep earth that contained the seeds necessary for a bountiful harvest.[1] The name Ploutōn came into widespread usage with the Eleusinian Mysteries, in which Pluto was venerated as both a stern ruler and a loving husband to Persephone. The couple received souls in the afterlife and are invoked together in religious inscriptions, being referred to as Plouton and as Kore respectively. Hades, by contrast, had few temples and religious practices associated with him, and he is portrayed as the dark and violent abductor of Persephone.
4
+
5
+ Pluto and Hades differ in character, but they are not distinct figures and share two dominant myths. In Greek cosmogony, the god received the rule of the underworld in a three-way division of sovereignty over the world, with his brother Zeus ruling the sky and his other brother Poseidon sovereign over the sea. His central narrative in myth is of him abducting Persephone to be his wife and the queen of his realm.[2] Plouton as the name of the ruler of the underworld first appears in Greek literature of the Classical period, in the works of the Athenian playwrights and of the philosopher Plato, who is the major Greek source on its significance. Under the name Pluto, the god appears in other myths in a secondary role, mostly as the possessor of a quest-object, and especially in the descent of Orpheus or other heroes to the underworld.[3]
6
+
7
+ Plūtō ([ˈpluːtoː]; genitive Plūtōnis) is the Latinized form of the Greek Plouton. Pluto's Roman equivalent is Dis Pater, whose name is most often taken to mean "Rich Father" and is perhaps a direct translation of Plouton. Pluto was also identified with the obscure Roman Orcus, like Hades the name of both a god of the underworld and the underworld as a place. The borrowed Greek name Pluto is sometimes used for the ruler of the dead in Latin literature, leading some mythology handbooks to assert misleadingly that Pluto was the Roman counterpart of Hades.[4] Pluto (Pluton in French and German, Plutone in Italian) becomes the most common name for the classical ruler of the underworld in subsequent Western literature and other art forms.
8
+
9
+ The name Plouton does not appear in Greek literature of the Archaic period.[5] In Hesiod's Theogony, the six children of Cronus and Rhea are Zeus, Hera, Poseidon, Hades, Demeter, and Hestia. The male children divide the world into three realms. Hades takes Persephone by force from her mother Demeter, with the consent of Zeus. Ploutos, "Wealth," appears in the Theogony as the child of Demeter and Iasion: "fine Plutus, who goes upon the whole earth and the broad back of the sea, and whoever meets him and comes into his hands, that man he makes rich, and he bestows much wealth upon him." The union of Demeter and Iasion, described also in the Odyssey,[6] took place in a fallow field that had been ploughed three times, in what seems to be a reference to a ritual copulation or sympathetic magic to ensure the earth's fertility.[7] "The resemblance of the name Ploutos to Plouton ...," it has been noted, "cannot be accidental. Plouton is lord of the dead, but as Persephone's husband he has serious claims to the powers of fertility."[8] Demeter's son Plutus merges in the narrative tradition with her son-in-law Pluto, redefining the implacable chariot-driver Hades whose horses trample the flowering earth.[9]
10
+
11
+ That the underworld god was associated early on with success in agricultural activity is already evident in Hesiod's Works and Days, line 465-469: "Pray to Zeus of the Earth and to pure Demeter to make Demeter's holy grain sound and heavy, when first you begin ploughing, when you hold in your hand the end of the plough-tail and bring down your stick on the backs of the oxen as they draw on the pole-bar by the yoke-straps."[10]
12
+
13
+ Plouton was one of several euphemistic names for Hades, described in the Iliad as the god most hateful to mortals.[11] Plato says that people prefer the name Plouton, "giver of wealth," because the name of Hades is fear-provoking.[12] The name was understood as referring to "the boundless riches of the earth, both the crops on its surface—he was originally a god of the land—and the mines hidden within it."[13] What is sometimes taken as "confusion" of the two gods Plouton and Ploutos ("Wealth") held or acquired a theological significance in antiquity. As a lord of abundance or riches, Pluto expresses the aspect of the underworld god that was positive, symbolized in art by the "horn of plenty" (cornucopia),[14] by means of which Plouton is distinguished from the gloomier Hades.[15]
14
+
15
+ The Roman poet Ennius (ca. 239–169 BC), the leading figure in the Hellenization of Latin literature, considered Pluto a Greek god to be explained in terms of the Roman equivalents Dis Pater and Orcus.[16] It is unclear whether Pluto had a literary presence in Rome before Ennius. Some scholars think that rituals and beliefs pertaining to Pluto entered Roman culture with the establishment of the Saecular Games in 249 BC, and that Dis pater was only a translation of Plouton.[17] In the mid-1st century BC, Cicero identifies Pluto with Dis, explaining that "The earth in all its power and plenty is sacred to Father Dis, a name which is the same as Dives, 'The Wealthy One,' as is the Greek Plouton. This is because everything is born of the earth and returns to it again."[18]
16
+
17
+ During the Roman Imperial era, the Greek geographer Strabo (1st century AD) makes a distinction between Pluto and Hades. In writing of the mineral wealth of ancient Iberia (Roman Spain), he says that among the Turdetani, it is "Pluto, and not Hades, who inhabits the region down below."[19] In the discourse On Mourning by the Greek author Lucian (2nd century AD), Pluto's "wealth" is the dead he rules over in the abyss (chasma); the name Hades is reserved for the underworld itself.[20]
18
+
19
+ In Greek religious practice, Pluto is sometimes seen as the "chthonic Zeus" (Zeus Chthonios[21] or Zeus Catachthonios[22]), or at least as having functions or significance equivalent to those of Zeus but pertaining to the earth or underworld.[23] In ancient Roman and Hellenistic religion, Pluto was identified with a number of other deities, including Summanus, the Roman god of nocturnal thunder;[24] Februus, the Roman god from whose purification rites the month of February takes its name;[25] the syncretic god Serapis, regarded as Pluto's Egyptian equivalent;[26] and the Semitic god Muth (Μούθ). Muth was described by Philo of Byblos as the equivalent of both Thanatos (Death personified) and Pluto.[27] The ancient Greeks did not regard Pluto as "death" per se.[28]
20
+
21
+ The best-known myth involving Pluto or Hades is the abduction of Persephone, also known as Kore ("the Maiden"). The earliest literary versions of the myth are a brief mention in Hesiod's Theogony and the extended narrative of the Homeric Hymn to Demeter; in both these works, the ruler of the underworld is named as Hades ("the Hidden One"). Hades is an unsympathetic figure, and Persephone's unwillingness is emphasized.[29] Increased usage of the name Plouton in religious inscriptions and literary texts reflects the influence of the Eleusinian Mysteries, which treated Pluto and Persephone as a divine couple who received initiates in the afterlife; as such, Pluto was disassociated from the "violent abductor" of Kore.[30] Two early works that give the abductor god's name as Pluto are the Greek mythography traditionally known as the Library of "Apollodorus" (1st century BC)[31] and the Latin Fables of Hyginus (ca. 64 BC–AD 17).[32]
22
+
23
+ The most influential version of the abduction myth is that of Ovid (d. 17 or 18 AD), who tells the story in both the Metamorphoses (Book 5) and the Fasti (Book 4).[33] Another major retelling, also in Latin, is the long unfinished poem De raptu Proserpinae ("On the Abduction of Proserpina") by Claudian (d. 404 AD). Ovid uses the name Dis, not Pluto in these two passages,[34] and Claudian uses Pluto only once; translators and editors, however, sometimes supply the more familiar "Pluto" when other epithets appear in the source text.[35] The abduction myth was a popular subject for Greek and Roman art, and recurs throughout Western art and literature, where the name "Pluto" becomes common (see Pluto in Western art and literature below). Narrative details from Ovid and Claudian influence these later versions in which the abductor is named as Pluto, especially the role of Venus and Cupid in manipulating Pluto with love and desire.[36] Throughout the Middle Ages and Renaissance, and certainly by the time of Natale Conti's influential Mythologiae (1567), the traditions pertaining to the various rulers of the classical underworld coalesced into a single mythology that made few if any distinctions among Hades, Pluto, Dis, and Orcus.
24
+
25
+ Unlike his freely procreating brothers Zeus and Poseidon, Pluto is monogamous, and is rarely said to have children.[37] In Orphic texts,[38] the chthonic nymph Melinoe is the daughter of Persephone by Zeus disguised as Pluto,[39] and the Eumenides ("The Kindly Ones") are the offspring of Persephone and Zeus Chthonios, often identified as Pluto.[40] The Augustan poet Vergil says that Pluto is the father of Allecto the Fury, whom he hates.[41] The lack of a clear distinction between Pluto and "chthonic Zeus" confuses the question of whether in some traditions, now obscure, Persephone bore children to her husband. In the late 4th century AD, Claudian's epic on the abduction motivates Pluto with a desire for children. The poem is unfinished, however, and anything Claudian may have known of these traditions is lost.[42]
26
+
27
+ Justin Martyr (2nd century AD) alludes to children of Pluto, but neither names nor enumerates them.[43] Hesychius (5th century AD) mentions a "son of Pluto."[44] In his 14th-century mythography, Boccaccio records a tradition in which Pluto was the father of the divine personification Veneratio ("Reverence"), noting that she had no mother because Proserpina (the Latin name of Persephone) was sterile.[45]
28
+
29
+ In The Faerie Queene (1590s), Edmund Spenser invents a daughter for Pluto whom he calls Lucifera.[46] The character's name was taken from the 16th-century mythography of Natale Conti, who used it as the Latin translation of Greek phosphor, "light-bearer," a regular epithet of Hecate.[47] Spenser incorporated aspects of the mysteries into The Faerie Queene.[48]
30
+
31
+ Orpheus was regarded as a founder and prophet of the mysteries called "Orphic," "Dionysiac," or "Bacchic." Mythologized for his ability to entrance even animals and trees with his music, he was also credited in antiquity with the authorship of the lyrics that have survived as the Orphic Hymns, among them a hymn to Pluto. Orpheus's voice and lyre-playing represented a medium of revelation or higher knowledge for the mystery cults.[49]
32
+
33
+ In his central myth, Orpheus visits the underworld in the hope of retrieving his bride, Eurydice, relying on the power of his music to charm the king and queen of Hades. Greek narratives of Orpheus's descent and performance typically name the ruler of the underworld as Plouton, as for instance in the Bibliotheca.[50] The myth demonstrates the importance of Pluto "the Rich" as the possessor of a quest-object. Orpheus performing before Pluto and Persephone was a common subject of ancient and later Western literature and art, and one of the most significant mythological themes of the classical tradition.[51]
34
+
35
+ The demonstration of Orpheus's power depends on the normal obduracy of Pluto; the Augustan poet Horace describes him as incapable of tears.[52] Claudian, however, portrays the steely god as succumbing to Orpheus's song so that "with iron cloak he wipes his tears" (ferrugineo lacrimas deterget amictu), an image renewed by Milton in Il Penseroso (106–107): "Such notes ... / Drew iron tears down Pluto's cheek."[53]
36
+
37
+ The Greek writer Lucian (ca. 125–after 180 AD) suggests that Pluto's love for his wife gave the ruler of the underworld a special sympathy or insight into lovers parted by death.[54] In one of Lucian's Dialogues of the Dead, Pluto questions Protesilaus, the first Greek hero killed in the Trojan War, who wishes to return to the world of the living. "You are then in love with life?" Pluto asks. "Such lovers we have here in plenty; but they love an object, which none of them can obtain." Protesilaus explains, like an Orpheus in reverse, that he has left behind a young bride whose memory even the Lethe's waters of forgetting have not erased from him. Pluto assures him that death will reunite them someday, but Protesilaus argues that Pluto himself should understand love and its impatience, and reminds the king of his grant to Orpheus and to Alcestis, who took her husband's place in death and then was permitted at the insistence of Heracles to return to him. When Persephone intercedes for the dead warrior, Pluto grants the request at once, though allowing only one day for the reunion.[55]
38
+
39
+ As Pluto gained importance as an embodiment of agricultural wealth within the Eleusinian Mysteries, from the 5th century BC onward the name Hades was increasingly reserved for the underworld as a place.[56] Neither Hades nor Pluto was one of the traditional Twelve Olympians, and Hades seems to have received limited cult,[57] perhaps only at Elis, where the temple was opened once a year.[58] During the time of Plato, the Athenians periodically honored the god called Plouton with the "strewing of a couch" (tên klinên strôsai).[59] At Eleusis, Plouton had his own priestess.[60] Pluto was worshipped with Persephone as a divine couple at Knidos, Ephesos, Mytilene, and Sparta as well as at Eleusis, where they were known simply as God (Theos) and Goddess (Thea).[61]
40
+
41
+ In the ritual texts of the mystery religions preserved by the so-called Orphic or Bacchic gold tablets, from the late 5th century BC onward[62] the name Hades appears more frequently than Plouton, but in reference to the underground place:[63] Plouton is the ruler who presides over it in a harmonious partnership[64] with Persephone.[65] By the end of the 4th century BC, the name Plouton appears in Greek metrical inscriptions.[66] Two fragmentary tablets greet Pluto and Persephone jointly,[67] and the divine couple appear as welcoming figures in a metrical epitaph:
42
+
43
+ I know that even below the earth, if there is indeed a reward for the worthy ones, the first and foremost honors, nurse,[68] shall be yours, next to Persephone and Pluto.[69]
44
+
45
+ Hesychius identifies Pluto with Eubouleus,[70] but other ancient sources distinguish between these two underworld deities. In the Mysteries Eubouleus plays the role of a torchbearer, possibly a guide for the initiate's return.[71] In the view of Lewis Richard Farnell, Eubouleus was originally a title referring to the "good counsel" the ruler of the underworld was able to give and which was sought at Pluto's dream oracles; by the 2nd century BC, however, he had acquired a separate identity.[72]
46
+
47
+ The Orphic Hymn to Pluto addresses the god as "strong-spirited" and the "All-Receiver" who commands death and is the master of mortals. His titles are given as Zeus Chthonios and Euboulos ("Good Counsel").[73] In the hymn's topography, Pluto's dwelling is in Tartarus, simultaneously a "meadow" and "thick-shaded and dark," where the Acheron encircles "the roots of the earth." Hades is again the name of the place, here described as "windless," and its gates, through which Pluto carried "pure Demeter's daughter" as his bride, are located in an Attic cave within the district of Eleusis. The route from Persephone's meadow to Hades crosses the sea. The hymn concludes:
48
+
49
+ You alone were born to judge deeds obscure and conspicuous. Holiest and illustrious ruler of all, frenzied god, You delight in the worshiper's respect and reverence. Come with favor and joy to the initiates. I summon you.[74]
50
+
51
+ The hymn is one of several examples of Greco-Roman prayer that express a desire for the presence of a deity, and has been compared to a similar epiclesis in the Acts of Thomas.[75]
52
+
53
+ The names of both Hades and Pluto appear also in the Greek Magical Papyri and curse tablets, with Hades typically referring to the underworld as a place, and Pluto regularly invoked as the partner of Persephone.[76] Five Latin curse tablets from Rome, dating to the mid-1st century BC, promise Persephone and Pluto an offering of "dates, figs, and a black pig" if the curse is fulfilled by the desired deadline. The pig was a characteristic animal sacrifice to chthonic deities, whose victims were almost always black or dark in color.[77]
54
+
55
+ A set of curse tablets written in Doric Greek and found in a tomb addresses a Pasianax, "Lord to All,"[78] sometimes taken as a title of Pluto,[79] but more recently thought to be a magical name for the corpse.[80] Pasianax is found elsewhere as an epithet of Zeus, or in the tablets may invoke a daimon like Abrasax.[81]
56
+
57
+ A sanctuary dedicated to Pluto was called a ploutonion (Latin plutonium). The complex at Eleusis for the mysteries had a ploutonion regarded as the birthplace of the divine child Ploutos, in another instance of conflation or close association of the two gods.[82] Greek inscriptions record an altar of Pluto, which was to be "plastered", that is, resurfaced for a new round of sacrifices at Eleusis.[83] One of the known ploutonia was in the sacred grove between Tralleis and Nysa, where a temple of Pluto and Persephone was located. Visitors sought healing and dream oracles.[84] The ploutonion at Hierapolis, Phrygia, was connected to the rites of Cybele, but during the Roman Imperial era was subsumed by the cult of Apollo, as confirmed by archaeological investigations during the 1960s. It too was a dream oracle.[85] The sites often seem to have been chosen because the presence of naturally occurring mephitic vapors was thought to indicate an opening to the underworld.[86] In Italy, Avernus was considered an entrance to the underworld that produced toxic vapors, but Strabo seems not to think that it was a ploutonion.[87]
58
+
59
+ Kevin Clinton attempted to distinguish the iconography of Hades, Plouton, Ploutos, and the Eleusinian Theos in 5th-century vase painting that depicts scenes from or relating to the mysteries. In Clinton's schema, Plouton is a mature man, sometimes even white-haired; Hades is also usually bearded and mature, but his darkness is emphasized in literary descriptions, represented in art by dark hair. Plouton's most common attribute is a sceptre, but he also often holds a full or overflowing cornucopia; Hades sometimes holds a horn, but it is depicted with no contents and should be understood as a drinking horn. Unlike Plouton, Hades never holds agrarian attributes such as stalks of grain. His chest is usually bare or only partly covered, whereas Plouton is fully robed (exceptions, however, are admitted by the author). Plouton stands, often in the company of both Demeter and Kore, or sometimes one of the goddesses, but Hades almost always sits or reclines, usually with Persephone facing him.[88] "Confusion and disagreement" about the interpretation of these images remain.[89]
60
+
61
+ Attributes of Pluto mentioned in the Orphic Hymn to Pluto are his scepter, keys, throne, and horses. In the hymn, the keys are connected to his capacity for giving wealth to humanity, specifically the agricultural wealth of "the year's fruits."
62
+
63
+ Pausanias explains the significance of Pluto's key in describing a wondrously carved cedar chest at the Temple of Hera in Elis. Numerous deities are depicted, with one panel grouping Dionysus, Persephone, the nymphs and Pluto. Pluto holds a key because "they say that what is called Hades has been locked up by Pluto, and that nobody will return back again therefrom."[91] Natale Conti cites Pausanias in noting that keys are an attribute of Pluto as the scepter is of Jove (Greek Zeus) and the trident of Neptune (Poseidon).[92]
64
+
65
+ A golden key (chrusea klês) was laid on the tongue of initiates by priests at Eleusis[93] and was a symbol of the revelation they were obligated to keep secret.[94] A key is among the attributes of other infernal deities such as Hecate, Anubis, and Persephone, and those who act as guardians or timekeepers, such as Janus and Aion.[95] Aeacus (Aiakos), one of the three mortal kings who became judges in the afterlife, is also a kleidouchos (κλειδοῦχος), "holder of the keys," and a priestly doorkeeper in the court of Pluto and Persephone.[96]
66
+
67
+ According to the Stoic philosopher Cornutus (1st century AD), Pluto wore a wreath of phasganion, more often called xiphion,[97] traditionally identified as a type of gladiolus.[98] Dioscorides recorded medical uses for the plant. For extracting stings and thorns, xiphion was mixed with wine and frankincense to make a cataplasm. The plant was also used as an aphrodisiac[99] and contraceptive.[100] It grew in humid places. In an obscure passage, Cornutus seems to connect Pluto's wearing of phasganion to an etymology for Avernus, which he derives from the word for "air," perhaps through some association with the color glaukos, "bluish grey," "greenish" or "sea-colored," which might describe the plant's leaves. Because the color could describe the sky, Cornutus regularly gives it divine connotations.[101] Pluto's twin sister was named Glauca.
68
+
69
+ Ambiguity of color is characteristic of Pluto. Although both he and his realm are regularly described as dark, black, or gloomy, the god himself is sometimes seen as pale or having a pallor. Martianus Capella (5th century) describes him as both "growing pale in shadow, a fugitive from light" and actively "shedding darkness in the gloom of Tartarean night," crowned with a wreath made of ebony as suitable for the kingdom he governs.[102] The horses of Pluto are usually black, but Ovid describes them as "sky-colored" (caeruleus, from caelum, "sky"), which might be blue, greenish-blue, or dark blue.[103]
70
+
71
+ The Renaissance mythographer Natale Conti says wreaths of narcissus, maidenhair fern (adianthus), and cypress were given to Pluto.[104] In the Homeric Hymn to Demeter, Gaia (Earth) produced the narcissus at Zeus's request as a snare for Persephone; when she grasps it, a chasm opens up and the "Host to Many" (Hades) seizes her.[105] Narcissus wreaths were used in early times to crown Demeter and Persephone, as well as the Furies (Eumenides).[106] The flower was associated with narcotic drugginess (narkê, "torpor"),[107] erotic fascination,[108] and imminent death;[109] to dream of crowning oneself with narcissus was a bad sign.[110] In the myth of Narcissus, the flower is created when a beautiful, self-absorbed youth rejects sexuality and is condemned to perpetual self-love along the Styx.[111]
72
+
73
+ Conti's inclusion of adianthus (Adiantum in modern nomenclature) is less straightforward. The name, meaning "unmoistened" (Greek adianton), was taken in antiquity to refer to the fern's ability to repel water. The plant, which grew in wet places, was also called capillus veneris, "hair of Venus," divinely dry when she emerged from the sea.[112] Historian of medicine John M. Riddle has suggested that the adianthus was one of the ferns Dioscorides called asplenon and prescribed as a contraceptive (atokios).[113] The associations of Proserpine (Persephone) and the maidenhair are alluded to by Samuel Beckett in a 1946 poem, in which the self is a Platonic cave with capillaires, in French both "maidenhair fern" and "blood vessels".[114]
74
+
75
+ The cypress (Greek cyparissus, Latin cupressus) has traditional associations with mourning.[115] In ancient Attica, households in mourning were garlanded with cypress,[116] and it was used to fumigate the air during cremations.[117] In the myth of Cyparissus, a youth was transformed into a cypress, consumed by grief over the accidental death of a pet stag.[118] A "white cypress" is part of the topography of the underworld that recurs in the Orphic gold tablets as a kind of beacon near the entrance, perhaps to be compared with the Tree of Life in various world mythologies. The description of the cypress as "white" (Greek leukē), since the botanical tree is dark, is symbolic, evoking the white garments worn by initiates or the clothing of a corpse, or the pallor of the dead. In Orphic funeral rites, it was forbidden to make coffins of cypress.[119]
76
+
77
+ The tradition of the mystery religions favors Pluto as a loving and faithful partner to Persephone, in contrast to the violence of Hades in early myths, but one ancient myth that preserves a lover for him parallels the abduction and also has a vegetative aspect.[120] A Roman source says that Pluto fell in love with Leuca (Greek Leukē, "White"), the most beautiful of the nymphs, and abducted her to live with him in his realm. After the long span of her life came to its end, he memorialized their love by creating a white tree in the Elysian Fields. The tree was the white poplar (Greek leukē), the leaves of which are white on one side and dark on the other, representing the duality of upper and underworld.[121] A wreath of white poplar leaves was fashioned by Heracles to mark his ascent from the underworld, an aition for why it was worn by initiates[122] and by champion athletes participating in funeral games.[123] Like other plants associated with Pluto, white poplar was regarded as a contraceptive in antiquity.[124] The relation of this tree to the white cypress of the mysteries is debated.[125]
78
+
79
+ The Bibliotheca of Pseudo-Apollodorus uses the name Plouton instead of Hades in relating the tripartite division of sovereignty, the abduction of Persephone, and the visit of Orpheus to the underworld. This version of the theogony for the most part follows Hesiod (see above), but adds that the three brothers were each given a gift by the Cyclopes to use in their battle against the Titans: Zeus thunder and lightning; Poseidon a trident; and Pluto a helmet (kyneê).[126]
80
+
81
+ The helmet Pluto receives is presumably the magical Cap of Invisibility (aidos kyneê), but the Bibliotheca is the only ancient source that explicitly says it belonged to Pluto.[127] The verbal play of aidos, "invisible," and Hades is thought to account for this attribution of the helmet to the ruler of the underworld, since no ancient narratives record his use or possession of it. Later authors such as Rabelais (16th century) do attribute the helmet to Pluto.[128] Erasmus calls it the "helmet of Orcus"[129] and gives it as a figure of speech referring to those who conceal their true nature by a cunning device. Francis Bacon notes the proverbial usage: "the helmet of Pluto, which maketh the politic man go invisible, is secrecy in the counsel, and celerity in the execution."[130]
82
+
83
+ No ancient image of the ruler of the underworld can be said with certainty to show him with a bident,[131] though the ornamented tip of his scepter may have been misunderstood at times as a bident.[132] In the Roman world, the bident (from bi-, "two" + dent-, "teeth") was an agricultural implement. It may also represent one of the three types of lightning wielded by Jupiter, the Roman counterpart of Zeus, and the Etruscan Tinia. The later notion that the ruler of the underworld wielded a trident or bident can perhaps be traced to a line in Seneca's Hercules Furens ("Hercules Enraged"), in which Father Dis, the Roman counterpart of Pluto, uses a three-pronged spear to drive off Hercules as he attempts to invade the underworld. Seneca calls Dis the "Infernal Jove"[133] or the "dire Jove"[134] (the Jove who gives dire or ill omens, dirae), just as in the Greek tradition, Plouton is sometimes identified as a "chthonic Zeus." That the trident and bident might be somewhat interchangeable is suggested by a Byzantine scholiast, who mentions Poseidon being armed with a bident.[135]
84
+
85
+ In the Middle Ages, classical underworld figures began to be depicted with a pitchfork.[136] Early Christian writers had identified the classical underworld with Hell, and its denizens as demons or devils.[137] In the Renaissance, the bident became a conventional attribute of Pluto. In an influential ceiling mural depicting the wedding of Cupid and Psyche, painted by Raphael's workshop for the Villa Farnesina in 1517, Pluto is shown holding the bident, with Cerberus at his side, while Neptune holds the trident.[138] Perhaps influenced by this work, Agostino Carracci originally depicted Pluto with a bident in a preparatory drawing for his painting Pluto (1592), in which the god ended up holding his characteristic key.[139] In Caravaggio's Giove, Nettuno e Plutone (ca. 1597), a ceiling mural based on alchemical allegory, it is Neptune who holds the bident.[140]
86
+
87
+ The name Plouton is first used in Greek literature by Athenian playwrights.[58] In Aristophanes' comedy The Frogs (Batrachoi, 405 BC), in which "the Eleusinian colouring is in fact so pervasive,"[143] the ruler of the underworld is one of the characters, under the name of Plouton. The play depicts a mock descent to the underworld by the god Dionysus to bring back one of the dead tragic playwrights in the hope of restoring Athenian theater to its former glory. Pluto is a silent presence onstage for about 600 lines presiding over a contest among the tragedians, then announces that the winner has the privilege of returning to the upper world.[144] The play also draws on beliefs and imagery from Orphic and Dionysiac cult, and rituals pertaining to Ploutos (Plutus, "wealth").[145] In a fragment from another play by Aristophanes, a character "is comically singing of the excellent aspects of being dead", asking in reference to the tripartition of sovereignty over the world:
88
+
89
+ And where do you think Pluto gets his name [i.e. "rich"],
90
+ if not because he took the best portion?
91
+ ...
92
+ How much better are things below than what Zeus possesses![146]
93
+
94
+ To Plato, the god of the underworld was "an agent in [the] beneficent cycle of death and rebirth" meriting worship under the name of Plouton, a giver of spiritual wealth.[147] In the dialogue Cratylus, Plato has Socrates explain the etymology of Plouton, saying that Pluto gives wealth (ploutos), and his name means "giver of wealth, which comes out of the earth beneath". Because the name Hades is taken to mean "the invisible", people fear what they cannot see; although they are in error about the nature of this deity's power, Socrates says, "the office and name of the God really correspond":
95
+
96
+ He is the perfect and accomplished Sophist, and the great benefactor of the inhabitants of the other world; and even to us who are upon earth he sends from below exceeding blessings. For he has much more than he wants down there; wherefore he is called Pluto (or the rich). Note also, that he will have nothing to do with men while they are in the body, but only when the soul is liberated from the desires and evils of the body. Now there is a great deal of philosophy and reflection in that; for in their liberated state he can bind them with the desire of virtue, but while they are flustered and maddened by the body, not even father Cronos himself would suffice to keep them with him in his own far-famed chains.[148]
97
+
98
+ Since "the union of body and soul is not better than the loosing,"[149] death is not an evil. Walter Burkert thus sees Pluto as a "god of dissolution."[150] Among the titles of Pluto was Isodaitēs, "divider into equal portions," a title that connects him to the fate goddesses the Moirai.[151] Isodaitēs was also a cult title for Dionysus and Helios.[152]
99
+
100
+ In ordering his ideal city, Plato proposed a calendar in which Pluto was honored as a benefactor in the twelfth month, implicitly ranking him as one of the twelve principal deities.[153] In the Attic calendar, the twelfth month, more or less equivalent to June, was Skirophorion; the name may be connected to the rape of Persephone.[154]
101
+
102
+ In the theogony of Euhemerus (4th century BC), the gods were treated as mortal rulers whose deeds were immortalized by tradition. Ennius translated Euhemerus into Latin about a hundred years later, and a passage from his version was in turn preserved by the early Christian writer Lactantius.[155] Here the union of Saturn (the Roman equivalent of Cronus) and Ops, an Italic goddess of abundance, produces Jupiter (Greek Zeus), Juno (Hera), Neptune, Pluto, and Glauca:
103
+
104
+ Then Saturn took Ops to wife. Titan, the elder brother, demanded the kingship for himself. Vesta their mother, with their sisters Ceres [Demeter] and Ops, persuaded Saturn not to give way to his brother in the matter. Titan was less good-looking than Saturn; for that reason, and also because he could see his mother and sisters working to have it so, he conceded the kingship to Saturn, and came to terms with him: if Saturn had a male child born to him, it would not be reared. This was done to secure reversion of the kingship to Titan's children. They then killed the first son that was born to Saturn. Next came twin children, Jupiter and Juno. Juno was given to Saturn to see while Jupiter was secretly removed and given to Vesta to be brought up without Saturn's knowledge. In the same way without Saturn knowing, Ops bore Neptune and hid him away. In her third labor Ops bore another set of twins, Pluto and Glauce. (Pluto in Latin is Diespiter;[156] some call him Orcus.) Saturn was shown his daughter Glauce but his son Pluto was hidden and removed. Glauce then died young. That is the pedigree, as written, of Jupiter and his brothers; that is how it has been passed down to us in holy scripture.
105
+
106
+ In this theogony, which Ennius introduced into Latin literature, Saturn, "Titan,"[157] Vesta, Ceres, and Ops are siblings; Glauca is the twin of Pluto and dies mysteriously young. There are several mythological figures named Glauca; the sister of Pluto may be the Glauca who in Cicero's account of the three aspects of Diana conceived the third with the equally mysterious Upis.[158] This is the genealogy for Pluto that Boccaccio used in his Genealogia Deorum Gentilium and in his lectures explicating the Divine Comedy of Dante.[159]
107
+
108
+ In Book 3 of the Sibylline Oracles, dating mostly to the 2nd century AD, Rhea gives birth to Pluto as she passes by Dodona, "where the watery paths of the River Europus flowed, and the water ran into the sea, merged with the Peneius. This is also called the Stygian river."[160]
109
+
110
+ The Orphic theogonies are notoriously varied,[161] and Orphic cosmology influenced the varying Gnostic theogonies of late antiquity.[162] Clementine literature (4th century AD) preserves a theogony with explicit Orphic influence that also draws on Hesiod, yielding a distinctive role for Pluto. When the primordial elements came together by orderly cyclonic force, they produced a generative sphere, the "egg" from which the primeval Orphic entity Phanes is born and the world is formed. The release of Phanes and his ascent to the heavenly top of the world-egg causes the matter left in the sphere to settle in relation to weight, creating the tripartite world of the traditional theogonies:[163]
111
+
112
+ Its lower part, the heaviest element, sinks downwards, and is called Pluto because of its gravity, weight, and great quantity (plêthos) of matter. After the separation of this heavy element in the middle part of the egg the waters flow together, which they call Poseidon. The purest and noblest element, the fire, is called Zeus, because its nature is glowing (ζέουσα, zeousa). It flies right up into the air, and draws up the spirit, now called Metis, that was left in the underlying moisture. And when this spirit has reached the summit of the ether, it is devoured by Zeus, who in his turn begets the intelligence (σύνεσις, sunesis), also called Pallas. And by this artistic intelligence the etherial artificer creates the whole world. This world is surrounded by the air, which extends from Zeus, the very hot ether, to the earth; this air is called Hera.[164]
113
+
114
+ This cosmogony interprets Hesiod allegorically, and so the heaviest element is identified not as the Earth, but as the netherworld of Pluto.[165] (In modern geochemistry, plutonium is the heaviest primordial element.) Supposed etymologies are used to make sense of the relation of physical process to divine name; Plouton is here connected to plêthos (abundance).[166]
115
+
116
+ In the Stoic system, Pluto represented the lower region of the air, where according to Seneca (1st century AD) the soul underwent a kind of purgatory before ascending to the ether.[167] Seneca's contemporary Cornutus made use of the traditional etymology of Pluto's name for Stoic theology. The Stoics believed that the form of a word contained the original truth of its meaning, which over time could become corrupted or obscured.[168] Plouton derived from ploutein, "to be wealthy," Cornutus said, because "all things are corruptible and therefore are 'ultimately consigned to him as his property.'"[169]
117
+
118
+ Within the Pythagorean and Neoplatonic traditions, Pluto was allegorized as the region where souls are purified, located between the moon (as represented by Persephone) and the sun.[170] Neoplatonists sometimes interpreted the Eleusinian Mysteries as a fabula of celestial phenomena:
119
+
120
+ Authors tell the fable that Ceres was Proserpina's mother, and that Proserpina while playing one day was kidnapped by Pluto. Her mother searched for her with lighted torches; and it was decreed by Jupiter that the mother should have her daughter for fifteen days in the month, but Pluto for the rest, the other fifteen. This is nothing but that the name Ceres is used to mean the earth, called Ceres on analogy with crees ('you may create'), for all things are created from her. By Proserpina is meant the moon, and her name is on analogy with prope serpens ('creeping near'), for she is moved nearer to the earth than the other planets. She is called earth's daughter, because her substance has more of earth in it than of the other elements. By Pluto is meant the shadow that sometimes obstructs the moon.[171]
121
+
122
+ A dedicatory inscription from Smyrna describes a 1st–2nd century sanctuary to "God Himself" as the most exalted of a group of six deities, including clothed statues of Plouton Helios and Koure Selene, "Pluto the Sun" and "Kore the Moon."[172] The status of Pluto and Kore as a divine couple is marked by what the text describes as a "linen embroidered bridal curtain."[173] The two are placed as bride and groom within an enclosed temple, separately from the other deities cultivated at the sanctuary.
123
+
124
+ Plouton Helios is mentioned in other literary sources in connection with Koure Selene and Helios Apollon; the sun on its nighttime course was sometimes envisioned as traveling through the underworld on its return to the east. Apuleius describes a rite in which the sun appears at midnight to the initiate at the gates of Proserpina; it has been suggested that this midnight sun could be Plouton Helios.[174]
125
+
126
+ The Smyrna inscription also records the presence of Helios Apollon at the sanctuary. As two forms of Helios, Apollo and Pluto pose a dichotomy:
127
+
128
+ It has been argued that the sanctuary was in the keeping of a Pythagorean sodality or "brotherhood". The relation of Orphic beliefs to the mystic strand of Pythagoreanism, or of these to Platonism and Neoplatonism, is complex and much debated.[176]
129
+
130
+ In the Hellenistic era, the title or epithet Plutonius is sometimes affixed to the names of other deities. In the Hermetic Corpus,[177] Jupiter Plutonius "rules over earth and sea, and it is he who nourishes mortal things that have soul and bear fruit."[178]
131
+
132
+ In Ptolemaic Alexandria, at the site of a dream oracle, Serapis was identified with Aion Plutonius.[179] Gilles Quispel conjectured that this figure results from the integration of the Orphic Phanes into Mithraic religion at Alexandria, and that he "assures the eternity of the city," where the birth of Aion was celebrated at the sanctuary of Kore on 6 January.[180] In Latin, Plutonius can be an adjective that simply means "of or pertaining to Pluto."[181]
133
+
134
+ The Neoplatonist Proclus (5th century AD) considered Pluto the third demiurge, a sublunar demiurge who was also identified variously with Poseidon or Hephaestus. This idea is present in Renaissance Neoplatonism, as for instance in the cosmology of Marsilio Ficino (1433–99),[182] who translated Orphic texts into Latin for his own use.[183] Ficino saw the sublunar demiurge as "a daemonic 'many-headed' sophist, a magus, an enchanter, a fashioner of images and reflections, a shape-changer of himself and of others, a poet in a way of being and of not-being, a royal Pluto." This demiurgic figure identified with Pluto is also "'a purifier of souls' who presides over the magic of love and generation and who uses a fantastic counter-art to mock, but also ... to supplement, the divine icastic or truly imitative art of the sublime translunar Demiurge."[184]
135
+
136
+ Christian writers of late antiquity sought to discredit the competing gods of Roman and Hellenistic religions, often adopting the euhemerizing approach in regarding them not as divinities, but as people glorified through stories and cultic practices and thus not true deities worthy of worship. The infernal gods, however, retained their potency, becoming identified with the Devil and treated as demonic forces by Christian apologists.[185]
137
+
138
+ One source of Christian revulsion toward the chthonic gods was the arena. Attendants in divine costume, among them a "Pluto" who escorted corpses out, were part of the ceremonies of the gladiatorial games.[186] Tertullian calls the mallet-wielding figure usually identified as the Etruscan Charun the "brother of Jove,"[187] that is, Hades/Pluto/Dis, an indication that the distinctions among these denizens of the underworld were becoming blurred in a Christian context.[188] Prudentius, in his poetic polemic against the religious traditionalist Symmachus, describes the arena as a place where savage vows were fulfilled on an altar to Pluto (solvit ad aram / Plutonis fera vota), where fallen gladiators were human sacrifices to Dis and Charon received their souls as his payment, to the delight of the underworld Jove (Iovis infernalis).[189]
139
+
140
+ Medieval mythographies, written in Latin, continue the conflation of Greek and Roman deities begun by the ancient Romans themselves. Perhaps because the name Pluto was used in both traditions, it appears widely in these Latin sources for the classical ruler of the underworld, who is also seen as the double, ally, or adjunct to the figure in Christian mythology known variously as the Devil, Satan, or Lucifer. The classical underworld deities became casually interchangeable with Satan as an embodiment of Hell.[190] For instance, in the 9th century, Abbo Cernuus, the only witness whose account of the Siege of Paris survives, called the invading Vikings the "spawn of Pluto."[191]
141
+
142
+ In the Little Book on Images of the Gods, Pluto is described as
143
+
144
+ an intimidating personage sitting on a throne of sulphur, holding the scepter of his realm in his right hand, and with his left strangling a soul. Under his feet three-headed Cerberus held a position, and beside him he had three Harpies. From his golden throne of sulphur flowed four rivers, which were called, as is known, Lethe, Cocytus, Phlegethon and Acheron, tributaries of the Stygian swamp.[192]
145
+
146
+ This work derives from that of the Third Vatican Mythographer, possibly one Albricus or Alberic, who presents often extensive allegories and devotes his longest chapter, including an excursus on the nature of the soul, to Pluto.[193]
147
+
148
+ In Dante's Divine Comedy (written 1308–1321), Pluto presides over the fourth circle of Hell, to which the greedy are condemned.[194] The Italian form of the name is Pluto, taken by some commentators[195] to refer specifically to Plutus as the god of wealth who would preside over the torment of those who hoarded or squandered it in life.[196] Dante's Pluto is greeted as "the great enemy"[197] and utters the famously impenetrable line Papé Satàn, papé Satàn aleppe. Much of this Canto is devoted to the power of Fortuna to give and take away. Entrance into the fourth circle has marked a downward turn in the poet's journey, and the next landmark after he and his guide cross from the circle is the Stygian swamp, through which they pass on their way to the city of Dis (Italian Dite). Dante's clear distinction between Pluto and Dis suggests that he had Plutus in mind in naming the former. The city of Dis is the "citadel of Lower Hell" where the walls are garrisoned by fallen angels and Furies.[198] Pluto is treated likewise as a purely Satanic figure by the 16th-century Italian poet Tasso throughout his epic Jerusalem Delivered,[199] in which "great Dis, great Pluto" is invoked in the company of "all ye devils that lie in deepest hell."[200]
149
+
150
+ Influenced by Ovid and Claudian, Geoffrey Chaucer (1343–1400)[201] developed the myth of Pluto and Proserpina (the Latin name of Persephone) in English literature. Like earlier medieval writers, Chaucer identifies Pluto's realm with Hell as a place of condemnation and torment,[202] and describes it as "derk and lowe" ("dark and low").[203] But Pluto's major appearance in the works of Chaucer comes as a character in "The Merchant's Tale," where Pluto is identified as the "Kyng of Fayerye" (Fairy King).[204] As in the anonymous romance Sir Orfeo (ca. 1300), Pluto and Proserpina rule over a fantastical world that melds classical myth and fairyland.[205] Chaucer has the couple engage in a comic battle of the sexes that undermines the Christian imagery in the tale, which is Chaucer's most sexually explicit.[206] The Scottish poet William Dunbar ca. 1503 also described Pluto as a folkloric supernatural being, "the elrich incubus / in cloke of grene" ("the eldritch incubus in cloak of green"), who appears among the courtiers of Cupid.[207]
151
+
152
+ The name Pluto for the classical ruler of the underworld was further established in English literature by Arthur Golding, whose translation of Ovid's Metamorphoses (1565) was of great influence on William Shakespeare,[208] Christopher Marlowe,[209] and Edmund Spenser.[210][211] Golding translates Ovid's Dis as Pluto,[212] a practice that prevails among English translators, despite John Milton's use of the Latin Dis in Paradise Lost.[213] The Christian perception of the classical underworld as Hell influenced Golding's translation practices; for instance, Ovid's tenebrosa sede tyrannus / exierat ("the tyrant [Dis] had gone out of his shadowy realm") becomes "the prince of fiends forsook his darksome hole".[214]
153
+
154
+ Pluto's court as a literary setting could bring together a motley assortment of characters. In Huon de Méry's 13th-century poem "The Tournament of the Antichrist", Pluto rules over a congregation of "classical gods and demigods, biblical devils, and evil Christians."[215] In the 15th-century dream allegory The Assembly of Gods, the deities and personifications are "apparelled as medieval nobility"[216] basking in the "magnyfycence" of their "lord Pluto," who is clad in a "smoky net" and reeking of sulphur.[217]
155
+
156
+ Throughout the Renaissance, images and ideas from classical antiquity entered popular culture through the new medium of print and through pageants and other public performances at festivals. The Fête-Dieu at Aix-en-Provence in 1462 featured characters costumed as a number of classical deities, including Pluto,[218] and Pluto was the subject of one of seven pageants presented as part of the 1521 Midsummer Eve festival in London.[219] During the 15th century, no mythological theme was brought to the stage more often than Orpheus's descent, with the court of Pluto inspiring fantastical stagecraft.[220] Leonardo da Vinci designed a set with a rotating mountain that opened up to reveal Pluto emerging from the underworld; the drawing survives and was the basis for a modern recreation.[221]
157
+
158
+ The tragic descent of the hero-musician Orpheus to the underworld to retrieve his bride, and his performance at the court of Pluto and Proserpina, offered compelling material for librettists and composers of opera (see List of Orphean operas) and ballet. Pluto also appears in works based on other classical myths of the underworld. As a singing role, Pluto is almost always written for a bass voice, with the low vocal range representing the depths and weight of the underworld, as in Monteverdi's L'Orfeo (1607) and Monteverdi and Rinuccini's Il ballo delle ingrate (1608). In their ballo, a form of ballet with vocal numbers, Cupid invokes Pluto from the underworld to lay claim to "ungrateful" women who were immune to love. Pluto's part is considered particularly virtuosic,[222] and a reviewer at the première described the character, who appeared as if from a blazing Inferno, as "formidable and awesome in sight, with garments as given him by poets, but burdened with gold and jewels."[223]
159
+
160
+ The role of Pluto is written for a bass in Peri's Euridice (1600);[224] Caccini's Euridice (1602); Rossi's Orfeo (1647); Cesti's Il pomo d'oro (1668);[225] Sartorio's Orfeo (1672); Lully's Alceste, a tragédie en musique (1674);[226] Charpentier's chamber opera La descente d'Orphée aux enfers (1686);[227] Telemann's Orpheus (1726); and Rameau's Hippolyte et Aricie (1733).[228] Pluto was a baritone in Lully's Proserpine (1680), which includes a duo dramatizing the conflict between the royal underworld couple that is notable for its early use of musical characterization.[229] Perhaps the most famous of the Orpheus operas is Offenbach's satiric Orpheus in the Underworld (1858),[230] in which a tenor sings the role of Pluton, disguised in the giddily convoluted plotting as Aristée (Aristaeus), a farmer.
161
+
162
+ Scenes set in Pluto's realm were orchestrated with instrumentation that became conventionally "hellish", established in Monteverdi's L'Orfeo as two cornets, three trombones, a bassoon, and a régale.[231]
163
+
164
+ Pluto has also been featured as a role in ballet. In Lully's "Ballet of the Seven Planets" interlude from Cavalli's opera Ercole amante ("Hercules in Love"), Louis XIV himself danced as Pluto and other characters; it was a spectacular flop.[232] Pluto appeared in Noverre's lost La descente d'Orphée aux Enfers (1760s). Gaétan Vestris danced the role of the god in Florian Deller's Orfeo ed Euridice (1763).[233] The Persephone choreographed by Robert Joffrey (1952) was based on André Gide's line "king of winters, the infernal Pluto."[234]
165
+
166
+ The abduction of Proserpina by Pluto was the scene from the myth most often depicted by artists, who usually follow Ovid's version. The influential emblem book Iconologia of Cesare Ripa (1593, second edition 1603) presents the allegorical figure of Rape with a shield on which the abduction is painted.[235] Jacob Isaacsz. van Swanenburg, the first teacher of Rembrandt, echoed Ovid in showing Pluto as the target of Cupid's arrow while Venus watches her plan carried out (location of painting unknown). The treatment of the scene by Rubens is similar. Rembrandt incorporates Claudian's more passionate characterizations.[236] The performance of Orpheus in the court of Pluto and Proserpina was also a popular subject.
167
+
168
+ Major artists who produced works depicting Pluto include:
169
+
170
+ After the Renaissance, literary interest in the abduction myth waned until the revival of classical myth among the Romantics. The work of mythographers such as J.G. Frazer and Jane Ellen Harrison helped inspire the recasting of myths in modern terms by Victorian and Modernist writers. In Tess of the d'Urbervilles (1891), Thomas Hardy portrays Alec d'Urberville as "a grotesque parody of Pluto/Dis" exemplifying the late-Victorian culture of male domination, in which women were consigned to "an endless breaking ... on the wheel of biological reproduction."[243] A similar figure is found in The Lost Girl (1920) by D.H. Lawrence, where the character Ciccio[244] acts as Pluto to Alvina's Persephone, "the deathly-lost bride ... paradoxically obliterated and vitalised at the same time by contact with Pluto/Dis" in "a prelude to the grand design of rebirth." The darkness of Pluto is both a source of regeneration, and of "merciless annihilation."[245] Lawrence takes up the theme elsewhere in his work; in The First Lady Chatterley (1926, an early version of Lady Chatterley's Lover), Connie Chatterley sees herself as a Persephone and declares "she'd rather be married to Pluto than Plato," casting her earthy gamekeeper lover as the former and her philosophy-spouting husband as the latter.[246]
171
+
172
+ In Rick Riordan's young adult fantasy series The Heroes of Olympus, the character Hazel Levesque is the daughter of Pluto, god of riches. She is one of seven characters with a parent from classical mythology.[247]
173
+
174
+ Scientific terms derived from the name of Pluto include:
en/4678.html.txt ADDED
@@ -0,0 +1,3 @@
1
+ Pluto is a dwarf planet in the Solar System.
2
+
3
+ Pluto or Plouto may also refer to:
en/4679.html.txt ADDED
@@ -0,0 +1,174 @@
1
+
2
+
3
+ Pluto (Latin: Plūtō; Greek: Πλούτων, Ploútōn) was the ruler of the underworld in classical mythology. The earlier name for the god was Hades, which became more common as the name of the underworld itself. In ancient Greek religion and mythology, Pluto represents a more positive concept of the god who presides over the afterlife. Ploutōn was frequently conflated with Ploutos, the Greek god of wealth, because mineral wealth was found underground, and because as a chthonic god Pluto ruled the deep earth that contained the seeds necessary for a bountiful harvest.[1] The name Ploutōn came into widespread usage with the Eleusinian Mysteries, in which Pluto was venerated as both a stern ruler and a loving husband to Persephone. The couple received souls in the afterlife and are invoked together in religious inscriptions, being referred to as Plouton and as Kore respectively. Hades, by contrast, had few temples and religious practices associated with him, and he is portrayed as the dark and violent abductor of Persephone.
4
+
5
+ Pluto and Hades differ in character, but they are not distinct figures and share two dominant myths. In Greek cosmogony, the god received the rule of the underworld in a three-way division of sovereignty over the world, with his brother Zeus ruling the sky and his other brother Poseidon sovereign over the sea. His central narrative in myth is of him abducting Persephone to be his wife and the queen of his realm.[2] Plouton as the name of the ruler of the underworld first appears in Greek literature of the Classical period, in the works of the Athenian playwrights and of the philosopher Plato, who is the major Greek source on its significance. Under the name Pluto, the god appears in other myths in a secondary role, mostly as the possessor of a quest-object, and especially in the descent of Orpheus or other heroes to the underworld.[3]
6
+
7
+ Plūtō ([ˈpluːtoː]; genitive Plūtōnis) is the Latinized form of the Greek Plouton. Pluto's Roman equivalent is Dis Pater, whose name is most often taken to mean "Rich Father" and is perhaps a direct translation of Plouton. Pluto was also identified with the obscure Roman Orcus, like Hades the name of both a god of the underworld and the underworld as a place. The borrowed Greek name Pluto is sometimes used for the ruler of the dead in Latin literature, leading some mythology handbooks to assert misleadingly that Pluto was the Roman counterpart of Hades.[4][citation needed] Pluto (Pluton in French and German, Plutone in Italian) becomes the most common name for the classical ruler of the underworld in subsequent Western literature and other art forms.
8
+
9
+ The name Plouton does not appear in Greek literature of the Archaic period.[5] In Hesiod's Theogony, the six children of Cronus and Rhea are Zeus, Hera, Poseidon, Hades, Demeter, and Hestia. The male children divide the world into three realms. Hades takes Persephone by force from her mother Demeter, with the consent of Zeus. Ploutos, "Wealth," appears in the Theogony as the child of Demeter and Iasion: "fine Plutus, who goes upon the whole earth and the broad back of the sea, and whoever meets him and comes into his hands, that man he makes rich, and he bestows much wealth upon him." The union of Demeter and Iasion, described also in the Odyssey,[6] took place in a fallow field that had been ploughed three times, in what seems to be a reference to a ritual copulation or sympathetic magic to ensure the earth's fertility.[7] "The resemblance of the name Ploutos to Plouton ...," it has been noted, "cannot be accidental. Plouton is lord of the dead, but as Persephone's husband he has serious claims to the powers of fertility."[8] Demeter's son Plutus merges in the narrative tradition with her son-in-law Pluto, redefining the implacable chariot-driver Hades whose horses trample the flowering earth.[9]
10
+
11
+ That the underworld god was associated early on with success in agricultural activity is already evident in Hesiod's Works and Days, lines 465–469: "Pray to Zeus of the Earth and to pure Demeter to make Demeter's holy grain sound and heavy, when first you begin ploughing, when you hold in your hand the end of the plough-tail and bring down your stick on the backs of the oxen as they draw on the pole-bar by the yoke-straps."[10]
12
+
13
+ Plouton was one of several euphemistic names for Hades, described in the Iliad as the god most hateful to mortals.[11] Plato says that people prefer the name Plouton, "giver of wealth," because the name of Hades is fear-provoking.[12] The name was understood as referring to "the boundless riches of the earth, both the crops on its surface—he was originally a god of the land—and the mines hidden within it."[13] What is sometimes taken as "confusion" of the two gods Plouton and Ploutos ("Wealth") held or acquired a theological significance in antiquity. As a lord of abundance or riches, Pluto expresses the aspect of the underworld god that was positive, symbolized in art by the "horn of plenty" (cornucopia),[14] by means of which Plouton is distinguished from the gloomier Hades.[15]
14
+
15
+ The Roman poet Ennius (ca. 239–169 BC), the leading figure in the Hellenization of Latin literature, considered Pluto a Greek god to be explained in terms of the Roman equivalents Dis Pater and Orcus.[16] It is unclear whether Pluto had a literary presence in Rome before Ennius. Some scholars think that rituals and beliefs pertaining to Pluto entered Roman culture with the establishment of the Saecular Games in 249 BC, and that Dis pater was only a translation of Plouton.[17] In the mid-1st century BC, Cicero identifies Pluto with Dis, explaining that "The earth in all its power and plenty is sacred to Father Dis, a name which is the same as Dives, 'The Wealthy One,' as is the Greek Plouton. This is because everything is born of the earth and returns to it again."[18]
16
+
17
+ During the Roman Imperial era, the Greek geographer Strabo (1st century AD) makes a distinction between Pluto and Hades. In writing of the mineral wealth of ancient Iberia (Roman Spain), he says that among the Turdetani, it is "Pluto, and not Hades, who inhabits the region down below."[19] In the discourse On Mourning by the Greek author Lucian (2nd century AD), Pluto's "wealth" is the dead he rules over in the abyss (chasma); the name Hades is reserved for the underworld itself.[20]
18
+
19
+ In Greek religious practice, Pluto is sometimes seen as the "chthonic Zeus" (Zeus Chthonios[21] or Zeus Catachthonios[22]), or at least as having functions or significance equivalent to those of Zeus but pertaining to the earth or underworld.[23] In ancient Roman and Hellenistic religion, Pluto was identified with a number of other deities, including Summanus, the Roman god of nocturnal thunder;[24] Februus, the Roman god from whose purification rites the month of February takes its name;[25] the syncretic god Serapis, regarded as Pluto's Egyptian equivalent;[26] and the Semitic god Muth (Μούθ). Muth was described by Philo of Byblos as the equivalent of both Thanatos (Death personified) and Pluto.[27] The ancient Greeks did not regard Pluto as "death" per se.[28]
20
+
21
+ The best-known myth involving Pluto or Hades is the abduction of Persephone, also known as Kore ("the Maiden"). The earliest literary versions of the myth are a brief mention in Hesiod's Theogony and the extended narrative of the Homeric Hymn to Demeter; in both these works, the ruler of the underworld is named as Hades ("the Hidden One"). Hades is an unsympathetic figure, and Persephone's unwillingness is emphasized.[29] Increased usage of the name Plouton in religious inscriptions and literary texts reflects the influence of the Eleusinian Mysteries, which treated Pluto and Persephone as a divine couple who received initiates in the afterlife; as such, Pluto was disassociated from the "violent abductor" of Kore.[30] Two early works that give the abductor god's name as Pluto are the Greek mythography traditionally known as the Library of "Apollodorus" (1st century BC)[31] and the Latin Fables of Hyginus (ca. 64 BC–AD 17).[32]
22
+
23
+ The most influential version of the abduction myth is that of Ovid (d. 17 or 18 AD), who tells the story in both the Metamorphoses (Book 5) and the Fasti (Book 4).[33] Another major retelling, also in Latin, is the long unfinished poem De raptu Proserpinae ("On the Abduction of Proserpina") by Claudian (d. 404 AD). Ovid uses the name Dis, not Pluto, in these two passages,[34] and Claudian uses Pluto only once; translators and editors, however, sometimes supply the more familiar "Pluto" when other epithets appear in the source text.[35] The abduction myth was a popular subject for Greek and Roman art, and recurs throughout Western art and literature, where the name "Pluto" becomes common (see Pluto in Western art and literature below). Narrative details from Ovid and Claudian influence these later versions in which the abductor is named as Pluto, especially the role of Venus and Cupid in manipulating Pluto with love and desire.[36] Throughout the Middle Ages and Renaissance, and certainly by the time of Natale Conti's influential Mythologiae (1567), the traditions pertaining to the various rulers of the classical underworld coalesced into a single mythology that made few if any distinctions among Hades, Pluto, Dis, and Orcus.
24
+
25
+ Unlike his freely procreating brothers Zeus and Poseidon, Pluto is monogamous, and is rarely said to have children.[37] In Orphic texts,[38] the chthonic nymph Melinoe is the daughter of Persephone by Zeus disguised as Pluto,[39] and the Eumenides ("The Kindly Ones") are the offspring of Persephone and Zeus Chthonios, often identified as Pluto.[40] The Augustan poet Vergil says that Pluto is the father of Allecto the Fury, whom he hates.[41] The lack of a clear distinction between Pluto and "chthonic Zeus" confuses the question of whether in some traditions, now obscure, Persephone bore children to her husband. In the late 4th century AD, Claudian's epic on the abduction motivates Pluto with a desire for children. The poem is unfinished, however, and anything Claudian may have known of these traditions is lost.[42]
26
+
27
+ Justin Martyr (2nd century AD) alludes to children of Pluto, but neither names nor enumerates them.[43] Hesychius (5th century AD) mentions a "son of Pluto."[44] In his 14th-century mythography, Boccaccio records a tradition in which Pluto was the father of the divine personification Veneratio ("Reverence"), noting that she had no mother because Proserpina (the Latin name of Persephone) was sterile.[45]
28
+
29
+ In The Faerie Queene (1590s), Edmund Spenser invents a daughter for Pluto whom he calls Lucifera.[46] The character's name was taken from the 16th-century mythography of Natale Conti, who used it as the Latin translation of Greek phosphor, "light-bearer," a regular epithet of Hecate.[47] Spenser incorporated aspects of the mysteries into The Faerie Queene.[48]
30
+
31
+ Orpheus was regarded as a founder and prophet of the mysteries called "Orphic," "Dionysiac," or "Bacchic." Mythologized for his ability to entrance even animals and trees with his music, he was also credited in antiquity with the authorship of the lyrics that have survived as the Orphic Hymns, among them a hymn to Pluto. Orpheus's voice and lyre-playing represented a medium of revelation or higher knowledge for the mystery cults.[49]
32
+
33
+ In his central myth, Orpheus visits the underworld in the hope of retrieving his bride, Eurydice, relying on the power of his music to charm the king and queen of Hades. Greek narratives of Orpheus's descent and performance typically name the ruler of the underworld as Plouton, as for instance in the Bibliotheca.[50] The myth demonstrates the importance of Pluto "the Rich" as the possessor of a quest-object. Orpheus performing before Pluto and Persephone was a common subject of ancient and later Western literature and art, and one of the most significant mythological themes of the classical tradition.[51]
34
+
35
+ The demonstration of Orpheus's power depends on the normal obduracy of Pluto; the Augustan poet Horace describes him as incapable of tears.[52] Claudian, however, portrays the steely god as succumbing to Orpheus's song so that "with iron cloak he wipes his tears" (ferrugineo lacrimas deterget amictu), an image renewed by Milton in Il Penseroso (106–107): "Such notes ... / Drew iron tears down Pluto's cheek."[53]
36
+
37
+ The Greek writer Lucian (ca. 125–after 180 AD) suggests that Pluto's love for his wife gave the ruler of the underworld a special sympathy or insight into lovers parted by death.[54] In one of Lucian's Dialogues of the Dead, Pluto questions Protesilaus, the first Greek hero killed in the Trojan War, who wishes to return to the world of the living. "You are then in love with life?" Pluto asks. "Such lovers we have here in plenty; but they love an object, which none of them can obtain." Protesilaus explains, like an Orpheus in reverse, that he has left behind a young bride whose memory even Lethe's waters of forgetting have not erased. Pluto assures him that death will reunite them someday, but Protesilaus argues that Pluto himself should understand love and its impatience, and reminds the king of his grant to Orpheus and to Alcestis, who took her husband's place in death and then was permitted at the insistence of Heracles to return to him. When Persephone intercedes for the dead warrior, Pluto grants the request at once, though allowing only one day for the reunion.[55]
38
+
39
+ As Pluto gained importance as an embodiment of agricultural wealth within the Eleusinian Mysteries, from the 5th century BC onward the name Hades was increasingly reserved for the underworld as a place.[56] Neither Hades nor Pluto was one of the traditional Twelve Olympians, and Hades seems to have received limited cult,[57] perhaps only at Elis, where the temple was opened once a year.[58] During the time of Plato, the Athenians periodically honored the god called Plouton with the "strewing of a couch" (tên klinên strôsai).[59] At Eleusis, Plouton had his own priestess.[60] Pluto was worshipped with Persephone as a divine couple at Knidos, Ephesos, Mytilene, and Sparta as well as at Eleusis, where they were known simply as God (Theos) and Goddess (Thea).[61]
40
+
41
+ In the ritual texts of the mystery religions preserved by the so-called Orphic or Bacchic gold tablets, from the late 5th century BC onward[62] the name Hades appears more frequently than Plouton, but in reference to the underground place:[63] Plouton is the ruler who presides over it in a harmonious partnership[64] with Persephone.[65] By the end of the 4th century BC, the name Plouton appears in Greek metrical inscriptions.[66] Two fragmentary tablets greet Pluto and Persephone jointly,[67] and the divine couple appear as welcoming figures in a metrical epitaph:
42
+
43
+ I know that even below the earth, if there is indeed a reward for the worthy ones, the first and foremost honors, nurse,[68] shall be yours, next to Persephone and Pluto.[69]
44
+
45
+ Hesychius identifies Pluto with Eubouleus,[70] but other ancient sources distinguish between these two underworld deities. In the Mysteries Eubouleus plays the role of a torchbearer, possibly a guide for the initiate's return.[71] In the view of Lewis Richard Farnell, Eubouleus was originally a title referring to the "good counsel" the ruler of the underworld was able to give and which was sought at Pluto's dream oracles; by the 2nd century BC, however, he had acquired a separate identity.[72]
46
+
47
+ The Orphic Hymn to Pluto addresses the god as "strong-spirited" and the "All-Receiver" who commands death and is the master of mortals. His titles are given as Zeus Chthonios and Euboulos ("Good Counsel").[73] In the hymn's topography, Pluto's dwelling is in Tartarus, simultaneously a "meadow" and "thick-shaded and dark," where the Acheron encircles "the roots of the earth." Hades is again the name of the place, here described as "windless," and its gates, through which Pluto carried "pure Demeter's daughter" as his bride, are located in an Attic cave within the district of Eleusis. The route from Persephone's meadow to Hades crosses the sea. The hymn concludes:
48
+
49
+ You alone were born to judge deeds obscure and conspicuous. Holiest and illustrious ruler of all, frenzied god, You delight in the worshiper's respect and reverence. Come with favor and joy to the initiates. I summon you.[74]
50
+
51
+ The hymn is one of several examples of Greco-Roman prayer that express a desire for the presence of a deity, and has been compared to a similar epiclesis in the Acts of Thomas.[75]
52
+
53
+ The names of both Hades and Pluto appear also in the Greek Magical Papyri and curse tablets, with Hades typically referring to the underworld as a place, and Pluto regularly invoked as the partner of Persephone.[76] Five Latin curse tablets from Rome, dating to the mid-1st century BC, promise Persephone and Pluto an offering of "dates, figs, and a black pig" if the curse is fulfilled by the desired deadline. The pig was a characteristic animal sacrifice to chthonic deities, whose victims were almost always black or dark in color.[77]
54
+
55
+ A set of curse tablets written in Doric Greek and found in a tomb addresses a Pasianax, "Lord to All,"[78] sometimes taken as a title of Pluto,[79] but more recently thought to be a magical name for the corpse.[80] Pasianax is found elsewhere as an epithet of Zeus, or in the tablets may invoke a daimon like Abrasax.[81]
56
+
57
+ A sanctuary dedicated to Pluto was called a ploutonion (Latin plutonium). The complex at Eleusis for the mysteries had a ploutonion regarded as the birthplace of the divine child Ploutos, in another instance of conflation or close association of the two gods.[82] Greek inscriptions record an altar of Pluto, which was to be "plastered", that is, resurfaced for a new round of sacrifices at Eleusis.[83] One of the known ploutonia was in the sacred grove between Tralleis and Nysa, where a temple of Pluto and Persephone was located. Visitors sought healing and dream oracles.[84] The ploutonion at Hierapolis, Phrygia, was connected to the rites of Cybele, but during the Roman Imperial era was subsumed by the cult of Apollo, as confirmed by archaeological investigations during the 1960s. It too was a dream oracle.[85] The sites often seem to have been chosen because the presence of naturally occurring mephitic vapors was thought to indicate an opening to the underworld.[86] In Italy, Avernus was considered an entrance to the underworld that produced toxic vapors, but Strabo seems not to think that it was a ploutonion.[87]
58
+
59
+ Kevin Clinton attempted to distinguish the iconography of Hades, Plouton, Ploutos, and the Eleusinian Theos in 5th-century vase painting that depicts scenes from or relating to the mysteries. In Clinton's schema, Plouton is a mature man, sometimes even white-haired; Hades is also usually bearded and mature, but his darkness is emphasized in literary descriptions, represented in art by dark hair. Plouton's most common attribute is a sceptre, but he also often holds a full or overflowing cornucopia; Hades sometimes holds a horn, but it is depicted with no contents and should be understood as a drinking horn. Unlike Plouton, Hades never holds agrarian attributes such as stalks of grain. His chest is usually bare or only partly covered, whereas Plouton is fully robed (exceptions, however, are admitted by the author). Plouton stands, often in the company of both Demeter and Kore, or sometimes one of the goddesses, but Hades almost always sits or reclines, usually with Persephone facing him.[88] "Confusion and disagreement" about the interpretation of these images remain.[89]
60
+
61
+ Attributes of Pluto mentioned in the Orphic Hymn to Pluto are his scepter, keys, throne, and horses. In the hymn, the keys are connected to his capacity for giving wealth to humanity, specifically the agricultural wealth of "the year's fruits."
62
+
63
+ Pausanias explains the significance of Pluto's key in describing a wondrously carved cedar chest at the Temple of Hera in Elis. Numerous deities are depicted, with one panel grouping Dionysus, Persephone, the nymphs and Pluto. Pluto holds a key because "they say that what is called Hades has been locked up by Pluto, and that nobody will return back again therefrom."[91] Natale Conti cites Pausanias in noting that keys are an attribute of Pluto as the scepter is of Jove (Greek Zeus) and the trident of Neptune (Poseidon).[92]
64
+
65
+ A golden key (chrusea klês) was laid on the tongue of initiates by priests at Eleusis[93] and was a symbol of the revelation they were obligated to keep secret.[94] A key is among the attributes of other infernal deities such as Hecate, Anubis, and Persephone, and those who act as guardians or timekeepers, such as Janus and Aion.[95] Aeacus (Aiakos), one of the three mortal kings who became judges in the afterlife, is also a kleidouchos (κλειδοῦχος), "holder of the keys," and a priestly doorkeeper in the court of Pluto and Persephone.[96]
66
+
67
+ According to the Stoic philosopher Cornutus (1st century AD), Pluto wore a wreath of phasganion, more often called xiphion,[97] traditionally identified as a type of gladiolus.[98] Dioscorides recorded medical uses for the plant. For extracting stings and thorns, xiphion was mixed with wine and frankincense to make a cataplasm. The plant was also used as an aphrodisiac[99] and contraceptive.[100] It grew in humid places. In an obscure passage, Cornutus seems to connect Pluto's wearing of phasganion to an etymology for Avernus, which he derives from the word for "air," perhaps through some association with the color glaukos, "bluish grey," "greenish" or "sea-colored," which might describe the plant's leaves. Because the color could describe the sky, Cornutus regularly gives it divine connotations.[101]
68
+
69
+ Ambiguity of color is characteristic of Pluto. Although both he and his realm are regularly described as dark, black, or gloomy, the god himself is sometimes seen as pale or having a pallor. Martianus Capella (5th century) describes him as both "growing pale in shadow, a fugitive from light" and actively "shedding darkness in the gloom of Tartarean night," crowned with a wreath made of ebony as suitable for the kingdom he governs.[102] The horses of Pluto are usually black, but Ovid describes them as "sky-colored" (caeruleus, from caelum, "sky"), which might be blue, greenish-blue, or dark blue.[103]
70
+
71
+ The Renaissance mythographer Natale Conti says wreaths of narcissus, maidenhair fern (adianthus), and cypress were given to Pluto.[104] In the Homeric Hymn to Demeter, Gaia (Earth) produced the narcissus at Zeus's request as a snare for Persephone; when she grasps it, a chasm opens up and the "Host to Many" (Hades) seizes her.[105] Narcissus wreaths were used in early times to crown Demeter and Persephone, as well as the Furies (Eumenides).[106] The flower was associated with narcotic drugginess (narkê, "torpor"),[107] erotic fascination,[108] and imminent death;[109] to dream of crowning oneself with narcissus was a bad sign.[110] In the myth of Narcissus, the flower is created when a beautiful, self-absorbed youth rejects sexuality and is condemned to perpetual self-love along the Styx.[111]
72
+
73
+ Conti's inclusion of adianthus (Adiantum in modern nomenclature) is less straightforward. The name, meaning "unmoistened" (Greek adianton), was taken in antiquity to refer to the fern's ability to repel water. The plant, which grew in wet places, was also called capillus veneris, "hair of Venus," divinely dry when she emerged from the sea.[112] Historian of medicine John M. Riddle has suggested that the adianthus was one of the ferns Dioscorides called asplenon and prescribed as a contraceptive (atokios).[113] The associations of Proserpine (Persephone) and the maidenhair are alluded to by Samuel Beckett in a 1946 poem, in which the self is a Platonic cave with capillaires, in French both "maidenhair fern" and "blood vessels".[114]
74
+
75
+ The cypress (Greek cyparissus, Latin cupressus) has traditional associations with mourning.[115] In ancient Attica, households in mourning were garlanded with cypress,[116] and it was used to fumigate the air during cremations.[117] In the myth of Cyparissus, a youth was transformed into a cypress, consumed by grief over the accidental death of a pet stag.[118] A "white cypress" is part of the topography of the underworld that recurs in the Orphic gold tablets as a kind of beacon near the entrance, perhaps to be compared with the Tree of Life in various world mythologies. The description of the cypress as "white" (Greek leukē), since the botanical tree is dark, is symbolic, evoking the white garments worn by initiates or the clothing of a corpse, or the pallor of the dead. In Orphic funeral rites, it was forbidden to make coffins of cypress.[119]
76
+
77
+ The tradition of the mystery religions favors Pluto as a loving and faithful partner to Persephone, in contrast to the violence of Hades in early myths, but one ancient myth that preserves a lover for him parallels the abduction and also has a vegetative aspect.[120] A Roman source says that Pluto fell in love with Leuca (Greek Leukē, "White"), the most beautiful of the nymphs, and abducted her to live with him in his realm. After the long span of her life came to its end, he memorialized their love by creating a white tree in the Elysian Fields. The tree was the white poplar (Greek leukē), the leaves of which are white on one side and dark on the other, representing the duality of upper and underworld.[121] A wreath of white poplar leaves was fashioned by Heracles to mark his ascent from the underworld, an aition for why it was worn by initiates[122] and by champion athletes participating in funeral games.[123] Like other plants associated with Pluto, white poplar was regarded as a contraceptive in antiquity.[124] The relation of this tree to the white cypress of the mysteries is debated.[125]
78
+
79
+ The Bibliotheca of Pseudo-Apollodorus uses the name Plouton instead of Hades in relating the tripartite division of sovereignty, the abduction of Persephone, and the visit of Orpheus to the underworld. This version of the theogony for the most part follows Hesiod (see above), but adds that the three brothers were each given a gift by the Cyclopes to use in their battle against the Titans: Zeus thunder and lightning; Poseidon a trident; and Pluto a helmet (kyneê).[126]
80
+
81
+ The helmet Pluto receives is presumably the magical Cap of Invisibility (aidos kyneê), but the Bibliotheca is the only ancient source that explicitly says it belonged to Pluto.[127] The verbal play of aidos, "invisible," and Hades is thought to account for this attribution of the helmet to the ruler of the underworld, since no ancient narratives record his use or possession of it. Later authors such as Rabelais (16th century) do attribute the helmet to Pluto.[128] Erasmus calls it the "helmet of Orcus"[129] and gives it as a figure of speech referring to those who conceal their true nature by a cunning device. Francis Bacon notes the proverbial usage: "the helmet of Pluto, which maketh the politic man go invisible, is secrecy in the counsel, and celerity in the execution."[130]
82
+
83
+ No ancient image of the ruler of the underworld can be said with certainty to show him with a bident,[131] though the ornamented tip of his scepter may have been misunderstood at times as a bident.[132] In the Roman world, the bident (from bi-, "two" + dent-, "teeth") was an agricultural implement. It may also represent one of the three types of lightning wielded by Jupiter, the Roman counterpart of Zeus, and the Etruscan Tinia. The later notion that the ruler of the underworld wielded a trident or bident can perhaps be traced to a line in Seneca's Hercules Furens ("Hercules Enraged"), in which Father Dis, the Roman counterpart of Pluto, uses a three-pronged spear to drive off Hercules as he attempts to invade the underworld. Seneca calls Dis the "Infernal Jove"[133] or the "dire Jove"[134] (the Jove who gives dire or ill omens, dirae), just as in the Greek tradition, Plouton is sometimes identified as a "chthonic Zeus." That the trident and bident might be somewhat interchangeable is suggested by a Byzantine scholiast, who mentions Poseidon being armed with a bident.[135]
84
+
85
+ In the Middle Ages, classical underworld figures began to be depicted with a pitchfork.[136] Early Christian writers had identified the classical underworld with Hell, and its denizens as demons or devils.[137] In the Renaissance, the bident became a conventional attribute of Pluto. In an influential ceiling mural depicting the wedding of Cupid and Psyche, painted by Raphael's workshop for the Villa Farnesina in 1517, Pluto is shown holding the bident, with Cerberus at his side, while Neptune holds the trident.[138] Perhaps influenced by this work, Agostino Carracci originally depicted Pluto with a bident in a preparatory drawing for his painting Pluto (1592), in which the god ended up holding his characteristic key.[139] In Caravaggio's Giove, Nettuno e Plutone (ca. 1597), a ceiling mural based on alchemical allegory, it is Neptune who holds the bident.[140]
86
+
87
+ The name Plouton is first used in Greek literature by Athenian playwrights.[58] In Aristophanes' comedy The Frogs (Batrachoi, 405 BC), in which "the Eleusinian colouring is in fact so pervasive,"[143] the ruler of the underworld is one of the characters, under the name of Plouton. The play depicts a mock descent to the underworld by the god Dionysus to bring back one of the dead tragic playwrights in the hope of restoring Athenian theater to its former glory. Pluto is a silent presence onstage for about 600 lines presiding over a contest among the tragedians, then announces that the winner has the privilege of returning to the upper world.[144] The play also draws on beliefs and imagery from Orphic and Dionysiac cult, and rituals pertaining to Ploutos (Plutus, "wealth").[145] In a fragment from another play by Aristophanes, a character "is comically singing of the excellent aspects of being dead", asking in reference to the tripartition of sovereignty over the world:
88
+
89
+ And where do you think Pluto gets his name [i.e. "rich"],
90
+ if not because he took the best portion?
91
+ ...
92
+ How much better are things below than what Zeus possesses! [146]
93
+
94
+ To Plato, the god of the underworld was "an agent in [the] beneficent cycle of death and rebirth" meriting worship under the name of Plouton, a giver of spiritual wealth.[147] In the dialogue Cratylus, Plato has Socrates explain the etymology of Plouton, saying that Pluto gives wealth (ploutos), and his name means "giver of wealth, which comes out of the earth beneath". Because the name Hades is taken to mean "the invisible", people fear what they cannot see; although they are in error about the nature of this deity's power, Socrates says, "the office and name of the God really correspond":
95
+
96
+ He is the perfect and accomplished Sophist, and the great benefactor of the inhabitants of the other world; and even to us who are upon earth he sends from below exceeding blessings. For he has much more than he wants down there; wherefore he is called Pluto (or the rich). Note also, that he will have nothing to do with men while they are in the body, but only when the soul is liberated from the desires and evils of the body. Now there is a great deal of philosophy and reflection in that; for in their liberated state he can bind them with the desire of virtue, but while they are flustered and maddened by the body, not even father Cronos himself would suffice to keep them with him in his own far-famed chains.[148]
97
+
98
+ Since "the union of body and soul is not better than the loosing,"[149] death is not an evil. Walter Burkert thus sees Pluto as a "god of dissolution."[150] Among the titles of Pluto was Isodaitēs, "divider into equal portions," a title that connects him to the fate goddesses the Moirai.[151] Isodaitēs was also a cult title for Dionysus and Helios.[152]
99
+
100
+ In ordering his ideal city, Plato proposed a calendar in which Pluto was honored as a benefactor in the twelfth month, implicitly ranking him as one of the twelve principal deities.[153] In the Attic calendar, the twelfth month, more or less equivalent to June, was Skirophorion; the name may be connected to the rape of Persephone.[154]
101
+
102
+ In the theogony of Euhemerus (4th century BC), the gods were treated as mortal rulers whose deeds were immortalized by tradition. Ennius translated Euhemerus into Latin about a hundred years later, and a passage from his version was in turn preserved by the early Christian writer Lactantius.[155] Here the union of Saturn (the Roman equivalent of Cronus) and Ops, an Italic goddess of abundance, produces Jupiter (Greek Zeus), Juno (Hera), Neptune, Pluto, and Glauca:
103
+
104
+ Then Saturn took Ops to wife. Titan, the elder brother, demanded the kingship for himself. Vesta their mother, with their sisters Ceres [Demeter] and Ops, persuaded Saturn not to give way to his brother in the matter. Titan was less good-looking than Saturn; for that reason, and also because he could see his mother and sisters working to have it so, he conceded the kingship to Saturn, and came to terms with him: if Saturn had a male child born to him, it would not be reared. This was done to secure reversion of the kingship to Titan's children. They then killed the first son that was born to Saturn. Next came twin children, Jupiter and Juno. Juno was given to Saturn to see while Jupiter was secretly removed and given to Vesta to be brought up without Saturn's knowledge. In the same way without Saturn knowing, Ops bore Neptune and hid him away. In her third labor Ops bore another set of twins, Pluto and Glauce. (Pluto in Latin is Dis pater;[156] some call him Orcus.) Saturn was shown his daughter Glauce but his son Pluto was hidden and removed. Glauce then died young. That is the pedigree, as written, of Jupiter and his brothers; that is how it has been passed down to us in holy scripture.
105
+
106
+ In this theogony, which Ennius introduced into Latin literature, Saturn, "Titan,"[157] Vesta, Ceres, and Ops are siblings; Glauca is the twin of Pluto and dies mysteriously young. There are several mythological figures named Glauca; the sister of Pluto may be the Glauca who in Cicero's account of the three aspects of Diana conceived the third with the equally mysterious Upis.[158] This is the genealogy for Pluto that Boccaccio used in his Genealogia Deorum Gentilium and in his lectures explicating the Divine Comedy of Dante.[159]
107
+
108
+ In Book 3 of the Sibylline Oracles, dating mostly to the 2nd century BC, Rhea gives birth to Pluto as she passes by Dodona, "where the watery paths of the River Europus flowed, and the water ran into the sea, merged with the Peneius. This is also called the Stygian river."[160]
109
+
110
+ The Orphic theogonies are notoriously varied,[161] and Orphic cosmology influenced the varying Gnostic theogonies of late antiquity.[162] Clementine literature (4th century AD) preserves a theogony with explicit Orphic influence that also draws on Hesiod, yielding a distinctive role for Pluto. When the primordial elements came together by orderly cyclonic force, they produced a generative sphere, the "egg" from which the primeval Orphic entity Phanes is born and the world is formed. The release of Phanes and his ascent to the heavenly top of the world-egg causes the matter left in the sphere to settle in relation to weight, creating the tripartite world of the traditional theogonies:[163]
111
+
112
+ Its lower part, the heaviest element, sinks downwards, and is called Pluto because of its gravity, weight, and great quantity (plêthos) of matter. After the separation of this heavy element in the middle part of the egg the waters flow together, which they call Poseidon. The purest and noblest element, the fire, is called Zeus, because its nature is glowing (ζέουσα, zeousa). It flies right up into the air, and draws up the spirit, now called Metis, that was left in the underlying moisture. And when this spirit has reached the summit of the ether, it is devoured by Zeus, who in his turn begets the intelligence (σύνεσις, sunesis), also called Pallas. And by this artistic intelligence the etherial artificer creates the whole world. This world is surrounded by the air, which extends from Zeus, the very hot ether, to the earth; this air is called Hera.[164]
113
+
114
+ This cosmogony interprets Hesiod allegorically, and so the heaviest element is identified not as the Earth, but as the netherworld of Pluto.[165] Supposed etymologies are used to make sense of the relation of physical process to divine name; Plouton is here connected to plêthos (abundance).[166]
115
+
116
+ In the Stoic system, Pluto represented the lower region of the air, where according to Seneca (1st century AD) the soul underwent a kind of purgatory before ascending to the ether.[167] Seneca's contemporary Cornutus made use of the traditional etymology of Pluto's name for Stoic theology. The Stoics believed that the form of a word contained the original truth of its meaning, which over time could become corrupted or obscured.[168] Plouton derived from ploutein, "to be wealthy," Cornutus said, because "all things are corruptible and therefore are 'ultimately consigned to him as his property.'"[169]
117
+
118
+ Within the Pythagorean and Neoplatonic traditions, Pluto was allegorized as the region where souls are purified, located between the moon (as represented by Persephone) and the sun.[170] Neoplatonists sometimes interpreted the Eleusinian Mysteries as a fabula of celestial phenomena:
119
+
120
+ Authors tell the fable that Ceres was Proserpina's mother, and that Proserpina while playing one day was kidnapped by Pluto. Her mother searched for her with lighted torches; and it was decreed by Jupiter that the mother should have her daughter for fifteen days in the month, but Pluto for the rest, the other fifteen. This is nothing but that the name Ceres is used to mean the earth, called Ceres on analogy with crees ('you may create'), for all things are created from her. By Proserpina is meant the moon, and her name is on analogy with prope serpens ('creeping near'), for she is moved nearer to the earth than the other planets. She is called earth's daughter, because her substance has more of earth in it than of the other elements. By Pluto is meant the shadow that sometimes obstructs the moon.[171]
121
+
122
+ A dedicatory inscription from Smyrna describes a 1st–2nd century sanctuary to "God Himself" as the most exalted of a group of six deities, including clothed statues of Plouton Helios and Koure Selene, "Pluto the Sun" and "Kore the Moon."[172] The status of Pluto and Kore as a divine couple is marked by what the text describes as a "linen embroidered bridal curtain."[173] The two are placed as bride and groom within an enclosed temple, separately from the other deities cultivated at the sanctuary.
123
+
124
+ Plouton Helios is mentioned in other literary sources in connection with Koure Selene and Helios Apollon; the sun on its nighttime course was sometimes envisioned as traveling through the underworld on its return to the east. Apuleius describes a rite in which the sun appears at midnight to the initiate at the gates of Proserpina; it has been suggested that this midnight sun could be Plouton Helios.[174]
125
+
126
+ The Smyrna inscription also records the presence of Helios Apollon at the sanctuary. As two forms of Helios, Apollo and Pluto pose a dichotomy:
127
+
128
+ It has been argued that the sanctuary was in the keeping of a Pythagorean sodality or "brotherhood". The relation of Orphic beliefs to the mystic strand of Pythagoreanism, or of these to Platonism and Neoplatonism, is complex and much debated.[176]
129
+
130
+ In the Hellenistic era, the title or epithet Plutonius is sometimes affixed to the names of other deities. In the Hermetic Corpus,[177] Jupiter Plutonius "rules over earth and sea, and it is he who nourishes mortal things that have soul and bear fruit."[178]
131
+
132
+ In Ptolemaic Alexandria, at the site of a dream oracle, Serapis was identified with Aion Plutonius.[179] Gilles Quispel conjectured that this figure results from the integration of the Orphic Phanes into Mithraic religion at Alexandria, and that he "assures the eternity of the city," where the birth of Aion was celebrated at the sanctuary of Kore on 6 January.[180] In Latin, Plutonius can be an adjective that simply means "of or pertaining to Pluto."[181]
133
+
134
+ The Neoplatonist Proclus (5th century AD) considered Pluto the third demiurge, a sublunar demiurge who was also identified variously with Poseidon or Hephaestus. This idea is present in Renaissance Neoplatonism, as for instance in the cosmology of Marsilio Ficino (1433–99),[182] who translated Orphic texts into Latin for his own use.[183] Ficino saw the sublunar demiurge as "a daemonic 'many-headed' sophist, a magus, an enchanter, a fashioner of images and reflections, a shape-changer of himself and of others, a poet in a way of being and of not-being, a royal Pluto." This demiurgic figure identified with Pluto is also "'a purifier of souls' who presides over the magic of love and generation and who uses a fantastic counter-art to mock, but also ... to supplement, the divine icastic or truly imitative art of the sublime translunar Demiurge."[184]
135
+
136
+ Christian writers of late antiquity sought to discredit the competing gods of Roman and Hellenistic religions, often adopting the euhemerizing approach in regarding them not as divinities, but as people glorified through stories and cultic practices and thus not true deities worthy of worship. The infernal gods, however, retained their potency, becoming identified with the Devil and treated as demonic forces by Christian apologists.[185]
137
+
138
+ One source of Christian revulsion toward the chthonic gods was the arena. Attendants in divine costume, among them a "Pluto" who escorted corpses out, were part of the ceremonies of the gladiatorial games.[186] Tertullian calls the mallet-wielding figure usually identified as the Etruscan Charun the "brother of Jove,"[187] that is, Hades/Pluto/Dis, an indication that the distinctions among these denizens of the underworld were becoming blurred in a Christian context.[188] Prudentius, in his poetic polemic against the religious traditionalist Symmachus, describes the arena as a place where savage vows were fulfilled on an altar to Pluto (solvit ad aram / Plutonis fera vota), where fallen gladiators were human sacrifices to Dis and Charon received their souls as his payment, to the delight of the underworld Jove (Iovis infernalis).[189]
139
+
140
+ Medieval mythographies, written in Latin, continue the conflation of Greek and Roman deities begun by the ancient Romans themselves. Perhaps because the name Pluto was used in both traditions, it appears widely in these Latin sources for the classical ruler of the underworld, who is also seen as the double, ally, or adjunct to the figure in Christian mythology known variously as the Devil, Satan, or Lucifer. The classical underworld deities became casually interchangeable with Satan as an embodiment of Hell.[190] For instance, in the 9th century, Abbo Cernuus, the only witness whose account of the Siege of Paris survives, called the invading Vikings the "spawn of Pluto."[191]
141
+
142
+ In the Little Book on Images of the Gods, Pluto is described as
143
+
144
+ an intimidating personage sitting on a throne of sulphur, holding the scepter of his realm in his right hand, and with his left strangling a soul. Under his feet three-headed Cerberus held a position, and beside him he had three Harpies. From his golden throne of sulphur flowed four rivers, which were called, as is known, Lethe, Cocytus, Phlegethon and Acheron, tributaries of the Stygian swamp.[192]
145
+
146
+ This work derives from that of the Third Vatican Mythographer, possibly one Albricus or Alberic, who presents often extensive allegories and devotes his longest chapter, including an excursus on the nature of the soul, to Pluto.[193]
147
+
148
+ In Dante's Divine Comedy (written 1308–1321), Pluto presides over the fourth circle of Hell, to which the greedy are condemned.[194] The Italian form of the name is Pluto, taken by some commentators[195] to refer specifically to Plutus as the god of wealth who would preside over the torment of those who hoarded or squandered it in life.[196] Dante's Pluto is greeted as "the great enemy"[197] and utters the famously impenetrable line Papé Satàn, papé Satàn aleppe. Much of this Canto is devoted to the power of Fortuna to give and take away. Entrance into the fourth circle has marked a downward turn in the poet's journey, and the next landmark after he and his guide cross from the circle is the Stygian swamp, through which they pass on their way to the city of Dis (Italian Dite). Dante's clear distinction between Pluto and Dis suggests that he had Plutus in mind in naming the former. The city of Dis is the "citadel of Lower Hell" where the walls are garrisoned by fallen angels and Furies.[198] Pluto is treated likewise as a purely Satanic figure by the 16th-century Italian poet Tasso throughout his epic Jerusalem Delivered,[199] in which "great Dis, great Pluto" is invoked in the company of "all ye devils that lie in deepest hell."[200]
149
+
150
+ Influenced by Ovid and Claudian, Geoffrey Chaucer (1343–1400)[201] developed the myth of Pluto and Proserpina (the Latin name of Persephone) in English literature. Like earlier medieval writers, Chaucer identifies Pluto's realm with Hell as a place of condemnation and torment,[202] and describes it as "derk and lowe" ("dark and low").[203] But Pluto's major appearance in the works of Chaucer comes as a character in "The Merchant's Tale," where Pluto is identified as the "Kyng of Fayerye" (Fairy King).[204] As in the anonymous romance Sir Orfeo (ca. 1300), Pluto and Proserpina rule over a fantastical world that melds classical myth and fairyland.[205] Chaucer has the couple engage in a comic battle of the sexes that undermines the Christian imagery in the tale, which is Chaucer's most sexually explicit.[206] The Scottish poet William Dunbar ca. 1503 also described Pluto as a folkloric supernatural being, "the elrich incubus / in cloke of grene" ("the eldritch incubus in cloak of green"), who appears among the courtiers of Cupid.[207]
151
+
152
+ The name Pluto for the classical ruler of the underworld was further established in English literature by Arthur Golding, whose translation of Ovid's Metamorphoses (1565) was of great influence on William Shakespeare,[208] Christopher Marlowe,[209] and Edmund Spenser.[210][211] Golding translates Ovid's Dis as Pluto,[212] a practice that prevails among English translators, despite John Milton's use of the Latin Dis in Paradise Lost.[213] The Christian perception of the classical underworld as Hell influenced Golding's translation practices; for instance, Ovid's tenebrosa sede tyrannus / exierat ("the tyrant [Dis] had gone out of his shadowy realm") becomes "the prince of fiends forsook his darksome hole".[214]
153
+
154
+ Pluto's court as a literary setting could bring together a motley assortment of characters. In Huon de Méry's 13th-century poem "The Tournament of the Antichrist", Pluto rules over a congregation of "classical gods and demigods, biblical devils, and evil Christians."[215] In the 15th-century dream allegory The Assembly of Gods, the deities and personifications are "apparelled as medieval nobility"[216] basking in the "magnyfycence" of their "lord Pluto," who is clad in a "smoky net" and reeking of sulphur.[217]
155
+
156
+ Throughout the Renaissance, images and ideas from classical antiquity entered popular culture through the new medium of print and through pageants and other public performances at festivals. The Fête-Dieu at Aix-en-Provence in 1462 featured characters costumed as a number of classical deities, including Pluto,[218] and Pluto was the subject of one of seven pageants presented as part of the 1521 Midsummer Eve festival in London.[219] During the 15th century, no mythological theme was brought to the stage more often than Orpheus's descent, with the court of Pluto inspiring fantastical stagecraft.[220] Leonardo da Vinci designed a set with a rotating mountain that opened up to reveal Pluto emerging from the underworld; the drawing survives and was the basis for a modern recreation.[221]
157
+
158
+ The tragic descent of the hero-musician Orpheus to the underworld to retrieve his bride, and his performance at the court of Pluto and Proserpina, offered compelling material for librettists and composers of opera (see List of Orphean operas) and ballet. Pluto also appears in works based on other classical myths of the underworld. As a singing role, Pluto is almost always written for a bass voice, with the low vocal range representing the depths and weight of the underworld, as in Monteverdi's L'Orfeo (1607) and Monteverdi and Rinuccini's Il ballo delle ingrate (1608). In their ballo, a form of ballet with vocal numbers, Cupid invokes Pluto from the underworld to lay claim to "ungrateful" women who were immune to love. Pluto's part is considered particularly virtuosic,[222] and a reviewer at the première described the character, who appeared as if from a blazing Inferno, as "formidable and awesome in sight, with garments as given him by poets, but burdened with gold and jewels."[223]
159
+
160
+ The role of Pluto is written for a bass in Peri's Euridice (1600);[224] Caccini's Euridice (1602); Rossi's Orfeo (1647); Cesti's Il pomo d'oro (1668);[225] Sartorio's Orfeo (1672); Lully's Alceste, a tragédie en musique (1674);[226] Charpentier's chamber opera La descente d'Orphée aux enfers (1686);[227] Telemann's Orpheus (1726); and Rameau's Hippolyte et Aricie (1733).[228] Pluto was a baritone in Lully's Proserpine (1680), which includes a duo dramatizing the conflict between the royal underworld couple that is notable for its early use of musical characterization.[229] Perhaps the most famous of the Orpheus operas is Offenbach's satiric Orpheus in the Underworld (1858),[230] in which a tenor sings the role of Pluton, disguised in the giddily convoluted plotting as Aristée (Aristaeus), a farmer.
161
+
162
+ Scenes set in Pluto's realm were orchestrated with instrumentation that became conventionally "hellish", established in Monteverdi's L'Orfeo as two cornets, three trombones, a bassoon, and a régale.[231]
163
+
164
+ Pluto has also been featured as a role in ballet. In Lully's "Ballet of Seven Planets" interlude from Cavalli's opera Ercole amante ("Hercules in Love"), Louis XIV himself danced as Pluto and other characters; it was a spectacular flop.[232] Pluto appeared in Noverre's lost La descente d'Orphée aux Enfers (1760s). Gaétan Vestris danced the role of the god in Florian Deller's Orfeo ed Euridice (1763).[233] The Persephone choreographed by Robert Joffrey (1952) was based on André Gide's line "king of winters, the infernal Pluto."[234]
165
+
166
+ The abduction of Proserpina by Pluto was the scene from the myth most often depicted by artists, who usually follow Ovid's version. The influential emblem book Iconologia of Cesare Ripa (1593, second edition 1603) presents the allegorical figure of Rape with a shield on which the abduction is painted.[235] Jacob Isaacsz. van Swanenburg, the first teacher of Rembrandt, echoed Ovid in showing Pluto as the target of Cupid's arrow while Venus watches her plan carried out (location of painting unknown). The treatment of the scene by Rubens is similar. Rembrandt incorporates Claudian's more passionate characterizations.[236] The performance of Orpheus in the court of Pluto and Proserpina was also a popular subject.
167
+
168
+ Major artists who produced works depicting Pluto include:
169
+
170
+ After the Renaissance, literary interest in the abduction myth waned until the revival of classical myth among the Romantics. The work of mythographers such as J.G. Frazer and Jane Ellen Harrison helped inspire the recasting of myths in modern terms by Victorian and Modernist writers. In Tess of the d'Urbervilles (1891), Thomas Hardy portrays Alec d'Urberville as "a grotesque parody of Pluto/Dis" exemplifying the late-Victorian culture of male domination, in which women were consigned to "an endless breaking ... on the wheel of biological reproduction."[243] A similar figure is found in The Lost Girl (1920) by D.H. Lawrence, where the character Ciccio[244] acts as Pluto to Alvina's Persephone, "the deathly-lost bride ... paradoxically obliterated and vitalised at the same time by contact with Pluto/Dis" in "a prelude to the grand design of rebirth." The darkness of Pluto is both a source of regeneration, and of "merciless annihilation."[245] Lawrence takes up the theme elsewhere in his work; in The First Lady Chatterley (1926, an early version of Lady Chatterley's Lover), Connie Chatterley sees herself as a Persephone and declares "she'd rather be married to Pluto than Plato," casting her earthy gamekeeper lover as the former and her philosophy-spouting husband as the latter.[246]
171
+
172
+ In Rick Riordan's young adult fantasy series The Heroes of Olympus, the character Hazel Levesque is the daughter of Pluto, god of riches. She is one of seven characters with a parent from classical mythology.[247]
173
+
174
+ Scientific terms derived from the name of Pluto include:
en/468.html.txt ADDED
@@ -0,0 +1,137 @@
1
+
2
+
3
+
4
+
5
+ Autism is a developmental disorder characterized by difficulties with social interaction and communication, and by restricted and repetitive behavior.[6] Parents often notice signs during the first three years of their child's life.[1][6] These signs often develop gradually, though some children with autism experience worsening in their communication and social skills after reaching developmental milestones at a normal pace.[17]
6
+
7
+ Autism is associated with a combination of genetic and environmental factors.[7] Risk factors during pregnancy include certain infections, such as rubella; toxins, including valproic acid, alcohol, cocaine, pesticides, lead, and air pollution; fetal growth restriction; and autoimmune diseases.[18][19][20] Controversies surround other proposed environmental causes; for example, the vaccine hypothesis, which has been disproven.[21] Autism affects information processing in the brain and how nerve cells and their synapses connect and organize; how this occurs is not well understood.[22] The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) combines autism and less severe forms of the condition, including Asperger syndrome and pervasive developmental disorder not otherwise specified (PDD-NOS), into the diagnosis of autism spectrum disorder (ASD).[6][23]
8
+
9
+ Early behavioral interventions or speech therapy can help children with autism gain self-care, social, and communication skills.[9][10] Although there is no known cure,[9] there have been cases of children who recovered.[24] Some autistic adults are unable to live independently.[15] An autistic culture has developed, with some individuals seeking a cure and others believing autism should be accepted as a difference to be accommodated instead of cured.[25][26]
10
+
11
+ Globally, autism is estimated to affect 24.8 million people as of 2015[update].[16] In the 2000s, the number of people affected was estimated at 1–2 per 1,000 people worldwide.[27] In developed countries, about 1.5% of children are diagnosed with ASD as of 2017[update],[28] up from 0.7% in 2000 in the United States.[29] It occurs four-to-five times more often in males than females.[29] The number of people diagnosed has increased dramatically since the 1960s, which may be partly due to changes in diagnostic practice.[27] The question of whether actual rates have increased is unresolved.[27]
12
+
13
+ Autism is a highly variable neurodevelopmental disorder[30] whose symptoms first appear during infancy or childhood and which generally follows a steady course without remission.[31] People with autism may be severely impaired in some respects but average, or even superior, in others.[32] Overt symptoms gradually begin after the age of six months, become established by age two or three years[33] and tend to continue through adulthood, although often in more muted form.[34] It is distinguished by a characteristic triad of symptoms: impairments in social interaction, impairments in communication, and repetitive behavior. Other aspects, such as atypical eating, are also common but are not essential for diagnosis.[35] Individual symptoms of autism occur in the general population and appear not to associate highly, without a sharp line separating pathologically severe from common traits.[36]
14
+
15
+ Social deficits distinguish autism and the related autism spectrum disorders (ASD; see Classification) from other developmental disorders.[34] People with autism have social impairments and often lack the intuition about others that many people take for granted. Noted autistic Temple Grandin described her inability to understand the social communication of neurotypicals, or people with typical neural development, as leaving her feeling "like an anthropologist on Mars".[37]
16
+
17
+ Unusual social development becomes apparent early in childhood. Autistic infants show less attention to social stimuli, smile and look at others less often, and respond less to their own name. Autistic toddlers differ more strikingly from social norms; for example, they have less eye contact and turn-taking, and do not have the ability to use simple movements to express themselves, such as pointing at things.[38] Three- to five-year-old children with autism are less likely to exhibit social understanding, approach others spontaneously, imitate and respond to emotions, communicate nonverbally, and take turns with others. However, they do form attachments to their primary caregivers.[39] Most children with autism display moderately less attachment security than neurotypical children, although this difference disappears in children with higher mental development or less pronounced autistic traits.[40] Older children and adults with ASD perform worse on tests of face and emotion recognition[41] although this may be partly due to a lower ability to define a person's own emotions.[42]
18
+
19
+ Children with high-functioning autism have more intense and frequent loneliness compared to non-autistic peers, despite the common belief that children with autism prefer to be alone. Making and maintaining friendships often proves to be difficult for those with autism. For them, the quality of friendships, not the number of friends, predicts how lonely they feel. Functional friendships, such as those resulting in invitations to parties, may affect the quality of life more deeply.[43]
20
+
21
+ There are many anecdotal reports, but few systematic studies, of aggression and violence in individuals with ASD. The limited data suggest that, in children with intellectual disability, autism is associated with aggression, destruction of property, and meltdowns.[44]
22
+
23
+ About a third to a half of individuals with autism do not develop enough natural speech to meet their daily communication needs.[45] Differences in communication may be present from the first year of life, and may include delayed onset of babbling, unusual gestures, diminished responsiveness, and vocal patterns that are not synchronized with the caregiver. In the second and third years, children with autism have less frequent and less diverse babbling, consonants, words, and word combinations; their gestures are less often integrated with words. Children with autism are less likely to make requests or share experiences, and are more likely to simply repeat others' words (echolalia)[46][47] or reverse pronouns.[48] Joint attention seems to be necessary for functional speech, and deficits in joint attention seem to distinguish infants with ASD.[23] For example, they may look at a pointing hand instead of the pointed-at object,[38][47] and they consistently fail to point at objects in order to comment on or share an experience.[23] Children with autism may have difficulty with imaginative play and with developing symbols into language.[46][47]
24
+
25
+ In a pair of studies, high-functioning children with autism aged 8–15 performed as well as individually matched controls at basic language tasks involving vocabulary and spelling, and autistic adults performed better than the controls. Both autistic groups performed worse than controls at complex language tasks such as figurative language, comprehension and inference. As people are often sized up initially from their basic language skills, these studies suggest that people speaking to autistic individuals are more likely to overestimate what their audience comprehends.[49]
26
+
27
+ Autistic individuals can display many forms of repetitive or restricted behavior, which the Repetitive Behavior Scale-Revised (RBS-R) categorizes as follows.[50]
28
+
29
+ No single repetitive or self-injurious behavior seems to be specific to autism, but autism appears to have an elevated pattern of occurrence and severity of these behaviors.[51]
30
+
31
+ Autistic individuals may have symptoms that are independent of the diagnosis, but that can affect the individual or the family.[35]
32
+ An estimated 0.5% to 10% of individuals with ASD show unusual abilities, ranging from splinter skills such as the memorization of trivia to the extraordinarily rare talents of prodigious autistic savants.[52] Many individuals with ASD show superior skills in perception and attention, relative to the general population.[53] Sensory abnormalities are found in over 90% of those with autism, and are considered core features by some,[54] although there is no good evidence that sensory symptoms differentiate autism from other developmental disorders.[55] Differences are greater for under-responsivity (for example, walking into things) than for over-responsivity (for example, distress from loud noises) or for sensation seeking (for example, rhythmic movements).[56] An estimated 60–80% of autistic people have motor signs that include poor muscle tone, poor motor planning, and toe walking;[54] deficits in motor coordination are pervasive across ASD and are greater in autism proper.[57] Unusual eating behavior occurs in about three-quarters of children with ASD, to the extent that it was formerly a diagnostic indicator. Selectivity is the most common problem, although eating rituals and food refusal also occur.[58]
33
+
34
+ There is tentative evidence that autism occurs more frequently in people with gender dysphoria.[59][60]
35
+
36
+ Gastrointestinal problems are one of the most commonly associated medical disorders in people with autism.[61] These are linked to greater social impairment, irritability, behavior and sleep problems, language impairments and mood changes.[61][62]
37
+
38
+ Parents of children with ASD have higher levels of stress.[38] Siblings of children with ASD report greater admiration of and less conflict with the affected sibling than siblings of unaffected children and were similar to siblings of children with Down syndrome in these aspects of the sibling relationship. However, they reported lower levels of closeness and intimacy than siblings of children with Down syndrome; siblings of individuals with ASD have greater risk of negative well-being and poorer sibling relationships as adults.[63]
39
+
40
+ It has long been presumed that there is a common cause at the genetic, cognitive, and neural levels for autism's characteristic triad of symptoms.[64] However, there is increasing suspicion that autism is instead a complex disorder whose core aspects have distinct causes that often co-occur.[64][65]
41
+
42
+ Autism has a strong genetic basis, although the genetics of autism are complex and it is unclear whether ASD is explained more by rare mutations with major effects, or by rare multigene interactions of common genetic variants.[67][68] Complexity arises due to interactions among multiple genes, the environment, and epigenetic factors which do not change DNA sequencing but are heritable and influence gene expression.[34] Many genes have been associated with autism through sequencing the genomes of affected individuals and their parents.[69] Studies of twins suggest that heritability is 0.7 for autism and as high as 0.9 for ASD, and siblings of those with autism are about 25 times more likely to be autistic than the general population.[54] However, most of the mutations that increase autism risk have not been identified. Typically, autism cannot be traced to a Mendelian (single-gene) mutation or to a single chromosome abnormality, and none of the genetic syndromes associated with ASDs have been shown to selectively cause ASD.[67] Numerous candidate genes have been located, with only small effects attributable to any particular gene.[67] Most loci individually explain less than 1% of cases of autism.[70] The large number of autistic individuals with unaffected family members may result from spontaneous structural variation—such as deletions, duplications or inversions in genetic material during meiosis.[71][72] Hence, a substantial fraction of autism cases may be traceable to genetic causes that are highly heritable but not inherited: that is, the mutation that causes the autism is not present in the parental genome.[66] Autism may be underdiagnosed in women and girls due to an assumption that it is primarily a male condition,[73] but genetic phenomena such as imprinting and X linkage have the ability to raise the frequency and severity of conditions in males, and theories have been put forward for a genetic reason why males are diagnosed more often, such as the imprinted brain theory and the extreme male brain theory.[74][75][76]
+
+ Maternal nutrition and inflammation during preconception and pregnancy influence fetal neurodevelopment. Intrauterine growth restriction is associated with ASD, in both term and preterm infants.[19] Maternal inflammatory and autoimmune diseases may damage fetal tissues, aggravating a genetic problem or damaging the nervous system.[20]
+
+ Exposure to air pollution during pregnancy, especially heavy metals and particulates, may increase the risk of autism.[77][78] Environmental factors that have been claimed without evidence to contribute to or exacerbate autism include certain foods, infectious diseases, solvents, PCBs, phthalates and phenols used in plastic products, pesticides, brominated flame retardants, alcohol, smoking, illicit drugs, vaccines,[27] and prenatal stress. Some, such as the MMR vaccine, have been completely disproven.[79][80][81][82]
+
+ Parents may first become aware of autistic symptoms in their child around the time of a routine vaccination. This has led to unsupported theories blaming vaccine "overload", a vaccine preservative, or the MMR vaccine for causing autism.[83] The latter theory was supported by a litigation-funded study that has since been shown to have been "an elaborate fraud".[84] Although these theories lack convincing scientific evidence and are biologically implausible,[83] parental concern about a potential vaccine link with autism has led to lower rates of childhood immunizations, outbreaks of previously controlled childhood diseases in some countries, and the preventable deaths of several children.[85][86]
+
+ Autism's symptoms result from maturation-related changes in various systems of the brain. How autism occurs is not well understood. Its mechanism can be divided into two areas: the pathophysiology of brain structures and processes associated with autism, and the neuropsychological linkages between brain structures and behaviors.[87] The behaviors appear to have multiple pathophysiologies.[36]
+
+ There is evidence that gut–brain axis abnormalities may be involved.[61][62][88] A 2015 review proposed that immune dysregulation, gastrointestinal inflammation, malfunction of the autonomic nervous system, gut flora alterations, and food metabolites may cause brain neuroinflammation and dysfunction.[62] A 2016 review concluded that enteric nervous system abnormalities might play a role in neurological disorders such as autism. Neural connections and the immune system are a pathway that may allow diseases originating in the intestine to spread to the brain.[88]
+
+ Several lines of evidence point to synaptic dysfunction as a cause of autism.[22] Some rare mutations may lead to autism by disrupting some synaptic pathways, such as those involved with cell adhesion.[89] Gene replacement studies in mice suggest that autistic symptoms are closely related to later developmental steps that depend on activity in synapses and on activity-dependent changes.[90] All known teratogens (agents that cause birth defects) related to the risk of autism appear to act during the first eight weeks from conception, and though this does not exclude the possibility that autism can be initiated or affected later, there is strong evidence that autism arises very early in development.[91]
+
+ Diagnosis is based on behavior, not cause or mechanism.[36][92] Under the DSM-5, autism is characterized by persistent deficits in social communication and interaction across multiple contexts, as well as restricted, repetitive patterns of behavior, interests, or activities. These deficits are present in early childhood, typically before age three, and lead to clinically significant functional impairment.[6] Sample symptoms include lack of social or emotional reciprocity, stereotyped and repetitive use of language or idiosyncratic language, and persistent preoccupation with unusual objects. The disturbance must not be better accounted for by Rett syndrome, intellectual disability or global developmental delay.[6] ICD-10 uses essentially the same definition.[31]
+
+ Several diagnostic instruments are available. Two are commonly used in autism research: the Autism Diagnostic Interview-Revised (ADI-R) is a semistructured parent interview, and the Autism Diagnostic Observation Schedule (ADOS)[93] uses observation and interaction with the child. The Childhood Autism Rating Scale (CARS) is used widely in clinical environments to assess severity of autism based on observation of children.[38] The Diagnostic interview for social and communication disorders (DISCO) may also be used.[94]
+
+ A pediatrician commonly performs a preliminary investigation by taking developmental history and physically examining the child. If warranted, diagnosis and evaluations are conducted with help from ASD specialists, observing and assessing cognitive, communication, family, and other factors using standardized tools, and taking into account any associated medical conditions.[95] A pediatric neuropsychologist is often asked to assess behavior and cognitive skills, both to aid diagnosis and to help recommend educational interventions.[96] A differential diagnosis for ASD at this stage might also consider intellectual disability, hearing impairment, and a specific language impairment[95] such as Landau–Kleffner syndrome.[97] The presence of autism can make it harder to diagnose coexisting psychiatric disorders such as depression.[98]
+
+ Clinical genetics evaluations are often done once ASD is diagnosed, particularly when other symptoms already suggest a genetic cause.[99] Although genetic technology allows clinical geneticists to link an estimated 40% of cases to genetic causes,[100] consensus guidelines in the US and UK are limited to high-resolution chromosome and fragile X testing.[99] A genotype-first model of diagnosis has been proposed, which would routinely assess the genome's copy number variations.[101] As new genetic tests are developed several ethical, legal, and social issues will emerge. Commercial availability of tests may precede adequate understanding of how to use test results, given the complexity of autism's genetics.[102] Metabolic and neuroimaging tests are sometimes helpful, but are not routine.[99]
+
+ ASD can sometimes be diagnosed by age 14 months, although diagnosis becomes increasingly stable over the first three years of life: for example, a one-year-old who meets diagnostic criteria for ASD is less likely than a three-year-old to continue to do so a few years later.[1] In the UK the National Autism Plan for Children recommends at most 30 weeks from first concern to completed diagnosis and assessment, though few cases are handled that quickly in practice.[95] Although the symptoms of autism and ASD begin early in childhood, they are sometimes missed; years later, adults may seek diagnoses to help them or their friends and family understand themselves, to help their employers make adjustments, or in some locations to claim disability living allowances or other benefits. Girls are often diagnosed later than boys.[103]
+
+ Underdiagnosis and overdiagnosis are problems in marginal cases, and much of the recent increase in the number of reported ASD cases is likely due to changes in diagnostic practices. The increasing popularity of drug treatment options and the expansion of benefits has given providers incentives to diagnose ASD, resulting in some overdiagnosis of children with uncertain symptoms. Conversely, the cost of screening and diagnosis and the challenge of obtaining payment can inhibit or delay diagnosis.[104] It is particularly hard to diagnose autism among the visually impaired, partly because some of its diagnostic criteria depend on vision, and partly because autistic symptoms overlap with those of common blindness syndromes or blindisms.[105]
+
+ Autism is one of the five pervasive developmental disorders (PDD), which are characterized by widespread abnormalities of social interactions and communication, and severely restricted interests and highly repetitive behavior.[31] These symptoms do not imply sickness, fragility, or emotional disturbance.[34]
+
+ Of the five PDD forms, Asperger syndrome is closest to autism in signs and likely causes; Rett syndrome and childhood disintegrative disorder share several signs with autism, but may have unrelated causes; PDD not otherwise specified (PDD-NOS; also called atypical autism) is diagnosed when the criteria are not met for a more specific disorder.[106] Unlike with autism, people with Asperger syndrome have no substantial delay in language development.[107] The terminology of autism can be bewildering, with autism, Asperger syndrome and PDD-NOS often called the autism spectrum disorders (ASD)[9] or sometimes the autistic disorders,[108] whereas autism itself is often called autistic disorder, childhood autism, or infantile autism. In this article, autism refers to the classic autistic disorder; in clinical practice, though, autism, ASD, and PDD are often used interchangeably.[99] ASD, in turn, is a subset of the broader autism phenotype, which describes individuals who may not have ASD but do have autistic-like traits, such as avoiding eye contact.[109]
+
+ Autism can also be divided into syndromal and non-syndromal autism; the syndromal autism is associated with severe or profound intellectual disability or a congenital syndrome with physical symptoms, such as tuberous sclerosis.[110] Although individuals with Asperger syndrome tend to perform better cognitively than those with autism, the extent of the overlap between Asperger syndrome, HFA, and non-syndromal autism is unclear.[111]
+
+ Some studies have reported diagnoses of autism in children due to a loss of language or social skills, as opposed to a failure to make progress, typically from 15 to 30 months of age. The validity of this distinction remains controversial; it is possible that regressive autism is a specific subtype,[1][17][46][112] or that there is a continuum of behaviors between autism with and without regression.[113]
+
+ Research into causes has been hampered by the inability to identify biologically meaningful subgroups within the autistic population[114] and by the traditional boundaries between the disciplines of psychiatry, psychology, neurology and pediatrics.[115] Newer technologies such as fMRI and diffusion tensor imaging can help identify biologically relevant phenotypes (observable traits) that can be viewed on brain scans, to help further neurogenetic studies of autism;[116] one example is lowered activity in the fusiform face area of the brain, which is associated with impaired perception of people versus objects.[22] It has been proposed to classify autism using genetics as well as behavior.[117]
+
+ Autism has long been thought to cover a wide spectrum, ranging from individuals with severe impairments—who may be silent, developmentally disabled, and prone to frequent repetitive behavior such as hand flapping and rocking—to high functioning individuals who may have active but distinctly odd social approaches, narrowly focused interests, and verbose, pedantic communication.[118] Because the behavior spectrum is continuous, boundaries between diagnostic categories are necessarily somewhat arbitrary.[54] Sometimes the syndrome is divided into low-, medium- or high-functioning autism (LFA, MFA, and HFA), based on IQ thresholds.[119] Some people have called for an end to the terms "high-functioning" and "low-functioning" due to lack of nuance and the potential for a person's needs or abilities to be overlooked.[120][121]
+
+ About half of parents of children with ASD notice their child's unusual behaviors by age 18 months, and about four-fifths notice by age 24 months.[1] According to an article, failure to meet expected developmental milestones "is an absolute indication to proceed with further evaluations. Delay in referral for such testing may delay early diagnosis and treatment and affect the long-term outcome".[35]
+
+ The United States Preventive Services Task Force in 2016 found it was unclear whether screening was beneficial or harmful among children for whom no concerns have been raised.[123] The Japanese practice is to screen all children for ASD at 18 and 24 months, using autism-specific formal screening tests. In contrast, in the UK, children whose families or doctors recognize possible signs of autism are screened. It is not known which approach is more effective.[22] Screening tools include the Modified Checklist for Autism in Toddlers (M-CHAT), the Early Screening of Autistic Traits Questionnaire, and the First Year Inventory; initial data on M-CHAT and its predecessor, the Checklist for Autism in Toddlers (CHAT), on children aged 18–30 months suggests that it is best used in a clinical setting and that it has low sensitivity (many false-negatives) but good specificity (few false-positives).[1] It may be more accurate to precede these tests with a broadband screener that does not distinguish ASD from other developmental disorders.[124] Screening tools designed for one culture's norms for behaviors like eye contact may be inappropriate for a different culture.[125] Although genetic screening for autism is generally still impractical, it can be considered in some cases, such as children with neurological symptoms and dysmorphic features.[126]
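+
+ The sensitivity and specificity mentioned above are simple ratios of screening outcomes; the following minimal Python sketch uses purely hypothetical counts (not published M-CHAT data) to show how a screening test can miss many true cases (low sensitivity) while raising few false alarms (high specificity).
+
+ # Hypothetical outcomes for 10,000 screened toddlers (illustrative only, not M-CHAT results)
+ true_positives = 40      # children with ASD who were flagged
+ false_negatives = 60     # children with ASD who were missed
+ true_negatives = 9700    # children without ASD who passed
+ false_positives = 200    # children without ASD who were flagged
+ sensitivity = true_positives / (true_positives + false_negatives)  # 0.40: many false negatives
+ specificity = true_negatives / (true_negatives + false_positives)  # ~0.98: few false positives
+ print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")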
+
+ While infection with rubella during pregnancy causes fewer than 1% of cases of autism,[127] vaccination against rubella can prevent many of those cases.[128]
+
+ The main goals when treating children with autism are to lessen associated deficits and family distress, and to increase quality of life and functional independence. In general, higher IQs are correlated with greater responsiveness to treatment and improved treatment outcomes.[130][131] No single treatment is best and treatment is typically tailored to the child's needs.[9] Families and the educational system are the main resources for treatment.[22] Services should be carried out by behavior analysts, special education teachers, speech pathologists, and licensed psychologists. Studies of interventions have methodological problems that prevent definitive conclusions about efficacy.[132] However, the development of evidence-based interventions has advanced in recent years.[130] Although many psychosocial interventions have some positive evidence, suggesting that some form of treatment is preferable to no treatment, the methodological quality of systematic reviews of these studies has generally been poor, their clinical results are mostly tentative, and there is little evidence for the relative effectiveness of treatment options.[133] Intensive, sustained special education programs and behavior therapy early in life can help children acquire self-care, communication, and job skills,[9] and often improve functioning and decrease symptom severity and maladaptive behaviors;[134] claims that intervention by around age three years is crucial are not substantiated.[135] While medications have not been found to help with core symptoms, they may be used for associated symptoms, such as irritability, inattention, or repetitive behavior patterns.[12]
+
+ Educational interventions often used include applied behavior analysis (ABA), developmental models, structured teaching, speech and language therapy, social skills therapy, and occupational therapy, as well as cognitive behavioral interventions in adults without intellectual disability to reduce depression, anxiety, and obsessive-compulsive disorder.[9][136] Among these approaches, interventions either treat autistic features comprehensively, or focus treatment on a specific area of deficit.[130] The quality of research for early intensive behavioral intervention (EIBI), a treatment procedure incorporating over thirty hours per week of the structured type of ABA that is carried out with very young children, is currently low, and more rigorous research designs with larger sample sizes are needed.[137] Two theoretical frameworks outlined for early childhood intervention include structured and naturalistic ABA interventions, and developmental social pragmatic models (DSP).[130] One interventional strategy utilizes a parent training model, which teaches parents how to implement various ABA and DSP techniques, allowing parents to deliver interventions themselves.[130] Various DSP programs have been developed to explicitly deliver intervention systems through at-home parent implementation. Although parent training models are a recent development, these interventions have demonstrated effectiveness in numerous studies and have been evaluated as a probably efficacious mode of treatment.[130]
+
+ Early, intensive ABA therapy has demonstrated effectiveness in enhancing communication and adaptive functioning in preschool children;[9][138] it is also well-established for improving the intellectual performance of that age group.[9][134][138] Similarly, a teacher-implemented intervention that utilizes a more naturalistic form of ABA combined with a developmental social pragmatic approach has been found to be beneficial in improving social-communication skills in young children, although there is less evidence in its treatment of global symptoms.[130] Neuropsychological reports are often poorly communicated to educators, resulting in a gap between what a report recommends and what education is provided.[96] It is not known whether treatment programs for children lead to significant improvements after the children grow up,[134] and the limited research on the effectiveness of adult residential programs shows mixed results.[139] The appropriateness of including children with varying severity of autism spectrum disorders in the general education population is a subject of current debate among educators and researchers.[140]
+
+ Medications may be used to treat ASD symptoms that interfere with integrating a child into home or school when behavioral treatment fails.[10] They may also be used for associated health problems, such as ADHD or anxiety.[10] More than half of US children diagnosed with ASD are prescribed psychoactive drugs or anticonvulsants, with the most common drug classes being antidepressants, stimulants, and antipsychotics.[13][14] The atypical antipsychotic drugs risperidone and aripiprazole are FDA-approved for treating associated aggressive and self-injurious behaviors.[12][34][141] However, their side effects must be weighed against their potential benefits, and people with autism may respond atypically.[12] Side effects, for example, may include weight gain, tiredness, drooling, and aggression.[12] SSRI antidepressants, such as fluoxetine and fluvoxamine, have been shown to be effective in reducing repetitive and ritualistic behaviors, while the stimulant medication methylphenidate is beneficial for some children with co-morbid inattentiveness or hyperactivity.[9] There is scant reliable research about the effectiveness or safety of drug treatments for adolescents and adults with ASD.[142] No known medication relieves autism's core symptoms of social and communication impairments.[143] Experiments in mice have reversed or reduced some symptoms related to autism by replacing or modulating gene function,[90][144] suggesting the possibility of targeting therapies to specific rare mutations known to cause autism.[89][145]
+
+ Although many alternative therapies and interventions are available, few are supported by scientific studies.[41][146] Treatment approaches have little empirical support in quality-of-life contexts, and many programs focus on success measures that lack predictive validity and real-world relevance.[43] Some alternative treatments may place the child at risk. The preference that children with autism have for unconventional foods can lead to reduced bone cortical thickness, an effect that is greater in those on casein-free diets as a consequence of low calcium and vitamin D intake; however, suboptimal bone development in ASD has also been associated with lack of exercise and gastrointestinal disorders.[147] In 2005, botched chelation therapy killed a five-year-old child with autism.[148][149] Chelation is not recommended for people with ASD since the associated risks outweigh any potential benefits.[150] Another alternative medicine practice with no evidence is CEASE therapy, a mixture of homeopathy, supplements, and 'vaccine detoxing'.[151][152]
+
+ Although popularly used as an alternative treatment for people with autism, as of 2018 there is no good evidence to recommend a gluten- and casein-free diet as a standard treatment.[153][154][155] A 2018 review concluded that it may be a therapeutic option for specific groups of children with autism, such as those with known food intolerances or allergies, or with food intolerance markers. The authors analyzed the prospective trials conducted to date that studied the efficacy of the gluten- and casein-free diet in children with ASD (4 in total). All of them compared gluten- and casein-free diet versus normal diet with a control group (2 double-blind randomized controlled trials, 1 double-blind crossover trial, 1 single-blind trial). In two of the studies, whose duration was 12 and 24 months, a significant improvement in ASD symptoms (efficacy rate 50%) was identified. In the other two studies, whose duration was 3 months, no significant effect was observed.[153] The authors concluded that a longer duration of the diet may be necessary to achieve the improvement of the ASD symptoms.[153] Other problems documented in the trials carried out include transgressions of the diet, small sample size, the heterogeneity of the participants and the possibility of a placebo effect.[155][156] In the subset of people who have gluten sensitivity there is limited evidence that suggests that a gluten-free diet may improve some autistic behaviors.[157][158][159]
+
+ A systematic review of interventions to address health outcomes among autistic adults found emerging evidence to support mindfulness-based interventions for improving mental health. This includes decreasing stress, anxiety, ruminating thoughts, anger, and aggression.[136] There is tentative evidence that music therapy may improve social interactions, verbal communication, and non-verbal communication skills.[160] There has been early research looking at hyperbaric treatments in children with autism.[161] Studies on pet therapy have shown positive effects.[162]
+
+ There is no known cure.[9][22] The degree of symptoms can decrease, occasionally to the extent that people lose their diagnosis of ASD;[24] this occurs sometimes after intensive treatment and sometimes not. It is not known how often recovery happens;[134] reported rates in unselected samples have ranged from 3% to 25%.[24] Most children with autism acquire language by age five or younger, though a few have developed communication skills in later years.[163] Many children with autism lack social support, future employment opportunities or self-determination.[43] Although core difficulties tend to persist, symptoms often become less severe with age.[34]
+
+ Few high-quality studies address long-term prognosis. Some adults show modest improvement in communication skills, but a few decline; no study has focused on autism after midlife.[164] Acquiring language before age six, having an IQ above 50, and having a marketable skill all predict better outcomes; independent living is unlikely with severe autism.[165]
+
+ Many individuals with autism face significant obstacles in transitioning to adulthood.[166] Compared to the general population individuals with autism are more likely to be unemployed and to have never had a job. About half of people in their 20s with autism are not employed.[167]
+
+ Most recent reviews tend to estimate a prevalence of 1–2 per 1,000 for autism and close to 6 per 1,000 for ASD as of 2007.[27] A 2016 survey in the United States reported a rate of 25 per 1,000 children for ASD.[168] Globally, autism affects an estimated 24.8 million people as of 2015, while Asperger syndrome affects a further 37.2 million.[16] In 2012, the NHS estimated that the overall prevalence of autism among adults aged 18 years and over in the UK was 1.1%.[169] The rate of PDD-NOS has been estimated at 3.7 per 1,000, Asperger syndrome at roughly 0.6 per 1,000, and childhood disintegrative disorder at 0.02 per 1,000.[170] The CDC estimated a rate of about 1 in 59 children (1.7%) for 2014, an increase from 1 in 68 (1.5%) for 2010.[171]
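+
+ The prevalence estimates above mix several units (rates per 1,000, percentages, and "1 in N" figures); the conversions are simple arithmetic, as this minimal Python sketch shows for the CDC figures quoted in this paragraph.
+
+ # Convert "1 in N" prevalence estimates to percentages and rates per 1,000
+ for year, one_in_n in [(2010, 68), (2014, 59)]:
+     rate = 1 / one_in_n
+     print(f"{year}: 1 in {one_in_n} = {rate:.1%} = {rate * 1000:.0f} per 1,000 children")
+ # Output: 2010: 1 in 68 = 1.5% = 15 per 1,000 children
+ #         2014: 1 in 59 = 1.7% = 17 per 1,000 children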
+
+ The number of reported cases of autism increased dramatically in the 1990s and early 2000s. This increase is largely attributable to changes in diagnostic practices, referral patterns, availability of services, age at diagnosis, and public awareness,[170][172] though unidentified environmental risk factors cannot be ruled out.[21] The available evidence does not rule out the possibility that autism's true prevalence has increased;[170] a real increase would suggest directing more attention and funding toward changing environmental factors instead of continuing to focus on genetics.[173]
+
+ Boys are at higher risk for ASD than girls. The sex ratio averages 4.3:1 and is greatly modified by cognitive impairment: it may be close to 2:1 with intellectual disability and more than 5.5:1 without.[27] Several theories about the higher prevalence in males have been investigated, but the cause of the difference is unconfirmed;[174] one theory is that females are underdiagnosed.[175]
+
+ Although the evidence does not implicate any single pregnancy-related risk factor as a cause of autism, the risk of autism is associated with advanced age in either parent, and with diabetes, bleeding, and use of psychiatric drugs in the mother during pregnancy.[174][176] The risk is greater with older fathers than with older mothers; two potential explanations are the known increase in mutation burden in older sperm, and the hypothesis that men marry later if they carry genetic liability and show some signs of autism.[30] Most professionals believe that race, ethnicity, and socioeconomic background do not affect the occurrence of autism.[177]
+
+ Several other conditions are common in children with autism,[22] including intellectual disability, epilepsy, anxiety disorders, and sleep problems.
+
+ A few examples of autistic symptoms and treatments were described long before autism was named. The Table Talk of Martin Luther, compiled by his notetaker, Mathesius, contains the story of a 12-year-old boy who may have been severely autistic.[190] Luther reportedly thought the boy was a soulless mass of flesh possessed by the devil, and suggested that he be suffocated, although a later critic has cast doubt on the veracity of this report.[191] The earliest well-documented case of autism is that of Hugh Blair of Borgue, as detailed in a 1747 court case in which his brother successfully petitioned to annul Blair's marriage to gain Blair's inheritance.[192] The Wild Boy of Aveyron, a feral child caught in 1798, showed several signs of autism; the medical student Jean Itard treated him with a behavioral program designed to help him form social attachments and to induce speech via imitation.[189]
+
+ The New Latin word autismus (English translation autism) was coined by the Swiss psychiatrist Eugen Bleuler in 1910 as he was defining symptoms of schizophrenia. He derived it from the Greek word autós (αὐτός, meaning "self"), and used it to mean morbid self-admiration, referring to "autistic withdrawal of the patient to his fantasies, against which any influence from outside becomes an intolerable disturbance".[193] A Soviet child psychiatrist, Grunya Sukhareva, described a similar syndrome that was published in Russian in 1925, and in German in 1926.[194]
+
+ The word autism first took its modern sense in 1938 when Hans Asperger of the Vienna University Hospital adopted Bleuler's terminology autistic psychopaths in a lecture in German about child psychology.[195] Asperger was investigating an ASD now known as Asperger syndrome, though for various reasons it was not widely recognized as a separate diagnosis until 1981.[189] Leo Kanner of the Johns Hopkins Hospital first used autism in its modern sense in English when he introduced the label early infantile autism in a 1943 report of 11 children with striking behavioral similarities.[48] Almost all the characteristics described in Kanner's first paper on the subject, notably "autistic aloneness" and "insistence on sameness", are still regarded as typical of the autistic spectrum of disorders.[65] It is not known whether Kanner derived the term independently of Asperger.[196]
+
+ Donald Triplett was the first person diagnosed with autism.[197] He was diagnosed by Kanner after being first examined in 1938, and was labeled as "case 1".[197] Triplett was noted for his savant abilities, particularly being able to name musical notes played on a piano and to mentally multiply numbers. His father, Oliver, described him as socially withdrawn but interested in number patterns, music notes, letters of the alphabet, and U.S. president pictures. By the age of 2, he had the ability to recite the 23rd Psalm and memorized 25 questions and answers from the Presbyterian catechism. He was also interested in creating musical chords.[198]
+
+ Kanner's reuse of autism led to decades of confused terminology like infantile schizophrenia, and child psychiatry's focus on maternal deprivation led to misconceptions of autism as an infant's response to "refrigerator mothers". Starting in the late 1960s autism was established as a separate syndrome.[199]
+
+ As late as the mid-1970s there was little evidence of a genetic role in autism; by 2007 it was believed to be one of the most heritable psychiatric conditions.[200] Although the rise of parent organizations and the destigmatization of childhood ASD have affected how ASD is viewed,[189] parents continue to feel social stigma in situations where their child's autistic behavior is perceived negatively,[201] and many primary care physicians and medical specialists express some beliefs consistent with outdated autism research.[202]
+
+ It took until 1980 for the DSM-III to differentiate autism from childhood schizophrenia. In 1987, the DSM-III-R provided a checklist for diagnosing autism. In May 2013, the DSM-5 was released, updating the classification for pervasive developmental disorders. The grouping of disorders, including PDD-NOS, autism, Asperger syndrome, Rett syndrome, and CDD, has been removed and replaced with the general term of Autism Spectrum Disorders. The two categories that exist are impaired social communication and/or interaction, and restricted and/or repetitive behaviors.[203]
+
+ The Internet has helped autistic individuals bypass nonverbal cues and emotional sharing that they find difficult to deal with, and has given them a way to form online communities and work remotely.[204] Societal and cultural aspects of autism have developed: some in the community seek a cure, while others believe that autism is simply another way of being.[25][26][205]
+
+ An autistic culture has emerged, accompanied by the autistic rights and neurodiversity movements.[206][207][208] Events include World Autism Awareness Day, Autism Sunday, Autistic Pride Day, Autreat, and others.[209][210][211][212] Organizations dedicated to promoting awareness of autism include Autistic Self Advocacy Network, Aspies For Freedom, Autism National Committee, and Autism Society of America. At the same time, some organizations, including Autism Speaks, have been condemned by disability rights organizations for failing to support autistic people.[213] Social-science scholars study those with autism in hopes of learning more about "autism as a culture, transcultural comparisons... and research on social movements."[214] While most autistic individuals do not have savant skills, many have been successful in their fields.[215][216][217]
+
+ The autism rights movement is a social movement within the context of disability rights that emphasizes the concept of neurodiversity, viewing the autism spectrum as a result of natural variations in the human brain rather than a disorder to be cured.[208] The autism rights movement advocates for greater acceptance of autistic behaviors; for therapies that focus on coping skills rather than on imitating the behaviors of those without autism;[218] and for the recognition of the autistic community as a minority group.[218][219] Autism rights or neurodiversity advocates believe that the autism spectrum is genetic and should be accepted as a natural expression of the human genome. This perspective is distinct from two other views: the medical perspective, that autism is caused by a genetic defect and should be addressed by targeting the autism gene(s), and fringe theories that autism is caused by environmental factors such as vaccines.[208] A common criticism against autistic activists is that the majority of them are "high-functioning" or have Asperger syndrome and do not represent the views of "low-functioning" autistic people.[219]
+
+ About half of autistics are unemployed, and one third of those with graduate degrees may be unemployed.[220] Among autistics who find work, most are employed in sheltered settings working for wages below the national minimum.[221] While employers state hiring concerns about productivity and supervision, experienced employers of autistics give positive reports of above average memory and detail orientation as well as a high regard for rules and procedure in autistic employees.[220] A majority of the economic burden of autism is caused by decreased earnings in the job market.[222] Some studies also find decreased earnings among parents who care for autistic children.[223][224]
+
en/4680.html.txt ADDED
@@ -0,0 +1,141 @@
+
+
+ Pluto (minor planet designation: 134340 Pluto) is an icy dwarf planet in the Kuiper belt, a ring of bodies beyond the orbit of Neptune. It was the first and the largest Kuiper belt object to be discovered.
+
+ Pluto was discovered by Clyde Tombaugh in 1930 and declared to be the ninth planet from the Sun. After 1992, its status as a planet was questioned following the discovery of several objects of similar size in the Kuiper belt. In 2005, Eris, a dwarf planet in the scattered disc which is 27% more massive than Pluto, was discovered. This led the International Astronomical Union (IAU) to define the term "planet" formally in 2006, during their 26th General Assembly. That definition excluded Pluto and reclassified it as a dwarf planet.
+
+ It is the ninth-largest and tenth-most-massive known object directly orbiting the Sun. It is the largest known trans-Neptunian object by volume but is less massive than Eris. Like other Kuiper belt objects, Pluto is primarily made of ice and rock and is relatively small—one-sixth the mass of the Moon and one-third its volume. It has a moderately eccentric and inclined orbit during which it ranges from 30 to 49 astronomical units or AU (4.4–7.4 billion km) from the Sun. This means that Pluto periodically comes closer to the Sun than Neptune, but a stable orbital resonance with Neptune prevents them from colliding. Light from the Sun takes 5.5 hours to reach Pluto at its average distance (39.5 AU).
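+
+ The 5.5-hour light-travel time quoted above follows directly from the average distance of 39.5 AU; the short Python sketch below, using rounded physical constants, reproduces the figure.
+
+ # Light-travel time from the Sun to Pluto at its average distance
+ AU_KM = 1.495978707e8        # kilometres per astronomical unit
+ C_KM_S = 299_792.458         # speed of light in km/s
+ seconds = 39.5 * AU_KM / C_KM_S
+ print(f"{seconds / 3600:.1f} hours")   # about 5.5 hours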
+
+ Pluto has five known moons: Charon (the largest, with a diameter just over half that of Pluto), Styx, Nix, Kerberos, and Hydra. Pluto and Charon are sometimes considered a binary system because the barycenter of their orbits does not lie within either body.
+
+ The New Horizons spacecraft performed a flyby of Pluto on July 14, 2015, becoming the first ever, and to date only, spacecraft to do so. During its brief flyby, New Horizons made detailed measurements and observations of Pluto and its moons. In September 2016, astronomers announced that the reddish-brown cap of the north pole of Charon is composed of tholins, organic macromolecules that may be ingredients for the emergence of life, and produced from methane, nitrogen and other gases released from the atmosphere of Pluto and transferred 19,000 km (12,000 mi) to the orbiting moon.
+
+ In the 1840s, Urbain Le Verrier used Newtonian mechanics to predict the position of the then-undiscovered planet Neptune after analyzing perturbations in the orbit of Uranus.[14] Subsequent observations of Neptune in the late 19th century led astronomers to speculate that Uranus's orbit was being disturbed by another planet besides Neptune.
+
+ In 1906, Percival Lowell—a wealthy Bostonian who had founded Lowell Observatory in Flagstaff, Arizona, in 1894—started an extensive project in search of a possible ninth planet, which he termed "Planet X".[15] By 1909, Lowell and William H. Pickering had suggested several possible celestial coordinates for such a planet.[16] Lowell and his observatory conducted his search until his death in 1916, but to no avail. Unknown to Lowell, his surveys had captured two faint images of Pluto on March 19 and April 7, 1915, but they were not recognized for what they were.[16][17] There are fourteen other known precovery observations, with the earliest made by the Yerkes Observatory on August 20, 1909.[18]
+
+ Percival's widow, Constance Lowell, entered into a ten-year legal battle with the Lowell Observatory over her husband's legacy, and the search for Planet X did not resume until 1929.[19] Vesto Melvin Slipher, the observatory director, gave the job of locating Planet X to 23-year-old Clyde Tombaugh, who had just arrived at the observatory after Slipher had been impressed by a sample of his astronomical drawings.[19]
+
+ Tombaugh's task was to systematically image the night sky in pairs of photographs, then examine each pair and determine whether any objects had shifted position. Using a blink comparator, he rapidly shifted back and forth between views of each of the plates to create the illusion of movement of any objects that had changed position or appearance between photographs. On February 18, 1930, after nearly a year of searching, Tombaugh discovered a possible moving object on photographic plates taken on January 23 and 29. A lesser-quality photograph taken on January 21 helped confirm the movement.[20] After the observatory obtained further confirmatory photographs, news of the discovery was telegraphed to the Harvard College Observatory on March 13, 1930.[16] Pluto has yet to complete a full orbit of the Sun since its discovery, as one Plutonian year is 247.68 years long.[21]
+
+ The discovery made headlines around the globe.[22] Lowell Observatory, which had the right to name the new object, received more than 1,000 suggestions from all over the world, ranging from Atlas to Zymal.[23] Tombaugh urged Slipher to suggest a name for the new object quickly before someone else did.[23] Constance Lowell proposed Zeus, then Percival and finally Constance. These suggestions were disregarded.[24]
+
+ The name Pluto, after the Roman god of the underworld, was proposed by Venetia Burney (1918–2009), an eleven-year-old schoolgirl in Oxford, England, who was interested in classical mythology.[25] She suggested it in a conversation with her grandfather Falconer Madan, a former librarian at the University of Oxford's Bodleian Library, who passed the name to astronomy professor Herbert Hall Turner, who cabled it to colleagues in the United States.[25]
+
+ Each member of the Lowell Observatory was allowed to vote on a short-list of three potential names: Minerva (which was already the name for an asteroid), Cronus (which had lost reputation through being proposed by the unpopular astronomer Thomas Jefferson Jackson See), and Pluto. Pluto received every vote.[26] The name was announced on May 1, 1930.[25][27] Upon the announcement, Madan gave Venetia £5 (equivalent to 300 GBP, or 450 USD in 2014)[28] as a reward.[25]
+
+ The final choice of name was helped in part by the fact that the first two letters of Pluto are the initials of Percival Lowell. Pluto's astronomical symbol (♇, Unicode U+2647) was then created as a monogram constructed from the letters "PL".[29] Pluto's astrological symbol resembles that of Neptune, but has a circle in place of the middle prong of the trident.
+
+ The name was soon embraced by wider culture. In 1930, Walt Disney was apparently inspired by it when he introduced for Mickey Mouse a canine companion named Pluto, although Disney animator Ben Sharpsteen could not confirm why the name was given.[30] In 1941, Glenn T. Seaborg named the newly created element plutonium after Pluto, in keeping with the tradition of naming elements after newly discovered planets, following uranium, which was named after Uranus, and neptunium, which was named after Neptune.[31]
+
+ Most languages use the name "Pluto" in various transliterations.[h] In Japanese, Houei Nojiri suggested the translation Meiōsei (冥王星, "Star of the King (God) of the Underworld"), and this was borrowed into Chinese and Korean. Vietnamese instead uses "Sao Diêm Vương", derived from the Chinese term 閻王 (Yánwáng, "King of Hell"), because "minh" is a homophone for the Sino-Vietnamese words for "dark" (冥) and "bright" (明), which would make the direct translation ambiguous.[32][33][34] Some Indian languages use the name Pluto, but others, such as Hindi, use the name of Yama, the God of Death in Hindu and Buddhist mythology.[33] Polynesian languages also tend to use the indigenous god of the underworld, as in Māori Whiro.[33]
+
+ Once Pluto was found, its faintness and lack of a resolvable disc cast doubt on the idea that it was Lowell's Planet X.[15] Estimates of Pluto's mass were revised downward throughout the 20th century.[35]
+
+ Astronomers initially calculated its mass based on its presumed effect on Neptune and Uranus. In 1931, Pluto was calculated to be roughly the mass of Earth, with further calculations in 1948 bringing the mass down to roughly that of Mars.[37][39] In 1976, Dale Cruikshank, Carl Pilcher and David Morrison of the University of Hawaii calculated Pluto's albedo for the first time, finding that it matched that for methane ice; this meant Pluto had to be exceptionally luminous for its size and therefore could not be more than 1 percent the mass of Earth.[40] (Pluto's albedo is 1.4–1.9 times that of Earth.[2])
+
+ In 1978, the discovery of Pluto's moon Charon allowed the measurement of Pluto's mass for the first time: roughly 0.2% that of Earth, and far too small to account for the discrepancies in the orbit of Uranus. Subsequent searches for an alternative Planet X, notably by Robert Sutton Harrington,[43] failed. In 1992, Myles Standish used data from Voyager 2's flyby of Neptune in 1989, which had revised the estimates of Neptune's mass downward by 0.5%—an amount comparable to the mass of Mars—to recalculate its gravitational effect on Uranus. With the new figures added in, the discrepancies, and with them the need for a Planet X, vanished.[44] Today, the majority of scientists agree that Planet X, as Lowell defined it, does not exist.[45] Lowell had made a prediction of Planet X's orbit and position in 1915 that was fairly close to Pluto's actual orbit and its position at that time; Ernest W. Brown concluded soon after Pluto's discovery that this was a coincidence.[46]
+
+ From 1992 onward, many bodies were discovered orbiting in the same volume as Pluto, showing that Pluto is part of a population of objects called the Kuiper belt. This made its official status as a planet controversial, with many questioning whether Pluto should be considered together with or separately from its surrounding population. Museum and planetarium directors occasionally created controversy by omitting Pluto from planetary models of the Solar System. In February 2000 the Hayden Planetarium in New York City displayed a Solar System model of only eight planets, which made headlines almost a year later.[47]
+
+ Ceres, Pallas, Juno and Vesta lost their planet status after the discovery of many other asteroids. Similarly, objects increasingly closer in size to Pluto were discovered in the Kuiper belt region. On July 29, 2005, astronomers at Caltech announced the discovery of a new trans-Neptunian object, Eris, which was substantially more massive than Pluto and the most massive object discovered in the Solar System since Triton in 1846. Its discoverers and the press initially called it the tenth planet, although there was no official consensus at the time on whether to call it a planet.[48] Others in the astronomical community considered the discovery the strongest argument for reclassifying Pluto as a minor planet.[49]
+
+ The debate came to a head in August 2006, with an IAU resolution that created an official definition for the term "planet". According to this resolution, there are three conditions for an object in the Solar System to be considered a planet: it must be in orbit around the Sun; it must have sufficient mass to assume hydrostatic equilibrium (a nearly round shape); and it must have "cleared the neighbourhood" around its orbit.
+
+ Pluto fails to meet the third condition.[52] Its mass is substantially less than the combined mass of the other objects in its orbit: 0.07 times, in contrast to Earth, which is 1.7 million times the remaining mass in its orbit (excluding the moon).[53][51] The IAU further decided that bodies that, like Pluto, meet criteria 1 and 2, but do not meet criterion 3 would be called dwarf planets. In September 2006, the IAU included Pluto, and Eris and its moon Dysnomia, in their Minor Planet Catalogue, giving them the official minor planet designations "(134340) Pluto", "(136199) Eris", and "(136199) Eris I Dysnomia".[54] Had Pluto been included upon its discovery in 1930, it would have likely been designated 1164, following 1163 Saga, which was discovered a month earlier.[55]
+
+ There has been some resistance within the astronomical community toward the reclassification.[56][57][58] Alan Stern, principal investigator with NASA's New Horizons mission to Pluto, derided the IAU resolution, stating that "the definition stinks, for technical reasons".[59] Stern contended that, by the terms of the new definition, Earth, Mars, Jupiter, and Neptune, all of which share their orbits with asteroids, would be excluded.[60] He argued that all big spherical moons, including the Moon, should likewise be considered planets.[61] He also stated that because less than five percent of astronomers voted for it, the decision was not representative of the entire astronomical community.[60] Marc W. Buie, then at the Lowell Observatory, petitioned against the definition.[62] Others have supported the IAU. Mike Brown, the astronomer who discovered Eris, said "through this whole crazy, circus-like procedure, somehow the right answer was stumbled on. It's been a long time coming. Science is self-correcting eventually, even when strong emotions are involved."[63]
+
+ Public reaction to the IAU decision was mixed. A resolution introduced in the California State Assembly facetiously called the IAU decision a "scientific heresy".[64] The New Mexico House of Representatives passed a resolution in honor of Tombaugh, a longtime resident of that state, that declared that Pluto will always be considered a planet while in New Mexican skies and that March 13, 2007, was Pluto Planet Day.[65][66] The Illinois Senate passed a similar resolution in 2009, on the basis that Clyde Tombaugh, the discoverer of Pluto, was born in Illinois. The resolution asserted that Pluto was "unfairly downgraded to a 'dwarf' planet" by the IAU.[67] Some members of the public have also rejected the change, citing the disagreement within the scientific community on the issue, or for sentimental reasons, maintaining that they have always known Pluto as a planet and will continue to do so regardless of the IAU decision.[68]
+
+ In 2006, in its 17th annual words-of-the-year vote, the American Dialect Society voted plutoed as the word of the year. To "pluto" is to "demote or devalue someone or something".[69]
+
+ Researchers on both sides of the debate gathered in August 2008, at the Johns Hopkins University Applied Physics Laboratory for a conference that included back-to-back talks on the current IAU definition of a planet.[70] Entitled "The Great Planet Debate",[71] the conference published a post-conference press release indicating that scientists could not come to a consensus about the definition of planet.[72] In June 2008, the IAU had announced in a press release that the term "plutoid" would henceforth be used to refer to Pluto and other objects that have an orbital semi-major axis greater than that of Neptune and enough mass to be of near-spherical shape.[73][74][75]
+
+ Pluto's orbital period is currently about 248 years. Its orbital characteristics are substantially different from those of the planets, which follow nearly circular orbits around the Sun close to a flat reference plane called the ecliptic. In contrast, Pluto's orbit is moderately inclined relative to the ecliptic (over 17°) and moderately eccentric (elliptical). This eccentricity means a small region of Pluto's orbit lies closer to the Sun than Neptune's. The Pluto–Charon barycenter came to perihelion on September 5, 1989,[3][i] and was last closer to the Sun than Neptune between February 7, 1979, and February 11, 1999.[76]
+
+ In the long term, Pluto's orbit is chaotic. Computer simulations can be used to predict its position for several million years (both forward and backward in time), but after intervals longer than the Lyapunov time of 10–20 million years, calculations become speculative: Pluto is sensitive to immeasurably small details of the Solar System, hard-to-predict factors that will gradually change Pluto's position in its orbit.[77][78]
+
+ The semi-major axis of Pluto's orbit varies between about 39.3 and 39.6 au with a period of about 19,951 years, corresponding to an orbital period varying between 246 and 249 years. The semi-major axis and period are presently getting longer.[79]
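+
+ The quoted period range follows from the semi-major axis range via Kepler's third law (P² = a³ with P in years and a in astronomical units); a minimal Python check reproduces the figures.
+
+ # Kepler's third law: orbital period in years from semi-major axis in AU
+ for a_au in (39.3, 39.6):
+     period_years = a_au ** 1.5
+     print(f"a = {a_au} AU  ->  P = {period_years:.0f} years")
+ # a = 39.3 AU  ->  P = 246 years
+ # a = 39.6 AU  ->  P = 249 years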
+
+ Despite Pluto's orbit appearing to cross that of Neptune when viewed from directly above, the two objects' orbits are aligned so that they can never collide or even approach closely.
+
+ The two orbits do not intersect. When Pluto is closest to the Sun, and hence closest to Neptune's orbit as viewed from above, it is also the farthest above Neptune's path. Pluto's orbit passes about 8 AU above that of Neptune, preventing a collision.[80][81][82]
+
+ This alone is not enough to protect Pluto; perturbations from the planets (especially Neptune) could alter Pluto's orbit (such as its orbital precession) over millions of years so that a collision could be possible. However, Pluto is also protected by its 2:3 orbital resonance with Neptune: for every two orbits that Pluto makes around the Sun, Neptune makes three. Each cycle lasts about 495 years. This pattern is such that, in each 495-year cycle, the first time Pluto is near perihelion, Neptune is over 50° behind Pluto. By Pluto's second perihelion, Neptune will have completed a further one and a half of its own orbits, and so will be nearly 130° ahead of Pluto. Pluto and Neptune's minimum separation is over 17 AU, which is greater than Pluto's minimum separation from Uranus (11 AU).[82] The minimum separation between Pluto and Neptune actually occurs near the time of Pluto's aphelion.[79]
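+
+ The 495-year cycle is simply the span in which two Pluto orbits line up with three Neptune orbits; the following Python sketch, using approximate orbital periods of about 248 and 165 years, shows the arithmetic.
+
+ # 2:3 mean-motion resonance: two Pluto orbits take about as long as three Neptune orbits
+ PLUTO_PERIOD_YEARS = 247.9
+ NEPTUNE_PERIOD_YEARS = 164.8
+ print(f"2 x Pluto orbits:   {2 * PLUTO_PERIOD_YEARS:.0f} years")   # ~496 years
+ print(f"3 x Neptune orbits: {3 * NEPTUNE_PERIOD_YEARS:.0f} years") # ~494 years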
+
+ The 2:3 resonance between the two bodies is highly stable and has been preserved over millions of years.[83] This prevents their orbits from changing relative to one another, and so the two bodies can never pass near each other. Even if Pluto's orbit were not inclined, the two bodies could never collide.[82] The long term stability of the mean-motion resonance is due to phase protection. If Pluto's period is slightly shorter than 3/2 of Neptune, its orbit relative to Neptune will drift, causing it to make closer approaches behind Neptune's orbit. The strong gravitational pull between the two causes angular momentum to be transferred to Pluto, at Neptune's expense. This moves Pluto into a slightly larger orbit, where it travels slightly more slowly, according to Kepler's third law. After many such repetitions, Pluto is sufficiently slowed, and Neptune sufficiently sped up, that Pluto's orbit relative to Neptune drifts in the opposite direction until the process is reversed. The whole process takes about 20,000 years to complete.[82][83][84]
+
+ Numerical studies have shown that over millions of years, the general nature of the alignment between the orbits of Pluto and Neptune does not change.[80][79] There are several other resonances and interactions that enhance Pluto's stability. These arise principally from two additional mechanisms (besides the 2:3 mean-motion resonance).
+
+ First, Pluto's argument of perihelion, the angle between the point where it crosses the ecliptic and the point where it is closest to the Sun, librates around 90°.[79] This means that when Pluto is closest to the Sun, it is at its farthest above the plane of the Solar System, preventing encounters with Neptune. This is a consequence of the Kozai mechanism,[80] which relates the eccentricity of an orbit to its inclination to a larger perturbing body—in this case Neptune. Relative to Neptune, the amplitude of libration is 38°, and so the angular separation of Pluto's perihelion to the orbit of Neptune is always greater than 52° (90°–38°). The closest such angular separation occurs every 10,000 years.[83]
+
+ Second, the longitudes of ascending nodes of the two bodies—the points where they cross the ecliptic—are in near-resonance with the above libration. When the two longitudes are the same—that is, when one could draw a straight line through both nodes and the Sun—Pluto's perihelion lies exactly at 90°, and hence it comes closest to the Sun when it is highest above Neptune's orbit. This is known as the 1:1 superresonance. All the Jovian planets, particularly Jupiter, play a role in the creation of the superresonance.[80]
+
+ In 2012, it was hypothesized that 15810 Arawn could be a quasi-satellite of Pluto, a specific type of co-orbital configuration.[85] According to the hypothesis, the object would be a quasi-satellite of Pluto for about 350,000 years out of every two-million-year period.[85][86] Measurements made by the New Horizons spacecraft in 2015 made it possible to calculate the orbit of Arawn more accurately.[87] These calculations confirm the overall dynamics described in the hypothesis.[88] However, it is not agreed upon among astronomers whether Arawn should be classified as a quasi-satellite of Pluto based on this motion, since its orbit is primarily controlled by Neptune with only occasional smaller perturbations caused by Pluto.[89][87][88]
+
+ Pluto's rotation period, its day, is equal to 6.387 Earth days.[2][90] Like Uranus, Pluto rotates on its "side" in its orbital plane, with an axial tilt of 120°, and so its seasonal variation is extreme; at its solstices, one-fourth of its surface is in continuous daylight, whereas another fourth is in continuous darkness.[91] The reason for this unusual orientation has been debated. Research from the University of Arizona has suggested that it may be due to the way that a body's spin always adjusts to minimise energy: the body reorients itself so that extraneous mass moves towards the equator while regions lacking mass drift towards the poles, a process called polar wander.[92] According to a paper from the University of Arizona, this could be caused by masses of frozen nitrogen building up in shadowed areas of the dwarf planet. These masses would cause the body to reorient itself, leading to its unusual axial tilt of 120°. The buildup of nitrogen is due to Pluto's vast distance from the Sun. At the equator, temperatures can drop to −240 °C (−400.0 °F; 33.1 K), causing nitrogen to freeze as water would freeze on Earth. The same effect seen on Pluto would be observed on Earth were the Antarctic ice sheet several times larger.[93]
+
+ The plains on Pluto's surface are composed of more than 98 percent nitrogen ice, with traces of methane and carbon monoxide.[94] Nitrogen and carbon monoxide are most abundant on the anti-Charon face of Pluto (around 180° longitude, where Tombaugh Regio's western lobe, Sputnik Planitia, is located), whereas methane is most abundant near 300° east.[95] The mountains are made of water ice.[96] Pluto's surface is quite varied, with large differences in both brightness and color.[97] Pluto is one of the most contrastive bodies in the Solar System, with as much contrast as Saturn's moon Iapetus.[98] The color varies from charcoal black, to dark orange and white.[99] Pluto's color is more similar to that of Io with slightly more orange and significantly less red than Mars.[100] Notable geographical features include Tombaugh Regio, or the "Heart" (a large bright area on the side opposite Charon), Cthulhu Macula,[6] or the "Whale" (a large dark area on the trailing hemisphere), and the "Brass Knuckles" (a series of equatorial dark areas on the leading hemisphere).
80
+
81
+ Sputnik Planitia, the western lobe of the "Heart", is a 1,000 km-wide basin of frozen nitrogen and carbon monoxide ices, divided into polygonal cells, which are interpreted as convection cells that carry floating blocks of water ice crust and sublimation pits towards their margins;[101][102][103] there are obvious signs of glacial flows both into and out of the basin.[104][105] It has no craters that were visible to New Horizons, indicating that its surface is less than 10 million years old.[106] More recent studies estimate the age of the surface at about 180,000 (+90,000/−40,000) years.[107]
82
+ The New Horizons science team summarized initial findings as "Pluto displays a surprisingly wide variety of geological landforms, including those resulting from glaciological and surface–atmosphere interactions as well as impact, tectonic, possible cryovolcanic, and mass-wasting processes."[7]
83
+
84
+ In the western parts of Sputnik Planitia there are fields of transverse dunes formed by winds blowing from the center of Sputnik Planitia toward the surrounding mountains. The dune wavelengths are in the range of 0.4–1 km, and the dunes likely consist of methane particles 200–300 μm in size.[108]
85
+
86
+ Pluto's density is 1.860±0.013 g/cm3.[7] Because the decay of radioactive elements would eventually heat the ices enough for the rock to separate from them, scientists expect that Pluto's internal structure is differentiated, with the rocky material having settled into a dense core surrounded by a mantle of water ice. The pre–New Horizons estimate for the diameter of the core was 1700 km, 70% of Pluto's diameter.[109] It is possible that such heating continues today, creating a subsurface ocean of liquid water 100 to 180 km thick at the core–mantle boundary.[109][110][111] In September 2016, scientists at Brown University simulated the impact thought to have formed Sputnik Planitia, and showed that it might have been the result of liquid water upwelling from below after the collision, implying the existence of a subsurface ocean at least 100 km deep.[112] Pluto has no magnetic field.[113] In June 2020, astronomers reported evidence that Pluto may have had a subsurface ocean, and consequently may have been habitable, when it was first formed.[114][115]
87
+
88
+ Pluto's diameter is 2376.6±3.2 km[5] and its mass is (1.303±0.003)×10²² kg, 17.7% that of the Moon (0.22% that of Earth).[123] Its surface area is 1.779×10⁷ km², or roughly the same surface area as Russia. Its surface gravity is 0.063 g (compared to 1 g for Earth and 0.17 g for the Moon).
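These figures are mutually consistent; a minimal Python sketch, assuming only the standard gravitational constant in addition to the diameter and mass quoted above:

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2 (standard value, not from this text)
M = 1.303e22             # Pluto's mass in kg, as quoted above
R = 2376.6e3 / 2         # Pluto's radius in meters, half the quoted diameter

area_km2 = 4 * math.pi * R**2 / 1e6      # sphere surface area, converted to km^2
g_surface = G * M / R**2                 # surface gravity, m/s^2

print(f"surface area    ≈ {area_km2:.3e} km^2")   # ≈ 1.77e7 km^2, close to the quoted value
print(f"surface gravity ≈ {g_surface:.2f} m/s^2 ≈ {g_surface / 9.81:.3f} g")   # ≈ 0.063 g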
89
+
90
+ The discovery of Pluto's satellite Charon in 1978 enabled a determination of the mass of the Pluto–Charon system by application of Newton's formulation of Kepler's third law. Observations of Pluto in occultation with Charon allowed scientists to establish Pluto's diameter more accurately, whereas the invention of adaptive optics allowed them to determine its shape more accurately.[124]
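A minimal sketch of that mass determination, assuming a Pluto–Charon separation of about 19,600 km (a commonly quoted approximate value, not given in this text) together with the 6.387-day period mentioned earlier (Pluto's rotation period, which equals Charon's orbital period):

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2 (standard value)
a = 19.6e6               # assumed Pluto-Charon separation in meters (~19,600 km; not from this text)
P = 6.387 * 86400        # Charon's orbital period in seconds

M_system = 4 * math.pi**2 * a**3 / (G * P**2)   # Kepler's third law solved for total mass
print(f"Pluto + Charon system mass ≈ {M_system:.2e} kg")
# ≈ 1.46e22 kg, consistent with the quoted Pluto mass of 1.303e22 kg plus a Charon of roughly 1.6e21 kg.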
91
+
92
+ With less than 0.2 lunar masses, Pluto is much less massive than the terrestrial planets, and also less massive than seven moons: Ganymede, Titan, Callisto, Io, the Moon, Europa, and Triton. The mass is much less than thought before Charon was discovered.
93
+
94
+ Pluto is more than twice the diameter and a dozen times the mass of Ceres, the largest object in the asteroid belt. It is less massive than the dwarf planet Eris, a trans-Neptunian object discovered in 2005, though Pluto has a larger diameter of 2376.6 km[5] compared to Eris's approximate diameter of 2326 km.[125]
95
+
96
+ Determinations of Pluto's size had been complicated by its atmosphere,[119] and hydrocarbon haze.[117] In March 2014, Lellouch, de Bergh et al. published findings regarding methane mixing ratios in Pluto's atmosphere consistent with a Plutonian diameter greater than 2360 km, with a "best guess" of 2368 km.[121] On July 13, 2015, images from NASA's New Horizons mission Long Range Reconnaissance Imager (LORRI), along with data from the other instruments, determined Pluto's diameter to be 2,370 km (1,470 mi),[125][126] which was later revised to be 2,372 km (1,474 mi) on July 24,[122] and later to 2374±8 km.[7] Using radio occultation data from the New Horizons Radio Science Experiment (REX), the diameter was found to be 2376.6±3.2 km.[5]
97
+
98
+ Pluto has a tenuous atmosphere consisting of nitrogen (N2), methane (CH4), and carbon monoxide (CO), which are in equilibrium with their ices on Pluto's surface.[127][128] According to the measurements by New Horizons, the surface pressure is about 1 Pa (10 μbar),[7] roughly 100,000 to one million times less than Earth's atmospheric pressure. It was initially thought that, as Pluto moves away from the Sun, its atmosphere should gradually freeze onto the surface; studies of New Horizons data and ground-based occultations show that Pluto's atmospheric density increases, and that it likely remains gaseous throughout Pluto's orbit.[129][130] New Horizons observations showed the atmospheric escape of nitrogen to be 10,000 times less than expected.[130] Alan Stern has contended that even a small increase in Pluto's surface temperature can lead to exponential increases in Pluto's atmospheric density; from 18 hPa to as much as 280 hPa (three times that of Mars to a quarter that of the Earth). At such densities, nitrogen could flow across the surface as liquid.[130] Just as sweat cools the body as it evaporates from the skin, the sublimation of Pluto's atmosphere cools its surface.[131] The presence of atmospheric gases was traced up to an altitude of 1670 kilometers; the atmosphere does not have a sharp upper boundary.
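The quoted ratio to Earth's pressure is simple division; a short sketch, assuming Earth's standard sea-level pressure of 101,325 Pa and taking 0.1–1 Pa as the range implied by the stated factor:

p_earth = 101_325.0      # Earth's standard sea-level pressure in Pa (assumed standard value)

for p_pluto in (1.0, 0.1):   # 1 Pa from New Horizons; 0.1 Pa is the lower value implied by the quoted factor
    print(f"Pluto at {p_pluto} Pa -> Earth's pressure is ≈ {p_earth / p_pluto:,.0f} times higher")
# ≈ 100,000 times at 1 Pa and ≈ 1,000,000 times at 0.1 Pa, matching the quoted range.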
99
+
100
+ The presence of methane, a powerful greenhouse gas, in Pluto's atmosphere creates a temperature inversion, with the average temperature of its atmosphere tens of degrees warmer than its surface,[132] though observations by New Horizons have revealed Pluto's upper atmosphere to be far colder than expected (70 K, as opposed to about 100 K).[130] Pluto's atmosphere is divided into roughly 20 regularly spaced haze layers up to 150 km high,[7] thought to be the result of pressure waves created by airflow across Pluto's mountains.[130]
101
+
102
+ Pluto has five known natural satellites. The closest to Pluto is Charon. First identified in 1978 by astronomer James Christy, Charon is the only moon of Pluto that may be in hydrostatic equilibrium; Charon's mass is sufficient to cause the barycenter of the Pluto–Charon system to be outside Pluto. Beyond Charon there are four much smaller circumbinary moons. In order of distance from Pluto they are Styx, Nix, Kerberos, and Hydra. Nix and Hydra were both discovered in 2005,[133] Kerberos was discovered in 2011,[134] and Styx was discovered in 2012.[135] The satellites' orbits are circular (eccentricity < 0.006) and coplanar with Pluto's equator (inclination < 1°),[136][137] and therefore tilted approximately 120° relative to Pluto's orbit. The Plutonian system is highly compact: the five known satellites orbit within the inner 3% of the region where prograde orbits would be stable.[138]
103
+
104
+ The orbital periods of all Pluto's moons are linked in a system of orbital resonances and near resonances.[137][139] When precession is accounted for, the orbital periods of Styx, Nix, and Hydra are in an exact 18:22:33 ratio.[137] There is a sequence of approximate ratios, 3:4:5:6, between the periods of Styx, Nix, Kerberos, and Hydra with that of Charon; the ratios become closer to being exact the further out the moons are.[137][140]
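A short sketch of the approximate 3:4:5:6 ratios, using commonly quoted orbital periods in days (assumed values, not taken from this text):

# Approximate orbital periods in days (assumed commonly quoted values).
periods = {"Charon": 6.3872, "Styx": 20.16, "Nix": 24.85, "Kerberos": 32.17, "Hydra": 38.20}

for name in ("Styx", "Nix", "Kerberos", "Hydra"):
    print(f"{name:8s} P / P_Charon ≈ {periods[name] / periods['Charon']:.2f}")
# Prints ≈ 3.16, 3.89, 5.04, 5.98 - near 3:4:5:6, and closer to exact for the outer moons, as noted above.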
105
+
106
+ The Pluto–Charon system is one of the few in the Solar System whose barycenter lies outside the primary body; the Patroclus–Menoetius system is a smaller example, and the Sun–Jupiter system is the only larger one.[141] The similarity in size of Charon and Pluto has prompted some astronomers to call it a double dwarf planet.[142] The system is also unusual among planetary systems in that each is tidally locked to the other, which means that Pluto and Charon always have the same hemisphere facing each other. From any position on either body, the other is always at the same position in the sky, or always obscured.[143] This also means that the rotation period of each is equal to the time it takes the entire system to rotate around its barycenter.[90]
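Why the barycenter falls outside Pluto follows from a two-body center-of-mass calculation; in the sketch below, Charon's mass and the Pluto–Charon separation are assumed approximate values not given in this text:

m_pluto = 1.303e22        # kg, as quoted above
m_charon = 1.59e21        # kg, assumed approximate value for Charon (not from this text)
separation_km = 19_600    # assumed Pluto-Charon distance in km (not from this text)
r_pluto_km = 2376.6 / 2   # Pluto's radius from the quoted diameter

barycenter_km = separation_km * m_charon / (m_pluto + m_charon)   # distance of barycenter from Pluto's center
print(f"barycenter ≈ {barycenter_km:.0f} km from Pluto's center; Pluto's radius ≈ {r_pluto_km:.0f} km")
# ≈ 2,100 km versus ≈ 1,188 km, so the barycenter lies well outside Pluto itself.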
107
+
108
+ In 2007, observations by the Gemini Observatory of patches of ammonia hydrates and water crystals on the surface of Charon suggested the presence of active cryo-geysers.[144]
109
+
110
+ Pluto's moons are hypothesized to have been formed by a collision between Pluto and a similar-sized body, early in the history of the Solar System. The collision released material that consolidated into the moons around Pluto.[145]
111
+
112
+ Pluto's origin and identity had long puzzled astronomers. One early hypothesis was that Pluto was an escaped moon of Neptune,[147] knocked out of orbit by its largest current moon, Triton. This idea was eventually rejected after dynamical studies showed it to be impossible because Pluto never approaches Neptune in its orbit.[148]
113
+
114
+ Pluto's true place in the Solar System began to reveal itself only in 1992, when astronomers began to find small icy objects beyond Neptune that were similar to Pluto not only in orbit but also in size and composition. This trans-Neptunian population is thought to be the source of many short-period comets. Pluto is now known to be the largest member of the Kuiper belt,[j] a stable belt of objects located between 30 and 50 AU from the Sun. As of 2011, surveys of the Kuiper belt to magnitude 21 were nearly complete and any remaining Pluto-sized objects are expected to be beyond 100 AU from the Sun.[149] Like other Kuiper-belt objects (KBOs), Pluto shares features with comets; for example, the solar wind is gradually blowing Pluto's surface into space.[150] It has been claimed that if Pluto were placed as near to the Sun as Earth, it would develop a tail, as comets do.[151] This claim has been disputed with the argument that Pluto's escape velocity is too high for this to happen.[152] It has been proposed that Pluto may have formed as a result of the agglomeration of numerous comets and Kuiper-belt objects.[153][154]
115
+
116
+ Though Pluto is the largest Kuiper belt object discovered,[117] Neptune's moon Triton, which is slightly larger than Pluto, is similar to it both geologically and atmospherically, and is thought to be a captured Kuiper belt object.[155] Eris (see above) is about the same size as Pluto (though more massive) but is not strictly considered a member of the Kuiper belt population. Rather, it is considered a member of a linked population called the scattered disc.
117
+
118
+ A large number of Kuiper belt objects, like Pluto, are in a 2:3 orbital resonance with Neptune. KBOs with this orbital resonance are called "plutinos", after Pluto.[156]
119
+
120
+ Like other members of the Kuiper belt, Pluto is thought to be a residual planetesimal; a component of the original protoplanetary disc around the Sun that failed to fully coalesce into a full-fledged planet. Most astronomers agree that Pluto owes its current position to a sudden migration undergone by Neptune early in the Solar System's formation. As Neptune migrated outward, it approached the objects in the proto-Kuiper belt, setting one in orbit around itself (Triton), locking others into resonances, and knocking others into chaotic orbits. The objects in the scattered disc, a dynamically unstable region overlapping the Kuiper belt, are thought to have been placed in their current positions by interactions with Neptune's migrating resonances.[157] A computer model created in 2004 by Alessandro Morbidelli of the Observatoire de la Côte d'Azur in Nice suggested that the migration of Neptune into the Kuiper belt may have been triggered by the formation of a 1:2 resonance between Jupiter and Saturn, which created a gravitational push that propelled both Uranus and Neptune into higher orbits and caused them to switch places, ultimately doubling Neptune's distance from the Sun. The resultant expulsion of objects from the proto-Kuiper belt could also explain the Late Heavy Bombardment 600 million years after the Solar System's formation and the origin of the Jupiter trojans.[158] It is possible that Pluto had a near-circular orbit about 33 AU from the Sun before Neptune's migration perturbed it into a resonant capture.[159] The Nice model requires that there were about a thousand Pluto-sized bodies in the original planetesimal disk, which included Triton and Eris.[158]
121
+
122
+ Pluto's distance from Earth makes its in-depth study and exploration difficult. On July 14, 2015, NASA's New Horizons space probe flew through the Pluto system, providing much information about it.[160]
123
+
124
+ Pluto's visual apparent magnitude averages 15.1, brightening to 13.65 at perihelion.[2] To see it, a telescope is required, with an aperture of around 30 cm (12 in) being desirable.[161] It looks star-like, without a visible disk even in large telescopes, because its angular diameter is only 0.11".
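The 0.11" figure follows from the small-angle formula; the sketch below assumes a representative Earth–Pluto distance of about 30 AU, roughly its distance near the 1989 perihelion (an assumption, since the distance varies along the orbit):

AU_KM = 1.496e8              # kilometers per astronomical unit (standard value)
ARCSEC_PER_RADIAN = 206_265

d_pluto_km = 2376.6          # diameter quoted above
dist_km = 30 * AU_KM         # assumed representative Earth-Pluto distance (~30 AU)

theta = ARCSEC_PER_RADIAN * d_pluto_km / dist_km   # small-angle approximation
print(f"angular diameter ≈ {theta:.2f} arcseconds")   # ≈ 0.11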
125
+
126
+ The earliest maps of Pluto, made in the late 1980s, were brightness maps created from close observations of eclipses by its largest moon, Charon. Observations were made of the change in the total average brightness of the Pluto–Charon system during the eclipses. For example, eclipsing a bright spot on Pluto makes a bigger total brightness change than eclipsing a dark spot. Computer processing of many such observations can be used to create a brightness map. This method can also track changes in brightness over time.[162][163]
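The idea can be illustrated with a toy least-squares inversion, in which each simulated eclipse hides a known fraction of a few surface strips and the resulting brightness dips are combined to recover the strip brightnesses; this is only a schematic of the technique, not the actual processing that was used:

import numpy as np

rng = np.random.default_rng(0)
n_strips = 8
true_brightness = rng.uniform(0.2, 1.0, n_strips)      # the unknown surface pattern

# Each row: the fraction of every strip hidden during one simulated eclipse event.
coverage = rng.uniform(0.0, 1.0, (30, n_strips))
observed_dips = coverage @ true_brightness              # the measured drops in total brightness

recovered, *_ = np.linalg.lstsq(coverage, observed_dips, rcond=None)
print(np.round(true_brightness, 2))
print(np.round(recovered, 2))                           # recovers the same pattern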
127
+
128
+ Better maps were produced from images taken by the Hubble Space Telescope (HST), which offered higher resolution, and showed considerably more detail,[98] resolving variations several hundred kilometers across, including polar regions and large bright spots.[100] These maps were produced by complex computer processing, which finds the best-fit projected maps for the few pixels of the Hubble images.[164] These remained the most detailed maps of Pluto until the flyby of New Horizons in July 2015, because the two cameras on the HST used for these maps were no longer in service.[164]
129
+
130
+ The New Horizons spacecraft, which flew by Pluto in July 2015, is the first and so far only attempt to explore Pluto directly. Launched in 2006, it captured its first (distant) images of Pluto in late September 2006 during a test of the Long Range Reconnaissance Imager.[165] The images, taken from a distance of approximately 4.2 billion kilometers, confirmed the spacecraft's ability to track distant targets, critical for maneuvering toward Pluto and other Kuiper belt objects. In early 2007 the craft made use of a gravity assist from Jupiter.
131
+
132
+ New Horizons made its closest approach to Pluto on July 14, 2015, after a 3,462-day journey across the Solar System. Scientific observations of Pluto began five months before the closest approach and continued for at least a month after the encounter. Observations were conducted using a remote sensing package that included imaging instruments and a radio science investigation tool, as well as spectroscopic and other experiments. The scientific goals of New Horizons were to characterize the global geology and morphology of Pluto and its moon Charon, map their surface composition, and analyze Pluto's neutral atmosphere and its escape rate. On October 25, 2016, at 05:48 pm ET, the last bit of data (of a total of 50 billion bits of data; or 6.25 gigabytes) was received from New Horizons from its close encounter with Pluto.[166][167][168][169]
133
+
134
+ Since the New Horizons flyby, scientists have advocated for an orbiter mission that would return to Pluto to fulfill new science objectives.[170] They include mapping the surface at 9.1 m (30 ft) per pixel, observations of Pluto's smaller satellites, observations of how Pluto changes as it rotates on its axis, and topographic mapping of Pluto's regions that are covered in long-term darkness due to its axial tilt. The last objective could be accomplished using laser pulses to generate a complete topographic map of Pluto. New Horizons principal investigator Alan Stern has advocated for a Cassini-style orbiter that would launch around 2030 (the 100th anniversary of Pluto's discovery) and use Charon's gravity to adjust its orbit as needed to fulfill science objectives after arriving at the Pluto system.[171] The orbiter could then use Charon's gravity to leave the Pluto system and study more KBOs after all Pluto science objectives are completed. A conceptual study funded by the NASA Innovative Advanced Concepts (NIAC) program describes a fusion-enabled Pluto orbiter and lander based on the Princeton field-reversed configuration reactor.[172][173]
135
+
136
+ The equatorial region of the sub-Charon hemisphere of Pluto has only been imaged at low resolution, as New Horizons made its closest approach to the anti-Charon hemisphere.
137
+
138
+ New Horizons imaged all of Pluto's northern hemisphere, and the equatorial regions down to about 30° South. Higher southern latitudes have only been observed, at very low resolution, from Earth. Images from the Hubble Space Telescope in 1996 cover 85% of Pluto and show large albedo features down to about 75° South. This is enough to show the extent of the temperate-zone maculae. Later images had slightly better resolution, due to minor improvements in Hubble instrumentation, but did not reach quite as far south.
139
+
140
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
141
+
en/4681.html.txt ADDED
@@ -0,0 +1,141 @@
1
+
2
+
3
+ Pluto (minor planet designation: 134340 Pluto) is an icy dwarf planet in the Kuiper belt, a ring of bodies beyond the orbit of Neptune. It was the first and the largest Kuiper belt object to be discovered.
4
+
5
+ Pluto was discovered by Clyde Tombaugh in 1930 and declared to be the ninth planet from the Sun. After 1992, its status as a planet was questioned following the discovery of several objects of similar size in the Kuiper belt. In 2005, Eris, a dwarf planet in the scattered disc which is 27% more massive than Pluto, was discovered. This led the International Astronomical Union (IAU) to define the term "planet" formally in 2006, during their 26th General Assembly. That definition excluded Pluto and reclassified it as a dwarf planet.
6
+
7
+ It is the ninth-largest and tenth-most-massive known object directly orbiting the Sun. It is the largest known trans-Neptunian object by volume but is less massive than Eris. Like other Kuiper belt objects, Pluto is primarily made of ice and rock and is relatively small—one-sixth the mass of the Moon and one-third its volume. It has a moderately eccentric and inclined orbit during which it ranges from 30 to 49 astronomical units or AU (4.4–7.4 billion km) from the Sun. This means that Pluto periodically comes closer to the Sun than Neptune, but a stable orbital resonance with Neptune prevents them from colliding. Light from the Sun takes 5.5 hours to reach Pluto at its average distance (39.5 AU).
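The 5.5-hour light-travel time is easy to verify, assuming only the standard values for the astronomical unit and the speed of light:

AU_KM = 1.496e8           # kilometers per astronomical unit (standard value)
C_KM_S = 299_792.458      # speed of light in km/s (standard value)

hours = 39.5 * AU_KM / C_KM_S / 3600
print(f"light travel time from the Sun at 39.5 AU ≈ {hours:.1f} hours")   # ≈ 5.5 hours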
8
+
9
+ Pluto has five known moons: Charon (the largest, with a diameter just over half that of Pluto), Styx, Nix, Kerberos, and Hydra. Pluto and Charon are sometimes considered a binary system because the barycenter of their orbits does not lie within either body.
10
+
11
+ The New Horizons spacecraft performed a flyby of Pluto on July 14, 2015, becoming the first ever, and to date only, spacecraft to do so. During its brief flyby, New Horizons made detailed measurements and observations of Pluto and its moons. In September 2016, astronomers announced that the reddish-brown cap of the north pole of Charon is composed of tholins, organic macromolecules that may be ingredients for the emergence of life, and produced from methane, nitrogen and other gases released from the atmosphere of Pluto and transferred 19,000 km (12,000 mi) to the orbiting moon.
12
+
13
+ In the 1840s, Urbain Le Verrier used Newtonian mechanics to predict the position of the then-undiscovered planet Neptune after analyzing perturbations in the orbit of Uranus.[14] Subsequent observations of Neptune in the late 19th century led astronomers to speculate that Uranus's orbit was being disturbed by another planet besides Neptune.
14
+
15
+ In 1906, Percival Lowell—a wealthy Bostonian who had founded Lowell Observatory in Flagstaff, Arizona, in 1894—started an extensive project in search of a possible ninth planet, which he termed "Planet X".[15] By 1909, Lowell and William H. Pickering had suggested several possible celestial coordinates for such a planet.[16] Lowell and his observatory conducted his search until his death in 1916, but to no avail. Unknown to Lowell, his surveys had captured two faint images of Pluto on March 19 and April 7, 1915, but they were not recognized for what they were.[16][17] There are fourteen other known precovery observations, with the earliest made by the Yerkes Observatory on August 20, 1909.[18]
16
+
17
+ Percival's widow, Constance Lowell, entered into a ten-year legal battle with the Lowell Observatory over her husband's legacy, and the search for Planet X did not resume until 1929.[19] Vesto Melvin Slipher, the observatory director, gave the job of locating Planet X to 23-year-old Clyde Tombaugh, who had just arrived at the observatory after Slipher had been impressed by a sample of his astronomical drawings.[19]
18
+
19
+ Tombaugh's task was to systematically image the night sky in pairs of photographs, then examine each pair and determine whether any objects had shifted position. Using a blink comparator, he rapidly shifted back and forth between views of each of the plates to create the illusion of movement of any objects that had changed position or appearance between photographs. On February 18, 1930, after nearly a year of searching, Tombaugh discovered a possible moving object on photographic plates taken on January 23 and 29. A lesser-quality photograph taken on January 21 helped confirm the movement.[20] After the observatory obtained further confirmatory photographs, news of the discovery was telegraphed to the Harvard College Observatory on March 13, 1930.[16] Pluto has yet to complete a full orbit of the Sun since its discovery, as one Plutonian year is 247.68 years long.[21]
20
+
21
+ The discovery made headlines around the globe.[22] Lowell Observatory, which had the right to name the new object, received more than 1,000 suggestions from all over the world, ranging from Atlas to Zymal.[23] Tombaugh urged Slipher to suggest a name for the new object quickly before someone else did.[23] Constance Lowell proposed Zeus, then Percival and finally Constance. These suggestions were disregarded.[24]
22
+
23
+ The name Pluto, after the Roman god of the underworld, was proposed by Venetia Burney (1918–2009), an eleven-year-old schoolgirl in Oxford, England, who was interested in classical mythology.[25] She suggested it in a conversation with her grandfather Falconer Madan, a former librarian at the University of Oxford's Bodleian Library, who passed the name to astronomy professor Herbert Hall Turner, who cabled it to colleagues in the United States.[25]
24
+
25
+ Each member of the Lowell Observatory was allowed to vote on a short-list of three potential names: Minerva (which was already the name for an asteroid), Cronus (which had lost reputation through being proposed by the unpopular astronomer Thomas Jefferson Jackson See), and Pluto. Pluto received every vote.[26] The name was announced on May 1, 1930.[25][27] Upon the announcement, Madan gave Venetia £5 (equivalent to 300 GBP, or 450 USD in 2014)[28] as a reward.[25]
26
+
27
+ The final choice of name was helped in part by the fact that the first two letters of Pluto are the initials of Percival Lowell. Pluto's astronomical symbol (♇, Unicode U+2647) was then created as a monogram constructed from the letters "PL".[29] Pluto's astrological symbol resembles that of Neptune, but has a circle in place of the middle prong of the trident.
28
+
29
+ The name was soon embraced by wider culture. In 1930, Walt Disney was apparently inspired by it when he introduced for Mickey Mouse a canine companion named Pluto, although Disney animator Ben Sharpsteen could not confirm why the name was given.[30] In 1941, Glenn T. Seaborg named the newly created element plutonium after Pluto, in keeping with the tradition of naming elements after newly discovered planets, following uranium, which was named after Uranus, and neptunium, which was named after Neptune.[31]
30
+
31
+ Most languages use the name "Pluto" in various transliterations.[h] In Japanese, Houei Nojiri suggested the translation Meiōsei (冥王星, "Star of the King (God) of the Underworld"), and this was borrowed into Chinese and Korean. Vietnamese instead uses "Sao Diêm Vương", derived from the Chinese term 閻王 (Yánwáng), because "minh" is a homophone for the Sino-Vietnamese words for "dark" (冥) and "bright" (明).[32][33][34] Some Indian languages use the name Pluto, but others, such as Hindi, use the name of Yama, the God of Death in Hindu and Buddhist mythology.[33] Polynesian languages also tend to use the indigenous god of the underworld, as in Māori Whiro.[33]
32
+
33
+ Once Pluto was found, its faintness and lack of a resolvable disc cast doubt on the idea that it was Lowell's Planet X.[15] Estimates of Pluto's mass were revised downward throughout the 20th century.[35]
34
+
35
+ Astronomers initially calculated its mass based on its presumed effect on Neptune and Uranus. In 1931, Pluto was calculated to be roughly the mass of Earth, with further calculations in 1948 bringing the mass down to roughly that of Mars.[37][39] In 1976, Dale Cruikshank, Carl Pilcher and David Morrison of the University of Hawaii calculated Pluto's albedo for the first time, finding that it matched that for methane ice; this meant Pluto had to be exceptionally luminous for its size and therefore could not be more than 1 percent the mass of Earth.[40] (Pluto's albedo is 1.4–1.9 times that of Earth.[2])
36
+
37
+ In 1978, the discovery of Pluto's moon Charon allowed the measurement of Pluto's mass for the first time: roughly 0.2% that of Earth, and far too small to account for the discrepancies in the orbit of Uranus. Subsequent searches for an alternative Planet X, notably by Robert Sutton Harrington,[43] failed. In 1992, Myles Standish used data from Voyager 2's flyby of Neptune in 1989, which had revised the estimates of Neptune's mass downward by 0.5%—an amount comparable to the mass of Mars—to recalculate its gravitational effect on Uranus. With the new figures added in, the discrepancies, and with them the need for a Planet X, vanished.[44] Today, the majority of scientists agree that Planet X, as Lowell defined it, does not exist.[45] Lowell had made a prediction of Planet X's orbit and position in 1915 that was fairly close to Pluto's actual orbit and its position at that time; Ernest W. Brown concluded soon after Pluto's discovery that this was a coincidence.[46]
38
+
39
+ From 1992 onward, many bodies were discovered orbiting in the same volume as Pluto, showing that Pluto is part of a population of objects called the Kuiper belt. This made its official status as a planet controversial, with many questioning whether Pluto should be considered together with or separately from its surrounding population. Museum and planetarium directors occasionally created controversy by omitting Pluto from planetary models of the Solar System. In February 2000 the Hayden Planetarium in New York City displayed a Solar System model of only eight planets, which made headlines almost a year later.[47]
40
+
41
+ Ceres, Pallas, Juno and Vesta lost their planet status after the discovery of many other asteroids. Similarly, objects increasingly closer in size to Pluto were discovered in the Kuiper belt region. On July 29, 2005, astronomers at Caltech announced the discovery of a new trans-Neptunian object, Eris, which was substantially more massive than Pluto and the most massive object discovered in the Solar System since Triton in 1846. Its discoverers and the press initially called it the tenth planet, although there was no official consensus at the time on whether to call it a planet.[48] Others in the astronomical community considered the discovery the strongest argument for reclassifying Pluto as a minor planet.[49]
42
+
43
+ The debate came to a head in August 2006, with an IAU resolution that created an official definition for the term "planet". According to this resolution, there are three conditions for an object in the Solar System to be considered a planet: it must be in orbit around the Sun; it must have sufficient mass to assume hydrostatic equilibrium (a nearly round shape); and it must have "cleared the neighbourhood" around its orbit.
44
+
45
+ Pluto fails to meet the third condition.[52] Its mass is substantially less than the combined mass of the other objects in its orbit: 0.07 times, in contrast to Earth, which is 1.7 million times the remaining mass in its orbit (excluding the moon).[53][51] The IAU further decided that bodies that, like Pluto, meet criteria 1 and 2, but do not meet criterion 3 would be called dwarf planets. In September 2006, the IAU included Pluto, and Eris and its moon Dysnomia, in their Minor Planet Catalogue, giving them the official minor planet designations "(134340) Pluto", "(136199) Eris", and "(136199) Eris I Dysnomia".[54] Had Pluto been included upon its discovery in 1930, it would have likely been designated 1164, following 1163 Saga, which was discovered a month earlier.[55]
46
+
47
+ There has been some resistance within the astronomical community toward the reclassification.[56][57][58] Alan Stern, principal investigator with NASA's New Horizons mission to Pluto, derided the IAU resolution, stating that "the definition stinks, for technical reasons".[59] Stern contended that, by the terms of the new definition, Earth, Mars, Jupiter, and Neptune, all of which share their orbits with asteroids, would be excluded.[60] He argued that all big spherical moons, including the Moon, should likewise be considered planets.[61] He also stated that because less than five percent of astronomers voted for it, the decision was not representative of the entire astronomical community.[60] Marc W. Buie, then at the Lowell Observatory, petitioned against the definition.[62] Others have supported the IAU. Mike Brown, the astronomer who discovered Eris, said "through this whole crazy, circus-like procedure, somehow the right answer was stumbled on. It's been a long time coming. Science is self-correcting eventually, even when strong emotions are involved."[63]
48
+
49
+ Public reception to the IAU decision was mixed. A resolution introduced in the California State Assembly facetiously called the IAU decision a "scientific heresy".[64] The New Mexico House of Representatives passed a resolution in honor of Tombaugh, a longtime resident of that state, that declared that Pluto will always be considered a planet while in New Mexican skies and that March 13, 2007, was Pluto Planet Day.[65][66] The Illinois Senate passed a similar resolution in 2009, on the basis that Clyde Tombaugh, the discoverer of Pluto, was born in Illinois. The resolution asserted that Pluto was "unfairly downgraded to a 'dwarf' planet" by the IAU.[67] Some members of the public have also rejected the change, citing the disagreement within the scientific community on the issue, or for sentimental reasons, maintaining that they have always known Pluto as a planet and will continue to do so regardless of the IAU decision.[68]
50
+
51
+ In 2006, in its 17th annual words-of-the-year vote, the American Dialect Society voted plutoed as the word of the year. To "pluto" is to "demote or devalue someone or something".[69]
52
+
53
+ Researchers on both sides of the debate gathered in August 2008, at the Johns Hopkins University Applied Physics Laboratory for a conference that included back-to-back talks on the current IAU definition of a planet.[70] Entitled "The Great Planet Debate",[71] the conference published a post-conference press release indicating that scientists could not come to a consensus about the definition of planet.[72] In June 2008, the IAU had announced in a press release that the term "plutoid" would henceforth be used to refer to Pluto and other objects that have an orbital semi-major axis greater than that of Neptune and enough mass to be of near-spherical shape.[73][74][75]
54
+
55
+ Pluto's orbital period is currently about 248 years. Its orbital characteristics are substantially different from those of the planets, which follow nearly circular orbits around the Sun close to a flat reference plane called the ecliptic. In contrast, Pluto's orbit is moderately inclined relative to the ecliptic (over 17°) and moderately eccentric (elliptical). This eccentricity means a small region of Pluto's orbit lies closer to the Sun than Neptune's. The Pluto–Charon barycenter came to perihelion on September 5, 1989,[3][i] and was last closer to the Sun than Neptune between February 7, 1979, and February 11, 1999.[76]
56
+
57
+ In the long term, Pluto's orbit is chaotic. Computer simulations can be used to predict its position for several million years (both forward and backward in time), but after intervals longer than the Lyapunov time of 10–20 million years, calculations become speculative: Pluto is sensitive to immeasurably small details of the Solar System, hard-to-predict factors that will gradually change Pluto's position in its orbit.[77][78]
58
+
59
+ The semi-major axis of Pluto's orbit varies between about 39.3 and 39.6 AU with a period of about 19,951 years, corresponding to an orbital period varying between 246 and 249 years. The semi-major axis and period are presently getting longer.[79]
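The correspondence between this semi-major-axis range and the 246–249-year period range is Kepler's third law in solar units (P in years ≈ a**1.5 with a in AU):

for a_au in (39.3, 39.5, 39.6):
    print(f"a = {a_au} AU  ->  P ≈ {a_au ** 1.5:.1f} years")
# ≈ 246.4, 248.3 and 249.2 years, matching the quoted 246-249 year range.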
60
+
61
+ Despite Pluto's orbit appearing to cross that of Neptune when viewed from directly above, the two objects' orbits are aligned so that they can never collide or even approach closely.
62
+
63
+ The two orbits do not intersect. When Pluto is closest to the Sun, and hence closest to Neptune's orbit as viewed from above, it is also the farthest above Neptune's path. Pluto's orbit passes about 8 AU above that of Neptune, preventing a collision.[80][81][82]
64
+
65
+ This alone is not enough to protect Pluto; perturbations from the planets (especially Neptune) could alter Pluto's orbit (such as its orbital precession) over millions of years so that a collision could be possible. However, Pluto is also protected by its 2:3 orbital resonance with Neptune: for every two orbits that Pluto makes around the Sun, Neptune makes three. Each cycle lasts about 495 years. This pattern is such that, in each 495-year cycle, the first time Pluto is near perihelion, Neptune is over 50° behind Pluto. By Pluto's second perihelion, Neptune will have completed a further one and a half of its own orbits, and so will be nearly 130° ahead of Pluto. Pluto and Neptune's minimum separation is over 17 AU, which is greater than Pluto's minimum separation from Uranus (11 AU).[82] The minimum separation between Pluto and Neptune actually occurs near the time of Pluto's aphelion.[79]
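The roughly 495-year cycle length can be checked directly; Neptune's 164.8-year orbital period is an assumed standard value, while Pluto's ~248-year period is quoted above:

p_pluto = 248.0      # years, as quoted above
p_neptune = 164.8    # years, assumed standard value for Neptune (not from this text)

print(f"2 Pluto orbits   ≈ {2 * p_pluto:.0f} years")     # ≈ 496 years
print(f"3 Neptune orbits ≈ {3 * p_neptune:.0f} years")   # ≈ 494 years, i.e. the ~495-year cycle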
66
+
67
+ The 2:3 resonance between the two bodies is highly stable and has been preserved over millions of years.[83] This prevents their orbits from changing relative to one another, and so the two bodies can never pass near each other. Even if Pluto's orbit were not inclined, the two bodies could never collide.[82] The long-term stability of the mean-motion resonance is due to phase protection. If Pluto's period is slightly shorter than 3/2 of Neptune's, its orbit relative to Neptune will drift, causing it to make closer approaches behind Neptune's orbit. The strong gravitational pull between the two causes angular momentum to be transferred to Pluto, at Neptune's expense. This moves Pluto into a slightly larger orbit, where it travels slightly more slowly, according to Kepler's third law. After many such repetitions, Pluto is sufficiently slowed, and Neptune sufficiently sped up, that Pluto's orbit relative to Neptune drifts in the opposite direction until the process is reversed. The whole process takes about 20,000 years to complete.[82][83][84]
68
+
69
+ Numerical studies have shown that over millions of years, the general nature of the alignment between the orbits of Pluto and Neptune does not change.[80][79] There are several other resonances and interactions that enhance Pluto's stability. These arise principally from two additional mechanisms (besides the 2:3 mean-motion resonance).
70
+
71
+ First, Pluto's argument of perihelion, the angle between the point where it crosses the ecliptic and the point where it is closest to the Sun, librates around 90°.[79] This means that when Pluto is closest to the Sun, it is at its farthest above the plane of the Solar System, preventing encounters with Neptune. This is a consequence of the Kozai mechanism,[80] which relates the eccentricity of an orbit to its inclination to a larger perturbing body—in this case Neptune. Relative to Neptune, the amplitude of libration is 38°, and so the angular separation of Pluto's perihelion to the orbit of Neptune is always greater than 52° (90°–38°). The closest such angular separation occurs every 10,000 years.[83]
72
+
73
+ Second, the longitudes of ascending nodes of the two bodies—the points where they cross the ecliptic—are in near-resonance with the above libration. When the two longitudes are the same—that is, when one could draw a straight line through both nodes and the Sun—Pluto's perihelion lies exactly at 90°, and hence it comes closest to the Sun when it is highest above Neptune's orbit. This is known as the 1:1 superresonance. All the Jovian planets, particularly Jupiter, play a role in the creation of the superresonance.[80]
74
+
75
+ In 2012, it was hypothesized that 15810 Arawn could be a quasi-satellite of Pluto, a specific type of co-orbital configuration.[85] According to the hypothesis, the object would be a quasi-satellite of Pluto for about 350,000 years out of every two-million-year period.[85][86] Measurements by the New Horizons spacecraft in 2015 made it possible to calculate the orbit of Arawn more accurately.[87] These calculations confirm the overall dynamics described in the hypothesis.[88] However, astronomers do not agree on whether Arawn should be classified as a quasi-satellite of Pluto on the basis of this motion, since its orbit is primarily controlled by Neptune, with only occasional smaller perturbations caused by Pluto.[89][87][88]
76
+
77
+ Pluto's rotation period, its day, is equal to 6.387 Earth days.[2][90] Like Uranus, Pluto rotates on its "side" relative to its orbital plane, with an axial tilt of 120°, and so its seasonal variation is extreme; at its solstices, one-fourth of its surface is in continuous daylight, whereas another fourth is in continuous darkness.[91] The reason for this unusual orientation has been debated. Research from the University of Arizona has suggested that it may be due to the way a body's spin adjusts to minimize energy, reorienting the body so that excess mass moves toward the equator while regions lacking mass drift toward the poles, a process called polar wander.[92] According to a paper from the University of Arizona, the excess mass in Pluto's case could be provided by frozen nitrogen building up in shadowed areas of the dwarf planet; these deposits would cause the body to reorient itself, leading to its unusual axial tilt of 120°. The buildup of nitrogen is due to Pluto's vast distance from the Sun: at the equator, temperatures can drop to −240 °C (−400.0 °F; 33.1 K), causing nitrogen to freeze as water would freeze on Earth. The same effect would be seen on Earth were the Antarctic ice sheet several times larger.[93]
78
+
79
+ The plains on Pluto's surface are composed of more than 98 percent nitrogen ice, with traces of methane and carbon monoxide.[94] Nitrogen and carbon monoxide are most abundant on the anti-Charon face of Pluto (around 180° longitude, where Tombaugh Regio's western lobe, Sputnik Planitia, is located), whereas methane is most abundant near 300° east.[95] The mountains are made of water ice.[96] Pluto's surface is quite varied, with large differences in both brightness and color.[97] Pluto is one of the most contrastive bodies in the Solar System, with as much contrast as Saturn's moon Iapetus.[98] The color varies from charcoal black to dark orange and white.[99] Pluto's color is more similar to that of Io, with slightly more orange and significantly less red than Mars.[100] Notable geographical features include Tombaugh Regio, or the "Heart" (a large bright area on the side opposite Charon), Cthulhu Macula,[6] or the "Whale" (a large dark area on the trailing hemisphere), and the "Brass Knuckles" (a series of equatorial dark areas on the leading hemisphere).
80
+
81
+ Sputnik Planitia, the western lobe of the "Heart", is a 1,000 km-wide basin of frozen nitrogen and carbon monoxide ices, divided into polygonal cells, which are interpreted as convection cells that carry floating blocks of water ice crust and sublimation pits towards their margins;[101][102][103] there are obvious signs of glacial flows both into and out of the basin.[104][105] It has no craters that were visible to New Horizons, indicating that its surface is less than 10 million years old.[106] More recent studies estimate the age of the surface at about 180,000 (+90,000/−40,000) years.[107]
82
+ The New Horizons science team summarized initial findings as "Pluto displays a surprisingly wide variety of geological landforms, including those resulting from glaciological and surface–atmosphere interactions as well as impact, tectonic, possible cryovolcanic, and mass-wasting processes."[7]
83
+
84
+ In the western parts of Sputnik Planitia there are fields of transverse dunes formed by winds blowing from the center of Sputnik Planitia toward the surrounding mountains. The dune wavelengths are in the range of 0.4–1 km, and the dunes likely consist of methane particles 200–300 μm in size.[108]
85
+
86
+ Pluto's density is 1.860±0.013 g/cm3.[7] Because the decay of radioactive elements would eventually heat the ices enough for the rock to separate from them, scientists expect that Pluto's internal structure is differentiated, with the rocky material having settled into a dense core surrounded by a mantle of water ice. The pre–New Horizons estimate for the diameter of the core was 1700 km, 70% of Pluto's diameter.[109] It is possible that such heating continues today, creating a subsurface ocean of liquid water 100 to 180 km thick at the core–mantle boundary.[109][110][111] In September 2016, scientists at Brown University simulated the impact thought to have formed Sputnik Planitia, and showed that it might have been the result of liquid water upwelling from below after the collision, implying the existence of a subsurface ocean at least 100 km deep.[112] Pluto has no magnetic field.[113] In June 2020, astronomers reported evidence that Pluto may have had a subsurface ocean, and consequently may have been habitable, when it was first formed.[114][115]
87
+
88
+ Pluto's diameter is 2376.6±3.2 km[5] and its mass is (1.303±0.003)×10²² kg, 17.7% that of the Moon (0.22% that of Earth).[123] Its surface area is 1.779×10⁷ km², or roughly the same surface area as Russia. Its surface gravity is 0.063 g (compared to 1 g for Earth and 0.17 g for the Moon).
89
+
90
+ The discovery of Pluto's satellite Charon in 1978 enabled a determination of the mass of the Pluto–Charon system by application of Newton's formulation of Kepler's third law. Observations of Pluto in occultation with Charon allowed scientists to establish Pluto's diameter more accurately, whereas the invention of adaptive optics allowed them to determine its shape more accurately.[124]
91
+
92
+ With less than 0.2 lunar masses, Pluto is much less massive than the terrestrial planets, and also less massive than seven moons: Ganymede, Titan, Callisto, Io, the Moon, Europa, and Triton. The mass is much less than thought before Charon was discovered.
93
+
94
+ Pluto is more than twice the diameter and a dozen times the mass of Ceres, the largest object in the asteroid belt. It is less massive than the dwarf planet Eris, a trans-Neptunian object discovered in 2005, though Pluto has a larger diameter of 2376.6 km[5] compared to Eris's approximate diameter of 2326 km.[125]
95
+
96
+ Determinations of Pluto's size had been complicated by its atmosphere,[119] and hydrocarbon haze.[117] In March 2014, Lellouch, de Bergh et al. published findings regarding methane mixing ratios in Pluto's atmosphere consistent with a Plutonian diameter greater than 2360 km, with a "best guess" of 2368 km.[121] On July 13, 2015, images from NASA's New Horizons mission Long Range Reconnaissance Imager (LORRI), along with data from the other instruments, determined Pluto's diameter to be 2,370 km (1,470 mi),[125][126] which was later revised to be 2,372 km (1,474 mi) on July 24,[122] and later to 2374±8 km.[7] Using radio occultation data from the New Horizons Radio Science Experiment (REX), the diameter was found to be 2376.6±3.2 km.[5]
97
+
98
+ Pluto has a tenuous atmosphere consisting of nitrogen (N2), methane (CH4), and carbon monoxide (CO), which are in equilibrium with their ices on Pluto's surface.[127][128] According to the measurements by New Horizons, the surface pressure is about 1 Pa (10 μbar),[7] roughly 100,000 to one million times less than Earth's atmospheric pressure. It was initially thought that, as Pluto moves away from the Sun, its atmosphere should gradually freeze onto the surface; studies of New Horizons data and ground-based occultations show that Pluto's atmospheric density increases, and that it likely remains gaseous throughout Pluto's orbit.[129][130] New Horizons observations showed the atmospheric escape of nitrogen to be 10,000 times less than expected.[130] Alan Stern has contended that even a small increase in Pluto's surface temperature can lead to exponential increases in Pluto's atmospheric density; from 18 hPa to as much as 280 hPa (three times that of Mars to a quarter that of the Earth). At such densities, nitrogen could flow across the surface as liquid.[130] Just as sweat cools the body as it evaporates from the skin, the sublimation of Pluto's atmosphere cools its surface.[131] The presence of atmospheric gases was traced up to an altitude of 1670 kilometers; the atmosphere does not have a sharp upper boundary.
99
+
100
+ The presence of methane, a powerful greenhouse gas, in Pluto's atmosphere creates a temperature inversion, with the average temperature of its atmosphere tens of degrees warmer than its surface,[132] though observations by New Horizons have revealed Pluto's upper atmosphere to be far colder than expected (70 K, as opposed to about 100 K).[130] Pluto's atmosphere is divided into roughly 20 regularly spaced haze layers up to 150 km high,[7] thought to be the result of pressure waves created by airflow across Pluto's mountains.[130]
101
+
102
+ Pluto has five known natural satellites. The closest to Pluto is Charon. First identified in 1978 by astronomer James Christy, Charon is the only moon of Pluto that may be in hydrostatic equilibrium; Charon's mass is sufficient to cause the barycenter of the Pluto–Charon system to be outside Pluto. Beyond Charon there are four much smaller circumbinary moons. In order of distance from Pluto they are Styx, Nix, Kerberos, and Hydra. Nix and Hydra were both discovered in 2005,[133] Kerberos was discovered in 2011,[134] and Styx was discovered in 2012.[135] The satellites' orbits are circular (eccentricity < 0.006) and coplanar with Pluto's equator (inclination < 1°),[136][137] and therefore tilted approximately 120° relative to Pluto's orbit. The Plutonian system is highly compact: the five known satellites orbit within the inner 3% of the region where prograde orbits would be stable.[138]
103
+
104
+ The orbital periods of all Pluto's moons are linked in a system of orbital resonances and near resonances.[137][139] When precession is accounted for, the orbital periods of Styx, Nix, and Hydra are in an exact 18:22:33 ratio.[137] There is a sequence of approximate ratios, 3:4:5:6, between the periods of Styx, Nix, Kerberos, and Hydra with that of Charon; the ratios become closer to being exact the further out the moons are.[137][140]
105
+
106
+ The Pluto–Charon system is one of the few in the Solar System whose barycenter lies outside the primary body; the Patroclus–Menoetius system is a smaller example, and the Sun–Jupiter system is the only larger one.[141] The similarity in size of Charon and Pluto has prompted some astronomers to call it a double dwarf planet.[142] The system is also unusual among planetary systems in that each is tidally locked to the other, which means that Pluto and Charon always have the same hemisphere facing each other. From any position on either body, the other is always at the same position in the sky, or always obscured.[143] This also means that the rotation period of each is equal to the time it takes the entire system to rotate around its barycenter.[90]
107
+
108
+ In 2007, observations by the Gemini Observatory of patches of ammonia hydrates and water crystals on the surface of Charon suggested the presence of active cryo-geysers.[144]
109
+
110
+ Pluto's moons are hypothesized to have been formed by a collision between Pluto and a similar-sized body, early in the history of the Solar System. The collision released material that consolidated into the moons around Pluto.[145]
111
+
112
+ Pluto's origin and identity had long puzzled astronomers. One early hypothesis was that Pluto was an escaped moon of Neptune,[147] knocked out of orbit by its largest current moon, Triton. This idea was eventually rejected after dynamical studies showed it to be impossible because Pluto never approaches Neptune in its orbit.[148]
113
+
114
+ Pluto's true place in the Solar System began to reveal itself only in 1992, when astronomers began to find small icy objects beyond Neptune that were similar to Pluto not only in orbit but also in size and composition. This trans-Neptunian population is thought to be the source of many short-period comets. Pluto is now known to be the largest member of the Kuiper belt,[j] a stable belt of objects located between 30 and 50 AU from the Sun. As of 2011, surveys of the Kuiper belt to magnitude 21 were nearly complete and any remaining Pluto-sized objects are expected to be beyond 100 AU from the Sun.[149] Like other Kuiper-belt objects (KBOs), Pluto shares features with comets; for example, the solar wind is gradually blowing Pluto's surface into space.[150] It has been claimed that if Pluto were placed as near to the Sun as Earth, it would develop a tail, as comets do.[151] This claim has been disputed with the argument that Pluto's escape velocity is too high for this to happen.[152] It has been proposed that Pluto may have formed as a result of the agglomeration of numerous comets and Kuiper-belt objects.[153][154]
115
+
116
+ Though Pluto is the largest Kuiper belt object discovered,[117] Neptune's moon Triton, which is slightly larger than Pluto, is similar to it both geologically and atmospherically, and is thought to be a captured Kuiper belt object.[155] Eris (see above) is about the same size as Pluto (though more massive) but is not strictly considered a member of the Kuiper belt population. Rather, it is considered a member of a linked population called the scattered disc.
117
+
118
+ A large number of Kuiper belt objects, like Pluto, are in a 2:3 orbital resonance with Neptune. KBOs with this orbital resonance are called "plutinos", after Pluto.[156]
119
+
120
+ Like other members of the Kuiper belt, Pluto is thought to be a residual planetesimal; a component of the original protoplanetary disc around the Sun that failed to fully coalesce into a full-fledged planet. Most astronomers agree that Pluto owes its current position to a sudden migration undergone by Neptune early in the Solar System's formation. As Neptune migrated outward, it approached the objects in the proto-Kuiper belt, setting one in orbit around itself (Triton), locking others into resonances, and knocking others into chaotic orbits. The objects in the scattered disc, a dynamically unstable region overlapping the Kuiper belt, are thought to have been placed in their current positions by interactions with Neptune's migrating resonances.[157] A computer model created in 2004 by Alessandro Morbidelli of the Observatoire de la Côte d'Azur in Nice suggested that the migration of Neptune into the Kuiper belt may have been triggered by the formation of a 1:2 resonance between Jupiter and Saturn, which created a gravitational push that propelled both Uranus and Neptune into higher orbits and caused them to switch places, ultimately doubling Neptune's distance from the Sun. The resultant expulsion of objects from the proto-Kuiper belt could also explain the Late Heavy Bombardment 600 million years after the Solar System's formation and the origin of the Jupiter trojans.[158] It is possible that Pluto had a near-circular orbit about 33 AU from the Sun before Neptune's migration perturbed it into a resonant capture.[159] The Nice model requires that there were about a thousand Pluto-sized bodies in the original planetesimal disk, which included Triton and Eris.[158]
121
+
122
+ Pluto's distance from Earth makes its in-depth study and exploration difficult. On July 14, 2015, NASA's New Horizons space probe flew through the Pluto system, providing much information about it.[160]
123
+
124
+ Pluto's visual apparent magnitude averages 15.1, brightening to 13.65 at perihelion.[2] To see it, a telescope is required, with an aperture of around 30 cm (12 in) being desirable.[161] It looks star-like, without a visible disk even in large telescopes, because its angular diameter is only 0.11".
125
+
126
+ The earliest maps of Pluto, made in the late 1980s, were brightness maps created from close observations of eclipses by its largest moon, Charon. Observations were made of the change in the total average brightness of the Pluto–Charon system during the eclipses. For example, eclipsing a bright spot on Pluto makes a bigger total brightness change than eclipsing a dark spot. Computer processing of many such observations can be used to create a brightness map. This method can also track changes in brightness over time.[162][163]
127
+
128
+ Better maps were produced from images taken by the Hubble Space Telescope (HST), which offered higher resolution, and showed considerably more detail,[98] resolving variations several hundred kilometers across, including polar regions and large bright spots.[100] These maps were produced by complex computer processing, which finds the best-fit projected maps for the few pixels of the Hubble images.[164] These remained the most detailed maps of Pluto until the flyby of New Horizons in July 2015, because the two cameras on the HST used for these maps were no longer in service.[164]
129
+
130
+ The New Horizons spacecraft, which flew by Pluto in July 2015, is the first and so far only attempt to explore Pluto directly. Launched in 2006, it captured its first (distant) images of Pluto in late September 2006 during a test of the Long Range Reconnaissance Imager.[165] The images, taken from a distance of approximately 4.2 billion kilometers, confirmed the spacecraft's ability to track distant targets, critical for maneuvering toward Pluto and other Kuiper belt objects. In early 2007 the craft made use of a gravity assist from Jupiter.
131
+
132
+ New Horizons made its closest approach to Pluto on July 14, 2015, after a 3,462-day journey across the Solar System. Scientific observations of Pluto began five months before the closest approach and continued for at least a month after the encounter. Observations were conducted using a remote sensing package that included imaging instruments and a radio science investigation tool, as well as spectroscopic and other experiments. The scientific goals of New Horizons were to characterize the global geology and morphology of Pluto and its moon Charon, map their surface composition, and analyze Pluto's neutral atmosphere and its escape rate. On October 25, 2016, at 05:48 pm ET, the last bit of data (of a total of 50 billion bits of data; or 6.25 gigabytes) was received from New Horizons from its close encounter with Pluto.[166][167][168][169]
133
+
134
+ Since the New Horizons flyby, scientists have advocated for an orbiter mission that would return to Pluto to fulfill new science objectives.[170] They include mapping the surface at 9.1 m (30 ft) per pixel, observations of Pluto's smaller satellites, observations of how Pluto changes as it rotates on its axis, and topographic mapping of Pluto's regions that are covered in long-term darkness due to its axial tilt. The last objective could be accomplished using laser pulses to generate a complete topographic map of Pluto. New Horizons principal investigator Alan Stern has advocated for a Cassini-style orbiter that would launch around 2030 (the 100th anniversary of Pluto's discovery) and use Charon's gravity to adjust its orbit as needed to fulfill science objectives after arriving at the Pluto system.[171] The orbiter could then use Charon's gravity to leave the Pluto system and study more KBOs after all Pluto science objectives are completed. A conceptual study funded by the NASA Innovative Advanced Concepts (NIAC) program describes a fusion-enabled Pluto orbiter and lander based on the Princeton field-reversed configuration reactor.[172][173]
135
+
136
+ The equatorial region of the sub-Charon hemisphere of Pluto has only been imaged at low resolution, as New Horizons made its closest approach to the anti-Charon hemisphere.
137
+
138
+ New Horizons imaged all of Pluto's northern hemisphere, and the equatorial regions down to about 30° South. Higher southern latitudes have only been observed, at very low resolution, from Earth. Images from the Hubble Space Telescope in 1996 cover 85% of Pluto and show large albedo features down to about 75° South. This is enough to show the extent of the temperate-zone maculae. Later images had slightly better resolution, due to minor improvements in Hubble instrumentation, but did not reach quite as far south.
139
+
140
+ Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
141
+
en/4682.html.txt ADDED
@@ -0,0 +1,171 @@
1
+
2
+
3
+
4
+
5
+ Poetry (derived from the Greek poiesis, "making") is a form of literature that uses aesthetic and often rhythmic[1][2][3] qualities of language—such as phonaesthetics, sound symbolism, and metre—to evoke meanings in addition to, or in place of, the prosaic ostensible meaning.
6
+
7
+ Poetry has a long history – dating back to prehistoric times with hunting poetry in Africa, and to panegyric and elegiac court poetry of the empires of the Nile, Niger, and Volta River valleys.[4] Some of the earliest written poetry in Africa occurs among the Pyramid Texts written during the 25th century BCE. The earliest surviving Western Asian epic poetry, the Epic of Gilgamesh, was written in Sumerian.
8
+
9
+ Early poems in the Eurasian continent evolved from folk songs such as the Chinese Shijing; or from a need to retell oral epics, as with the Sanskrit Vedas, the Zoroastrian Gathas, and the Homeric epics, the Iliad and the Odyssey. Ancient Greek attempts to define poetry, such as Aristotle's Poetics, focused on the uses of speech in rhetoric, drama, song, and comedy. Later attempts concentrated on features such as repetition, verse form, and rhyme, and emphasized the aesthetics which distinguish poetry from more objectively-informative prosaic writing.
10
+
11
+ Poetry uses forms and conventions to suggest differential interpretations of words, or to evoke emotive responses. Devices such as assonance, alliteration, onomatopoeia, and rhythm may convey musical or incantatory effects. The use of ambiguity, symbolism, irony, and other stylistic elements of poetic diction often leaves a poem open to multiple interpretations. Similarly, figures of speech such as metaphor, simile, and metonymy[5] establish a resonance between otherwise disparate images—a layering of meanings, forming connections previously not perceived. Kindred forms of resonance may exist, between individual verses, in their patterns of rhyme or rhythm.
12
+
13
+ Some poetry types are unique to particular cultures and genres and respond to characteristics of the language in which the poet writes. Readers accustomed to identifying poetry with Dante, Goethe, Mickiewicz, or Rumi may think of it as written in lines based on rhyme and regular meter. There are, however, traditions, such as Biblical poetry, that use other means to create rhythm and euphony. Much modern poetry reflects a critique of poetic tradition,[6] testing the principle of euphony itself or altogether forgoing rhyme or set rhythm.[7][8]
14
+ In an increasingly globalized world, poets often adapt forms, styles, and techniques from diverse cultures and languages.
15
+
16
+ A Western cultural tradition (which extends at least from Homer to Rilke) associates the production of poetry with inspiration – often by a Muse (either classical or contemporary).
17
+
18
+ Some scholars believe that the art of poetry may predate literacy.[9][10]
19
+ Others, however, suggest that poetry did not necessarily predate writing.[11][need quotation to verify]
20
+
21
+ The oldest surviving epic poem, the Epic of Gilgamesh, dates from the 3rd millennium BCE in Sumer (in Mesopotamia, now Iraq), and was written in cuneiform script on clay tablets and, later, on papyrus.[12] A tablet dating to c. 2000 BCE, catalogued as #2461, describes an annual rite in which the king symbolically married and mated with the goddess Inanna to ensure fertility and prosperity; some have labelled it the world's oldest love poem.[13][14] An example of Egyptian epic poetry is The Story of Sinuhe (c. 1800 BCE).
22
+
23
+ Other ancient epic poetry includes the Greek epics, the Iliad and the Odyssey; the Avestan books, the Gathic Avesta and the Yasna; the Roman national epic, Virgil's Aeneid (written between 29 and 19 BCE); and the Indian epics, the Ramayana and the Mahabharata. Epic poetry, including the Odyssey, the Gathas, and the Indian Vedas, appears to have been composed in poetic form as an aid to memorization and oral transmission in prehistoric and ancient societies.[11][15]
24
+
25
+ Other forms of poetry developed directly from folk songs. The earliest entries in the oldest extant collection of Chinese poetry, the Shijing, were initially lyrics.[16]
26
+
27
+ The efforts of ancient thinkers to determine what makes poetry distinctive as a form, and what distinguishes good poetry from bad, resulted in "poetics"—the study of the aesthetics of poetry.[17] Some ancient societies, such as China's through her Shijing (Classic of Poetry), developed canons of poetic works that had ritual as well as aesthetic importance.[18] More recently, thinkers have struggled to find a definition that could encompass formal differences as great as those between Chaucer's Canterbury Tales and Matsuo Bashō's Oku no Hosomichi, as well as differences in content spanning Tanakh religious poetry, love poetry, and rap.[19]
28
+
29
+ Classical thinkers in the West employed classification as a way to define and assess the quality of poetry. Notably, the existing fragments of Aristotle's Poetics describe three genres of poetry—the epic, the comic, and the tragic—and develop rules to distinguish the highest-quality poetry in each genre, based on the perceived underlying purposes of the genre.[20] Later aestheticians identified three major genres: epic poetry, lyric poetry, and dramatic poetry, treating comedy and tragedy as subgenres of dramatic poetry.[21]
30
+
31
+ Aristotle's work was influential throughout the Middle East during the Islamic Golden Age,[22] as well as in Europe during the Renaissance.[23] Later poets and aestheticians often distinguished poetry from, and defined it in opposition to, prose, which they generally understood as writing with a proclivity to logical explication and a linear narrative structure.[24]
32
+
33
+ This does not imply that poetry is illogical or lacks narration, but rather that poetry is an attempt to render the beautiful or sublime without the burden of engaging the logical or narrative thought-process. English Romantic poet John Keats termed this escape from logic "Negative capability".[25] This "romantic" approach views form as a key element of successful poetry because form is abstract and distinct from the underlying notional logic. This approach remained influential into the 20th century.[26]
34
+
35
+ During this period,[when?] there was also substantially more interaction among the various poetic traditions, in part due to the spread of European colonialism and the attendant rise in global trade.[27] In addition to a boom in translation, during the Romantic period numerous ancient works were rediscovered.[28]
36
+
37
+ Some 20th-century literary theorists rely less on the ostensible opposition of prose and poetry, instead focusing on the poet as simply one who creates using language, and poetry as what the poet creates.[29] The underlying concept of the poet as creator is not uncommon, and some modernist poets essentially do not distinguish between the creation of a poem with words, and creative acts in other media. Yet other modernists challenge the very attempt to define poetry as misguided.[30]
38
+
39
+ The rejection of traditional forms and structures for poetry that began in the first half of the 20th century coincided with a questioning of the purpose and meaning of traditional definitions of poetry and of distinctions between poetry and prose, particularly given examples of poetic prose and prosaic poetry. Numerous modernist poets have written in non-traditional forms or in what traditionally would have been considered prose, although their writing was generally infused with poetic diction and often with rhythm and tone established by non-metrical means. While there was a substantial formalist reaction within the modernist schools to the breakdown of structure, this reaction focused as much on the development of new formal structures and syntheses as on the revival of older forms and structures.[31]
40
+
41
+ Recently,[when?] postmodernism has come to regard more completely prose and poetry as distinct entities, and also different genres of poetry as having meaning only as cultural artifacts. Postmodernism goes beyond modernism's emphasis on the creative role of the poet, to emphasize the role of the reader of a text (hermeneutics), and to highlight the complex cultural web within which a poem is read.[32] Today, throughout the world, poetry often incorporates poetic form and diction from other cultures and from the past, further confounding attempts at definition and classification that once made sense within a tradition such as the Western canon.[33]
42
+
43
+ The early 21st-century poetic tradition appears to continue to strongly orient itself to earlier precursor poetic traditions such as those initiated by Whitman, Emerson, and Wordsworth. The literary critic Geoffrey Hartman (1929–2016) used the phrase "the anxiety of demand" to describe the contemporary response to older poetic traditions as "being fearful that the fact no longer has a form",[34] building on a trope introduced by Emerson. Emerson had maintained that in the debate concerning poetic structure where either "form" or "fact" could predominate, that one need simply "Ask the fact for the form." This has been challenged at various levels by other literary scholars such as Bloom (1930–2019), who has stated: "The generation of poets who stand together now, mature and ready to write the major American verse of the twenty-first century, may yet be seen as what Stevens called 'a great shadow's last embellishment,' the shadow being Emerson's."[35]
44
+
45
+ Prosody is the study of the meter, rhythm, and intonation of a poem. Rhythm and meter are different, although closely related.[36] Meter is the definitive pattern established for a verse (such as iambic pentameter), while rhythm is the actual sound that results from a line of poetry. Prosody also may be used more specifically to refer to the scanning of poetic lines to show meter.[37]
46
+
47
+ The methods for creating poetic rhythm vary across languages and between poetic traditions. Languages are often described as having timing set primarily by accents, syllables, or moras, depending on how rhythm is established, though a language can be influenced by multiple approaches. Japanese is a mora-timed language. Latin, Catalan, French, Leonese, Galician and Spanish are called syllable-timed languages. Stress-timed languages include English, Russian and, generally, German.[38] Varying intonation also affects how rhythm is perceived. Languages can rely on either pitch or tone. Languages with a pitch accent include Vedic Sanskrit and Ancient Greek. Tonal languages include Chinese, Vietnamese and most sub-Saharan African languages.[39]
48
+
49
+ Metrical rhythm generally involves precise arrangements of stresses or syllables into repeated patterns called feet within a line. In Modern English verse the pattern of stresses primarily differentiates feet, so rhythm based on meter in Modern English is most often founded on the pattern of stressed and unstressed syllables (alone or elided).[40] In the classical languages, on the other hand, while the metrical units are similar, vowel length rather than stress defines the meter.[41] Old English poetry used a metrical pattern involving varied numbers of syllables but a fixed number of strong stresses in each line.[42]
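As a rough illustration of stress-based scansion, the toy sketch below (illustrative only, and assuming stress marks supplied by hand rather than derived automatically from the text) groups a marked line into two-syllable feet and names each foot.

FOOT_NAMES = {"x/": "iamb", "/x": "trochee", "//": "spondee", "xx": "pyrrhic"}

def scan(stress_pattern):
    """Split a hand-marked stress pattern ('x' unstressed, '/' stressed) into
    two-syllable feet and label each one."""
    feet = [stress_pattern[i:i + 2] for i in range(0, len(stress_pattern), 2)]
    return [FOOT_NAMES.get(foot, "incomplete") for foot in feet]

# "Shall I compare thee to a summer's day?" marked as alternating unstressed/stressed:
line = "x/x/x/x/x/"
print(scan(line))                       # ['iamb', 'iamb', 'iamb', 'iamb', 'iamb']
print(len(line) // 2, "feet per line")  # 5 feet, hence iambic pentameter

Real scansion is far less mechanical, since degrees of stress, elision and substituted feet all complicate the picture, but the sketch shows how a meter's name combines the dominant foot with the count of feet per line.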
50
+
51
+ The chief device of ancient Hebrew Biblical poetry, including many of the psalms, was parallelism, a rhetorical structure in which successive lines reflected each other in grammatical structure, sound structure, notional content, or all three. Parallelism lent itself to antiphonal or call-and-response performance, which could also be reinforced by intonation. Thus, Biblical poetry relies much less on metrical feet to create rhythm, but instead creates rhythm based on much larger sound units of lines, phrases and sentences.[43] Some classical poetry forms, such as Venpa of the Tamil language, had rigid grammars (to the point that they could be expressed as a context-free grammar) which ensured a rhythm.[44]
52
+
53
+ Classical Chinese poetics, based on the tone system of Middle Chinese, recognized two kinds of tones: the level (平 píng) tone and the oblique (仄 zè) tones, a category consisting of the rising (上 shǎng) tone, the departing (去 qù) tone and the entering (入 rù) tone. Certain forms of poetry placed constraints on which syllables were required to be level and which oblique.
54
+
55
+ The formal patterns of meter used in Modern English verse to create rhythm no longer dominate contemporary English poetry. In the case of free verse, rhythm is often organized based on looser units of cadence rather than a regular meter. Robinson Jeffers, Marianne Moore, and William Carlos Williams are three notable poets who reject the idea that regular accentual meter is critical to English poetry.[45] Jeffers experimented with sprung rhythm as an alternative to accentual rhythm.[46]
56
+
57
+ In the Western poetic tradition, meters are customarily grouped according to a characteristic metrical foot and the number of feet per line.[48] The number of metrical feet in a line is described using Greek terminology: tetrameter for four feet and hexameter for six feet, for example.[49] Thus, "iambic pentameter" is a meter comprising five feet per line, in which the predominant kind of foot is the "iamb". This metric system originated in ancient Greek poetry, and was used by poets such as Pindar and Sappho, and by the great tragedians of Athens. Similarly, "dactylic hexameter" comprises six feet per line, of which the dominant kind of foot is the "dactyl". Dactylic hexameter was the traditional meter of Greek epic poetry, the earliest extant examples of which are the works of Homer and Hesiod.[50] Iambic pentameter and dactylic hexameter were later used by a number of poets, including William Shakespeare and Henry Wadsworth Longfellow, respectively.[51] The most common metrical feet in English are:[52]
58
+
59
+ There are a wide range of names for other types of feet, right up to a choriamb, a four syllable metric foot with a stressed syllable followed by two unstressed syllables and closing with a stressed syllable. The choriamb is derived from some ancient Greek and Latin poetry.[50] Languages which utilize vowel length or intonation rather than or in addition to syllabic accents in determining meter, such as Ottoman Turkish or Vedic, often have concepts similar to the iamb and dactyl to describe common combinations of long and short sounds.[54]
60
+
61
+ Each of these types of feet has a certain "feel," whether alone or in combination with other feet. The iamb, for example, is the most natural form of rhythm in the English language, and generally produces a subtle but stable verse.[55] Scanning meter can often show the basic or fundamental pattern underlying a verse, but does not show the varying degrees of stress, nor the differing pitches and lengths of syllables.[56]
62
+
63
+ There is debate over how useful a multiplicity of different "feet" is in describing meter. For example, Robert Pinsky has argued that while dactyls are important in classical verse, English dactylic verse uses dactyls very irregularly and can be better described based on patterns of iambs and anapests, feet which he considers natural to the language.[57] Actual rhythm is significantly more complex than the basic scanned meter described above, and many scholars have sought to develop systems that would scan such complexity. Vladimir Nabokov noted that overlaid on top of the regular pattern of stressed and unstressed syllables in a line of verse was a separate pattern of accents resulting from the natural pitch of the spoken words, and suggested that the term "scud" be used to distinguish an unaccented stress from an accented stress.[58]
64
+
65
+ Different traditions and genres of poetry tend to use different meters, ranging from the Shakespearean iambic pentameter and the Homeric dactylic hexameter to the anapestic tetrameter used in many nursery rhymes. However, a number of variations to the established meter are common, both to provide emphasis or attention to a given foot or line and to avoid boring repetition. For example, the stress in a foot may be inverted, a caesura (or pause) may be added (sometimes in place of a foot or stress), or the final foot in a line may be given a feminine ending to soften it or be replaced by a spondee to emphasize it and create a hard stop. Some patterns (such as iambic pentameter) tend to be fairly regular, while other patterns, such as dactylic hexameter, tend to be highly irregular.[59] Regularity can vary between languages. In addition, different patterns often develop distinctively in different languages, so that, for example, iambic tetrameter in Russian will generally reflect a regularity in the use of accents to reinforce the meter, which does not occur, or occurs to a much lesser extent, in English.[60]
66
+
67
+ Some common metrical patterns, with notable examples of poets and poems who use them, include:
68
+
69
+ Rhyme, alliteration, assonance and consonance are ways of creating repetitive patterns of sound. They may be used as an independent structural element in a poem, to reinforce rhythmic patterns, or as an ornamental element.[66] They can also carry a meaning separate from the repetitive sound patterns created. For example, Chaucer used heavy alliteration to mock Old English verse and to paint a character as archaic.[67]
70
+
71
+ Rhyme consists of identical ("hard-rhyme") or similar ("soft-rhyme") sounds placed at the ends of lines or at predictable locations within lines ("internal rhyme"). Languages vary in the richness of their rhyming structures; Italian, for example, has a rich rhyming structure permitting maintenance of a limited set of rhymes throughout a lengthy poem. The richness results from word endings that follow regular forms. English, with its irregular word endings adopted from other languages, is less rich in rhyme.[68] The degree of richness of a language's rhyming structures plays a substantial role in determining what poetic forms are commonly used in that language.[69]
72
+
73
+ Alliteration is the repetition of letters or letter-sounds at the beginning of two or more words immediately succeeding each other, or at short intervals; or the recurrence of the same letter in accented parts of words. Alliteration and assonance played a key role in structuring early Germanic, Norse and Old English forms of poetry. The alliterative patterns of early Germanic poetry interweave meter and alliteration as a key part of their structure, so that the metrical pattern determines when the listener expects instances of alliteration to occur. This can be compared to an ornamental use of alliteration in most Modern European poetry, where alliterative patterns are not formal or carried through full stanzas. Alliteration is particularly useful in languages with less rich rhyming structures.
74
+
75
+ Assonance, the use of similar vowel sounds within a word rather than similar sounds at the beginning or end of a word, was widely used in skaldic poetry but goes back to the Homeric epic.[70] Because verbs carry much of the pitch in the English language, assonance can loosely evoke the tonal elements of Chinese poetry and so is useful in translating Chinese poetry.[71] Consonance occurs where a consonant sound is repeated throughout a sentence without putting the sound only at the front of a word. Consonance produces a more subtle effect than alliteration and so is less useful as a structural element.[69]
76
+
77
+ In many languages, including modern European languages and Arabic, poets use rhyme in set patterns as a structural element for specific poetic forms, such as ballads, sonnets and rhyming couplets. However, the use of structural rhyme is not universal even within the European tradition. Much modern poetry avoids traditional rhyme schemes. Classical Greek and Latin poetry did not use rhyme.[72] Rhyme entered European poetry in the High Middle Ages, in part under the influence of the Arabic language in Al Andalus (modern Spain).[73] Arabic language poets used rhyme extensively from the first development of literary Arabic in the sixth century, as in their long, rhyming qasidas.[74] Some rhyming schemes have become associated with a specific language, culture or period, while other rhyming schemes have achieved use across languages, cultures or time periods. Some forms of poetry carry a consistent and well-defined rhyming scheme, such as the chant royal or the rubaiyat, while other poetic forms have variable rhyme schemes.[75]
78
+
79
+ Most rhyme schemes are described using letters that correspond to sets of rhymes, so if the first, second and fourth lines of a quatrain rhyme with each other and the third line does not, the quatrain is said to have an "aa-ba" rhyme scheme. This rhyme scheme is the one used, for example, in the rubaiyat form.[76] Similarly, an "a-bb-a" quatrain (what is known as "enclosed rhyme") is used in such forms as the Petrarchan sonnet.[77] Some types of more complicated rhyming schemes have developed names of their own, separate from the "a-bc" convention, such as the ottava rima and terza rima.[78] The types and use of differing rhyming schemes are discussed further in the main article.
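The lettering convention itself amounts to a simple procedure: read the lines in order, give each new rhyme sound the next unused letter, and reuse a letter whenever a line ends in a sound already seen. The sketch below is illustrative only; it uses a crude stand-in for rhyme detection (matching the last three letters of the final word), which real rhyme, being a matter of pronunciation, does not reduce to, and the quatrain is an invented example rather than a quotation.

import string

def rhyme_key(line):
    """Crude stand-in for a rhyme test: the last three letters of the final word."""
    word = line.rstrip(" .,;:!?").split()[-1].lower()
    return word[-3:]

def rhyme_scheme(lines):
    """Label each line with a letter, reusing letters for repeated rhyme sounds."""
    labels, seen = [], {}
    for line in lines:
        key = rhyme_key(line)
        if key not in seen:
            seen[key] = string.ascii_lowercase[len(seen)]
        labels.append(seen[key])
    return "".join(labels)

# An invented quatrain in which lines 1, 2 and 4 rhyme and line 3 does not.
quatrain = [
    "The night was calm and bright,",
    "The stars gave gentle light,",
    "A quiet wind passed through,",
    "And morning came in sight.",
]
print(rhyme_scheme(quatrain))   # prints "aaba", the aa-ba pattern described above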
80
+
81
+ Poetic form is more flexible in modernist and post-modernist poetry and continues to be less structured than in previous literary eras. Many modern poets eschew recognizable structures or forms and write in free verse. But poetry remains distinguished from prose by its form; some regard for basic formal structures of poetry will be found in even the best free verse, however much such structures may appear to have been ignored.[79] Similarly, in the best poetry written in classic styles there will be departures from strict form for emphasis or effect.[80]
82
+
83
+ Among major structural elements used in poetry are the line, the stanza or verse paragraph, and larger combinations of stanzas or lines such as cantos. Also sometimes used are broader visual presentations of words and calligraphy. These basic units of poetic form are often combined into larger structures, called poetic forms or poetic modes (see the following section), as in the sonnet.
84
+
85
+ Poetry is often separated into lines on a page, in a process known as lineation. These lines may be based on the number of metrical feet or may emphasize a rhyming pattern at the ends of lines. Lines may serve other functions, particularly where the poem is not written in a formal metrical pattern. Lines can separate, compare or contrast thoughts expressed in different units, or can highlight a change in tone.[81] See the article on line breaks for information about the division between lines.
86
+
87
+ Lines of poems are often organized into stanzas, which are denominated by the number of lines included. Thus a collection of two lines is a couplet (or distich), three lines a triplet (or tercet), four lines a quatrain, and so on. These lines may or may not relate to each other by rhyme or rhythm. For example, a couplet may be two lines with identical meters which rhyme or two lines held together by a common meter alone.[82]
88
+
89
+ Other poems may be organized into verse paragraphs, in which regular rhymes with established rhythms are not used, but the poetic tone is instead established by a collection of rhythms, alliterations, and rhymes established in paragraph form.[83] Many medieval poems were written in verse paragraphs, even where regular rhymes and rhythms were used.[84]
90
+
91
+ In many forms of poetry, stanzas are interlocking, so that the rhyming scheme or other structural elements of one stanza determine those of succeeding stanzas. Examples of such interlocking stanzas include, for example, the ghazal and the villanelle, where a refrain (or, in the case of the villanelle, refrains) is established in the first stanza which then repeats in subsequent stanzas. Related to the use of interlocking stanzas is their use to separate thematic parts of a poem. For example, the strophe, antistrophe and epode of the ode form are often separated into one or more stanzas.[85]
92
+
93
+ In some cases, particularly lengthier formal poetry such as some forms of epic poetry, stanzas themselves are constructed according to strict rules and then combined. In skaldic poetry, the dróttkvætt stanza had eight lines, each having three "lifts" produced with alliteration or assonance. In addition to two or three alliterations, the odd-numbered lines had partial rhyme of consonants with dissimilar vowels, not necessarily at the beginning of the word; the even lines contained internal rhyme in set syllables (not necessarily at the end of the word). Each half-line had exactly six syllables, and each line ended in a trochee. The arrangement of dróttkvætts followed far less rigid rules than the construction of the individual dróttkvætts.[86]
94
+
95
+ Even before the advent of printing, the visual appearance of poetry often added meaning or depth. Acrostic poems conveyed meanings in the initial letters of lines or in letters at other specific places in a poem.[87] In Arabic, Hebrew and Chinese poetry, the visual presentation of finely calligraphed poems has played an important part in the overall effect of many poems.[88]
96
+
97
+ With the advent of printing, poets gained greater control over the mass-produced visual presentations of their work. Visual elements have become an important part of the poet's toolbox, and many poets have sought to use visual presentation for a wide range of purposes. Some Modernist poets have made the placement of individual lines or groups of lines on the page an integral part of the poem's composition. At times, this complements the poem's rhythm through visual caesuras of various lengths, or creates juxtapositions so as to accentuate meaning, ambiguity or irony, or simply to create an aesthetically pleasing form. In its most extreme form, this can lead to concrete poetry or asemic writing.[89][90]
98
+
99
+ Poetic diction treats the manner in which language is used, and refers not only to the sound but also to the underlying meaning and its interaction with sound and form.[91] Many languages and poetic forms have very specific poetic dictions, to the point where distinct grammars and dialects are used specifically for poetry.[92][93] Registers in poetry can range from strict employment of ordinary speech patterns, as favoured in much late-20th-century prosody,[94] through to highly ornate uses of language, as in medieval and Renaissance poetry.[95]
100
+
101
+ Poetic diction can include rhetorical devices such as simile and metaphor, as well as tones of voice, such as irony. Aristotle wrote in the Poetics that "the greatest thing by far is to be a master of metaphor."[96] Since the rise of Modernism, some poets have opted for a poetic diction that de-emphasizes rhetorical devices, attempting instead the direct presentation of things and experiences and the exploration of tone.[97] On the other hand, Surrealists have pushed rhetorical devices to their limits, making frequent use of catachresis.[98]
102
+
103
+ Allegorical stories are central to the poetic diction of many cultures, and were prominent in the West during classical times, the late Middle Ages and the Renaissance. Aesop's Fables, repeatedly rendered in both verse and prose since first being recorded about 500 BCE, are perhaps the richest single source of allegorical poetry through the ages.[99] Other notable examples include the Roman de la Rose, a 13th-century French poem, William Langland's Piers Ploughman in the 14th century, and Jean de la Fontaine's Fables (influenced by Aesop's) in the 17th century. Rather than being fully allegorical, however, a poem may contain symbols or allusions that deepen the meaning or effect of its words without constructing a full allegory.[100]
104
+
105
+ Another element of poetic diction can be the use of vivid imagery for effect. The juxtaposition of unexpected or impossible images is, for example, a particularly strong element in surrealist poetry and haiku.[101] Vivid images are often endowed with symbolism or metaphor. Many poetic dictions use repetitive phrases for effect, either a short phrase (such as Homer's "rosy-fingered dawn" or "the wine-dark sea") or a longer refrain. Such repetition can add a somber tone to a poem, or can be laced with irony as the context of the words changes.[102]
106
+
107
+ Specific poetic forms have been developed by many cultures. In more developed, closed or "received" poetic forms, the rhyming scheme, meter and other elements of a poem are based on sets of rules, ranging from the relatively loose rules that govern the construction of an elegy to the highly formalized structure of the ghazal or villanelle.[103] Described below are some common forms of poetry widely used across a number of languages. Additional forms of poetry may be found in the discussions of the poetry of particular cultures or periods and in the glossary.
108
+
109
+ Among the most common forms of poetry, popular from the Late Middle Ages on, is the sonnet, which by the 13th century had become standardized as fourteen lines following a set rhyme scheme and logical structure. By the 14th century and the Italian Renaissance, the form had further crystallized under the pen of Petrarch, whose sonnets were translated in the 16th century by Sir Thomas Wyatt, who is credited with introducing the sonnet form into English literature.[104] A traditional Italian or Petrarchan sonnet follows the rhyme scheme ABBA, ABBA, CDECDE, though some variation is common, especially within the final six lines (or sestet), where CDCDCD is perhaps the most frequent alternative.[105] The English (or Shakespearean) sonnet follows the rhyme scheme ABAB, CDCD, EFEF, GG, introducing a third quatrain (grouping of four lines), a final couplet, and a greater amount of variety with regard to rhyme than is usually found in its Italian predecessors. By convention, sonnets in English typically use iambic pentameter, while in the Romance languages, the hendecasyllable and Alexandrine are the most widely used meters.
110
+
111
+ Sonnets of all types often make use of a volta, or "turn," a point in the poem at which an idea is turned on its head, a question is answered (or introduced), or the subject matter is further complicated. This volta can often take the form of a "but" statement contradicting or complicating the content of the earlier lines. In the Petrarchan sonnet, the turn tends to fall around the division between the first two quatrains and the sestet, while English sonnets usually place it at or near the beginning of the closing couplet.
112
+
113
+ Sonnets are particularly associated with high poetic diction, vivid imagery, and romantic love, largely due to the influence of Petrarch as well as of early English practitioners such as Edmund Spenser (who gave his name to the Spenserian sonnet), Michael Drayton, and Shakespeare, whose sonnets are among the most famous in English poetry, with twenty being included in the Oxford Book of English Verse.[106] However, the twists and turns associated with the volta allow for a logical flexibility applicable to many subjects.[107] Poets from the earliest centuries of the sonnet to the present have utilized the form to address topics related to politics (John Milton, Percy Bysshe Shelley, Claude McKay), theology (John Donne, Gerard Manley Hopkins), war (Wilfred Owen, e.e. cummings), and gender and sexuality (Carol Ann Duffy). Further, postmodern authors such as Ted Berrigan and John Berryman have challenged the traditional definitions of the sonnet form, rendering entire sequences of "sonnets" that often lack rhyme, a clear logical progression, or even a consistent count of fourteen lines.
114
+
115
+ Shi (simplified Chinese: 诗; traditional Chinese: 詩; pinyin: shī; Wade–Giles: shih) is the main type of Classical Chinese poetry.[108] Within this form of poetry the most important variations are "folk song" styled verse (yuefu), "old style" verse (gushi), and "modern style" verse (jintishi). In all cases, rhyming is obligatory. The Yuefu is a folk ballad or a poem written in the folk ballad style, and the number of lines and the length of the lines could be irregular. For the other variations of shi poetry, generally either a four-line poem (the quatrain, or jueju) or an eight-line poem is normal, in either case with the even-numbered lines rhyming. Line length is measured in characters (by the convention that one character equals one syllable), and lines are predominantly either five or seven characters long, with a caesura before the final three syllables. The lines are generally end-stopped, considered as a series of couplets, and exhibit verbal parallelism as a key poetic device.[109] The "old style" verse (gushi) is less formally strict than the jintishi, or regulated verse, which, despite the name "new style" verse, actually had its theoretical basis laid as far back as Shen Yue (441–513 CE), although it is not considered to have reached its full development until the time of Chen Zi'ang (661–702 CE).[110] A good example of a poet known for his gushi poems is Li Bai (701–762 CE). Among its other rules, the jintishi form regulates the tonal variations within a poem, including the use of set patterns of the four tones of Middle Chinese. The basic form of jintishi (lüshi) has eight lines in four couplets, with parallelism between the lines in the second and third couplets. The couplets with parallel lines contain contrasting content but an identical grammatical relationship between words. Jintishi often have a rich poetic diction, full of allusion, and can have a wide range of subjects, including history and politics.[111][112] One of the masters of the form was Du Fu (712–770 CE), who wrote during the Tang Dynasty (8th century).[113]
116
+
117
+ The villanelle is a nineteen-line poem made up of five triplets with a closing quatrain; the poem is characterized by having two refrains, initially used in the first and third lines of the first stanza, and then alternately used at the close of each subsequent stanza until the final quatrain, which is concluded by the two refrains. The remaining lines of the poem have an a-b alternating rhyme.[114] The villanelle has been used regularly in the English language since the late 19th century by such poets as Dylan Thomas,[115] W. H. Auden,[116] and Elizabeth Bishop.[117]
118
+
119
+ A limerick is a poem that consists of five lines and is often humorous. Rhythm is very important in limericks: the first, second and fifth lines must have seven to ten syllables, while the third and fourth need only five to seven. The first, second and fifth lines rhyme with one another and share one rhythm, while the third and fourth rhyme with each other and share another, giving the characteristic aabba pattern.
120
+
121
+ Tanka is a form of unrhymed Japanese poetry, with five sections totalling 31 on (phonological units identical to morae), structured in a 5-7-5-7-7 pattern.[118] There is generally a shift in tone and subject matter between the upper 5-7-5 phrase and the lower 7-7 phrase. Tanka were written as early as the Asuka period by such poets as Kakinomoto no Hitomaro (fl. late 7th century), at a time when Japan was emerging from a period where much of its poetry followed Chinese form.[119] Tanka was originally the shorter form of Japanese formal poetry (which was generally referred to as "waka"), and was used more heavily to explore personal rather than public themes. By the tenth century, tanka had become the dominant form of Japanese poetry, to the point where the originally general term waka ("Japanese poetry") came to be used exclusively for tanka. Tanka are still widely written today.[120]
122
+
123
+ Haiku is a popular form of unrhymed Japanese poetry, which evolved in the 17th century from the hokku, or opening verse of a renku.[121] Generally written in a single vertical line, the haiku contains three sections totalling 17 on (morae), structured in a 5-7-5 pattern. Traditionally, haiku contain a kireji, or cutting word, usually placed at the end of one of the poem's three sections, and a kigo, or season-word.[122] The most famous exponent of the haiku was Matsuo Bashō (1644–1694). An example of his writing:[123]
124
+
125
+ The khlong (โคลง, [kʰlōːŋ]) is among the oldest Thai poetic forms. This is reflected in its requirements on the tone markings of certain syllables, which must be marked with mai ek (ไม้เอก, Thai pronunciation: [máj èːk], ◌่) or mai tho (ไม้โท, [máj tʰōː], ◌้). This was likely derived from when the Thai language had three tones (as opposed to today's five, a split which occurred during the Ayutthaya Kingdom period), two of which corresponded directly to the aforementioned marks. It is usually regarded as an advanced and sophisticated poetic form.[124]
126
+
127
+ In khlong, a stanza (bot, บท, Thai pronunciation: [bòt]) has a number of lines (bat, บาท, Thai pronunciation: [bàːt], from Pali and Sanskrit pāda), depending on the type. The bat are subdivided into two wak (วรรค, Thai pronunciation: [wák], from Sanskrit varga).[note 1] The first wak has five syllables, the second has a variable number, also depending on the type, and may be optional. The type of khlong is named by the number of bat in a stanza; it may also be divided into two main types: khlong suphap (โคลงสุภาพ, [kʰlōːŋ sù.pʰâːp]) and khlong dan (โคลงดั้น, [kʰlōːŋ dân]). The two differ in the number of syllables in the second wak of the final bat and inter-stanza rhyming rules.[124]
128
+
129
+ The khlong si suphap (โคลงสี่สุภาพ, [kʰlōːŋ sìː sù.pʰâːp]) is the most common form still currently employed. It has four bat per stanza (si translates as four). The first wak of each bat has five syllables. The second wak has two or four syllables in the first and third bat, two syllables in the second, and four syllables in the fourth. Mai ek is required for seven syllables and Mai tho is required for four, as shown below. "Dead word" syllables are allowed in place of syllables which require mai ek, and changing the spelling of words to satisfy the criteria is usually acceptable.
130
+
131
+ Odes were first developed by poets writing in ancient Greek, such as Pindar, and Latin, such as Horace. Forms of odes appear in many of the cultures that were influenced by the Greeks and Latins.[125] The ode generally has three parts: a strophe, an antistrophe, and an epode. The antistrophes of the ode possess similar metrical structures and, depending on the tradition, similar rhyme structures. In contrast, the epode is written with a different scheme and structure. Odes have a formal poetic diction and generally deal with a serious subject. The strophe and antistrophe look at the subject from different, often conflicting, perspectives, with the epode moving to a higher level to either view or resolve the underlying issues. Odes are often intended to be recited or sung by two choruses (or individuals), with the first reciting the strophe, the second the antistrophe, and both together the epode.[126] Over time, differing forms for odes have developed with considerable variations in form and structure, but generally showing the original influence of the Pindaric or Horatian ode. One non-Western form which resembles the ode is the qasida in Persian poetry.[127]
132
+
133
+ The ghazal (also ghazel, gazel, gazal, or gozol) is a form of poetry common in Arabic, Bengali, Persian and Urdu. In classic form, the ghazal has from five to fifteen rhyming couplets that share a refrain at the end of the second line. This refrain may be of one or several syllables and is preceded by a rhyme. Each line has an identical meter. The ghazal often reflects on a theme of unattainable love or divinity.[128]
134
+
135
+ As with other forms with a long history in many languages, many variations have been developed, including forms with a quasi-musical poetic diction in Urdu.[129] Ghazals have a classical affinity with Sufism, and a number of major Sufi religious works are written in ghazal form. The relatively steady meter and the use of the refrain produce an incantatory effect, which complements Sufi mystical themes well.[130] Among the masters of the form is Rumi, a 13th-century Persian poet.[131]
136
+ One of the most famous poets in this form is Hafez, whose poems often include the theme of exposing hypocrisy. His life and poems have been the subject of much analysis, commentary and interpretation, and have influenced post-fourteenth-century Persian writing more than those of any other author.[132][133] The West-östlicher Diwan of Johann Wolfgang von Goethe, a collection of lyrical poems, is inspired by the Persian poet Hafez.[134][135][136]
137
+
138
+ In addition to specific forms of poems, poetry is often thought of in terms of different genres and subgenres. A poetic genre is generally a tradition or classification of poetry based on the subject matter, style, or other broader literary characteristics.[137] Some commentators view genres as natural forms of literature. Others view the study of genres as the study of how different works relate and refer to other works.[138]
139
+
140
+ Narrative poetry is a genre of poetry that tells a story. Broadly it subsumes epic poetry, but the term "narrative poetry" is often reserved for smaller works, generally with more appeal to human interest. Narrative poetry may be the oldest type of poetry. Many scholars of Homer have concluded that his Iliad and Odyssey were composed of compilations of shorter narrative poems that related individual episodes. Much narrative poetry—such as Scottish and English ballads, and Baltic and Slavic heroic poems—is performance poetry with roots in a preliterate oral tradition. It has been speculated that some features that distinguish poetry from prose, such as meter, alliteration and kennings, once served as memory aids for bards who recited traditional tales.[139]
141
+
142
+ Notable narrative poets have included Ovid, Dante, Juan Ruiz, William Langland, Chaucer, Fernando de Rojas, Luís de Camões, Shakespeare, Alexander Pope, Robert Burns, Adam Mickiewicz, Alexander Pushkin, Edgar Allan Poe, Alfred Tennyson, and Anne Carson.
143
+
144
+ Lyric poetry is a genre that, unlike epic and dramatic poetry, does not attempt to tell a story but instead is of a more personal nature. Poems in this genre tend to be shorter, melodic, and contemplative. Rather than depicting characters and actions, it portrays the poet's own feelings, states of mind, and perceptions.[140] Notable poets in this genre include Christine de Pizan, John Donne, Charles Baudelaire, Gerard Manley Hopkins, Antonio Machado, and Edna St. Vincent Millay.
145
+
146
+ Epic poetry is a genre of poetry, and a major form of narrative literature. This genre is often defined as lengthy poems concerning events of a heroic or important nature to the culture of the time. It recounts, in a continuous narrative, the life and works of a heroic or mythological person or group of persons.[141] Examples of epic poems are Homer's Iliad and Odyssey, Virgil's Aeneid, the Nibelungenlied, Luís de Camões' Os Lusíadas, the Cantar de Mio Cid, the Epic of Gilgamesh, the Mahabharata, Valmiki's Ramayana, Ferdowsi's Shahnama, Nizami (or Nezami)'s Khamse (Five Books), and the Epic of King Gesar. While the composition of epic poetry, and of long poems generally, became less common in the West after the early 20th century, some notable epics have continued to be written. Derek Walcott won the 1992 Nobel Prize in Literature to a great extent on the basis of his epic, Omeros.[142]
147
+
148
+ Poetry can be a powerful vehicle for satire. The Romans had a strong tradition of satirical poetry, often written for political purposes. A notable example is the Roman poet Juvenal's satires.[143]
149
+
150
+ The same is true of the English satirical tradition. John Dryden (a Tory), the first Poet Laureate, produced in 1682 Mac Flecknoe, subtitled "A Satire on the True Blue Protestant Poet, T.S." (a reference to Thomas Shadwell).[144] Another master of 17th-century English satirical poetry was John Wilmot, 2nd Earl of Rochester.[145] Satirical poets outside England include Poland's Ignacy Krasicki, Azerbaijan's Sabir and Portugal's Manuel Maria Barbosa du Bocage.
151
+
152
+ An elegy is a mournful, melancholy or plaintive poem, especially a lament for the dead or a funeral song. The term "elegy," which originally denoted a type of poetic meter (elegiac meter), commonly describes a poem of mourning. An elegy may also reflect something that seems to the author to be strange or mysterious. The elegy, as a reflection on a death, on a sorrow more generally, or on something mysterious, may be classified as a form of lyric poetry.[146][147]
153
+
154
+ Notable practitioners of elegiac poetry have included Propertius, Jorge Manrique, Jan Kochanowski, Chidiock Tichborne, Edmund Spenser, Ben Jonson, John Milton, Thomas Gray, Charlotte Turner Smith, William Cullen Bryant, Percy Bysshe Shelley, Johann Wolfgang von Goethe, Evgeny Baratynsky, Alfred Tennyson, Walt Whitman, Antonio Machado, Juan Ramón Jiménez, Giannina Braschi, William Butler Yeats, Rainer Maria Rilke, and Virginia Woolf.
155
+
156
+ The fable is an ancient literary genre, often (though not invariably) set in verse. It is a succinct story that features anthropomorphised animals, legendary creatures, plants, inanimate objects, or forces of nature that illustrate a moral lesson (a "moral"). Verse fables have used a variety of meter and rhyme patterns.[148]
157
+
158
+ Notable verse fabulists have included Aesop, Vishnu Sarma, Phaedrus, Marie de France, Robert Henryson, Biernat of Lublin, Jean de La Fontaine, Ignacy Krasicki, Félix María de Samaniego, Tomás de Iriarte, Ivan Krylov and Ambrose Bierce.
159
+
160
+ Dramatic poetry is drama written in verse to be spoken or sung, and appears in varying, sometimes related forms in many cultures. Greek tragedy in verse dates to the 6th century BCE, and may have been an influence on the development of Sanskrit drama,[149] just as Indian drama in turn appears to have influenced the development of the bianwen verse dramas in China, forerunners of Chinese Opera.[150] East Asian verse dramas also include Japanese Noh. Examples of dramatic poetry in Persian literature include Nizami's two famous dramatic works, Layla and Majnun and Khosrow and Shirin, Ferdowsi's tragedies such as Rostam and Sohrab, Rumi's Masnavi, Gorgani's tragedy of Vis and Ramin, and Vahshi's tragedy of Farhad.
161
+
162
+ Speculative poetry, also known as fantastic poetry (of which weird or macabre poetry is a major sub-classification), is a poetic genre which deals thematically with subjects which are "beyond reality", whether via extrapolation as in science fiction or via weird and horrific themes as in horror fiction. Such poetry appears regularly in modern science fiction and horror fiction magazines. Edgar Allan Poe is sometimes seen as the "father of speculative poetry".[151] Poe's most remarkable achievement in the genre was his anticipation, by three-quarters of a century, of the Big Bang theory of the universe's origin, in his then much-derided 1848 essay (which, due to its very speculative nature, he termed a "prose poem"), Eureka: A Prose Poem.[152][153]
163
+
164
+ Prose poetry is a hybrid genre that shows attributes of both prose and poetry. It may be indistinguishable from the micro-story (a.k.a. the "short short story", "flash fiction"). While some examples of earlier prose strike modern readers as poetic, prose poetry is commonly regarded as having originated in 19th-century France, where its practitioners included Aloysius Bertrand, Charles Baudelaire, Arthur Rimbaud and Stéphane Mallarmé.[154] Since the late 1980s especially, prose poetry has gained increasing popularity, with entire journals, such as The Prose Poem: An International Journal,[155] Contemporary Haibun Online,[156] and Haibun Today[157] devoted to that genre and its hybrids. Latin American poets of the 20th century who wrote prose poems include Octavio Paz and Giannina Braschi.[158][159]
165
+
166
+ Light poetry, or light verse, is poetry that attempts to be humorous. Poems considered "light" are usually brief, and can be on a frivolous or serious subject, and often feature word play, including puns, adventurous rhyme and heavy alliteration. Although a few free verse poets have excelled at light verse outside the formal verse tradition, light verse in English usually obeys at least some formal conventions. Common forms include the limerick, the clerihew, and the double dactyl.
167
+
168
+ While light poetry is sometimes condemned as doggerel, or thought of as poetry composed casually, humor often makes a serious point in a subtle or subversive way. Many of the most renowned "serious" poets have also excelled at light verse. Notable writers of light poetry include Lewis Carroll, Ogden Nash, X. J. Kennedy, Willard R. Espy, and Wendy Cope.
169
+
170
+ Slam poetry as a genre originated in 1986 in Chicago, Illinois, when Marc Kelly Smith organized the first slam.[160][161] Slam performers comment emotively, aloud before an audience, on personal, social, or other matters. Slam focuses on the aesthetics of word play, intonation, and voice inflection. Slam poetry is often competitive, at dedicated "poetry slam" contests.[162]
171
+
en/4683.html.txt ADDED
@@ -0,0 +1,171 @@
1
+
2
+
3
+
4
+
5
+ Poetry (derived from the Greek poiesis, "making") is a form of literature that uses aesthetic and often rhythmic[1][2][3] qualities of language—such as phonaesthetics, sound symbolism, and metre—to evoke meanings in addition to, or in place of, the prosaic ostensible meaning.
6
+
7
+ Poetry has a long history – dating back to prehistoric times with hunting poetry in Africa, and to panegyric and elegiac court poetry of the empires of the Nile, Niger, and Volta River valleys.[4] Some of the earliest written poetry in Africa occurs among the Pyramid Texts written during the 25th century BCE. The earliest surviving Western Asian epic poetry, the Epic of Gilgamesh, was written in Sumerian.
8
+
9
+ Early poems in the Eurasian continent evolved from folk songs such as the Chinese Shijing; or from a need to retell oral epics, as with the Sanskrit Vedas, the Zoroastrian Gathas, and the Homeric epics, the Iliad and the Odyssey. Ancient Greek attempts to define poetry, such as Aristotle's Poetics, focused on the uses of speech in rhetoric, drama, song, and comedy. Later attempts concentrated on features such as repetition, verse form, and rhyme, and emphasized the aesthetics which distinguish poetry from more objectively-informative prosaic writing.
10
+
11
+ Poetry uses forms and conventions to suggest differential interpretations of words, or to evoke emotive responses. Devices such as assonance, alliteration, onomatopoeia, and rhythm may convey musical or incantatory effects. The use of ambiguity, symbolism, irony, and other stylistic elements of poetic diction often leaves a poem open to multiple interpretations. Similarly, figures of speech such as metaphor, simile, and metonymy[5] establish a resonance between otherwise disparate images—a layering of meanings, forming connections previously not perceived. Kindred forms of resonance may exist, between individual verses, in their patterns of rhyme or rhythm.
12
+
13
+ Some poetry types are unique to particular cultures and genres and respond to characteristics of the language in which the poet writes. Readers accustomed to identifying poetry with Dante, Goethe, Mickiewicz, or Rumi may think of it as written in lines based on rhyme and regular meter. There are, however, traditions, such as Biblical poetry, that use other means to create rhythm and euphony. Much modern poetry reflects a critique of poetic tradition,[6] testing the principle of euphony itself or altogether forgoing rhyme or set rhythm.[7][8]
14
+ In an increasingly globalized world, poets often adapt forms, styles, and techniques from diverse cultures and languages.
15
+
16
+ A Western cultural tradition (which extends at least from Homer to Rilke) associates the production of poetry with inspiration – often by a Muse (either classical or contemporary).
17
+
18
+ Some scholars believe that the art of poetry may predate literacy.[9][10]
19
+ Others, however, suggest that poetry did not necessarily predate writing.[11][need quotation to verify]
20
+
21
+ The oldest surviving epic poem, the Epic of Gilgamesh, dates from the 3rd millennium BCE in Sumer (in Mesopotamia, now Iraq), and was written in cuneiform script on clay tablets and, later, on papyrus.[12] A tablet dating to c. 2000 BCE, catalogued as #2461, describes an annual rite in which the king symbolically married and mated with the goddess Inanna to ensure fertility and prosperity; some have labelled it the world's oldest love poem.[13][14] An example of Egyptian epic poetry is The Story of Sinuhe (c. 1800 BCE).
22
+
23
+ Other ancient epic poetry includes the Greek epics, the Iliad and the Odyssey; the Avestan books, the Gathic Avesta and the Yasna; the Roman national epic, Virgil's Aeneid (written between 29 and 19 BCE); and the Indian epics, the Ramayana and the Mahabharata. Epic poetry, including the Odyssey, the Gathas, and the Indian Vedas, appears to have been composed in poetic form as an aid to memorization and oral transmission in prehistoric and ancient societies.[11][15]
24
+
25
+ Other forms of poetry developed directly from folk songs. The earliest entries in the oldest extant collection of Chinese poetry, the Shijing, were initially lyrics.[16]
26
+
27
+ The efforts of ancient thinkers to determine what makes poetry distinctive as a form, and what distinguishes good poetry from bad, resulted in "poetics"—the study of the aesthetics of poetry.[17] Some ancient societies, such as China's through her Shijing (Classic of Poetry), developed canons of poetic works that had ritual as well as aesthetic importance.[18] More recently, thinkers have struggled to find a definition that could encompass formal differences as great as those between Chaucer's Canterbury Tales and Matsuo Bashō's Oku no Hosomichi, as well as differences in content spanning Tanakh religious poetry, love poetry, and rap.[19]
28
+
29
+ Classical thinkers in the West employed classification as a way to define and assess the quality of poetry. Notably, the existing fragments of Aristotle's Poetics describe three genres of poetry—the epic, the comic, and the tragic—and develop rules to distinguish the highest-quality poetry in each genre, based on the perceived underlying purposes of the genre.[20] Later aestheticians identified three major genres: epic poetry, lyric poetry, and dramatic poetry, treating comedy and tragedy as subgenres of dramatic poetry.[21]
30
+
31
+ Aristotle's work was influential throughout the Middle East during the Islamic Golden Age,[22] as well as in Europe during the Renaissance.[23] Later poets and aestheticians often distinguished poetry from, and defined it in opposition to prose, which they generally understood as writing with a proclivity to logical explication and a linear narrative structure.[24]
32
+
33
+ This does not imply that poetry is illogical or lacks narration, but rather that poetry is an attempt to render the beautiful or sublime without the burden of engaging the logical or narrative thought-process. English Romantic poet John Keats termed this escape from logic "Negative capability".[25] This "romantic" approach views form as a key element of successful poetry because form is abstract and distinct from the underlying notional logic. This approach remained influential into the 20th century.[26]
34
+
35
+ During this period,[when?] there was also substantially more interaction among the various poetic traditions, in part due to the spread of European colonialism and the attendant rise in global trade.[27] In addition to a boom in translation, during the Romantic period numerous ancient works were rediscovered.[28]
36
+
37
+ Some 20th-century literary theorists rely less on the ostensible opposition of prose and poetry, instead focusing on the poet as simply one who creates using language, and poetry as what the poet creates.[29] The underlying concept of the poet as creator is not uncommon, and some modernist poets essentially do not distinguish between the creation of a poem with words, and creative acts in other media. Yet other modernists challenge the very attempt to define poetry as misguided.[30]
38
+
39
+ The rejection of traditional forms and structures for poetry that began in the first half of the 20th century coincided with a questioning of the purpose and meaning of traditional definitions of poetry and of distinctions between poetry and prose, particularly given examples of poetic prose and prosaic poetry. Numerous modernist poets have written in non-traditional forms or in what traditionally would have been considered prose, although their writing was generally infused with poetic diction and often with rhythm and tone established by non-metrical means. While there was a substantial formalist reaction within the modernist schools to the breakdown of structure, this reaction focused as much on the development of new formal structures and syntheses as on the revival of older forms and structures.[31]
40
+
41
+ Recently,[when?] postmodernism has come to regard prose and poetry more completely as distinct entities, and different genres of poetry as having meaning only as cultural artifacts. Postmodernism goes beyond modernism's emphasis on the creative role of the poet, to emphasize the role of the reader of a text (hermeneutics), and to highlight the complex cultural web within which a poem is read.[32] Today, throughout the world, poetry often incorporates poetic form and diction from other cultures and from the past, further confounding attempts at definition and classification that once made sense within a tradition such as the Western canon.[33]
42
+
43
+ The early 21st-century poetic tradition appears to continue to strongly orient itself to earlier precursor poetic traditions such as those initiated by Whitman, Emerson, and Wordsworth. The literary critic Geoffrey Hartman (1929–2016) used the phrase "the anxiety of demand" to describe the contemporary response to older poetic traditions as "being fearful that the fact no longer has a form",[34] building on a trope introduced by Emerson. Emerson had maintained that, in the debate concerning poetic structure where either "form" or "fact" could predominate, one need simply "Ask the fact for the form." This has been challenged at various levels by other literary scholars such as Harold Bloom (1930–2019), who has stated: "The generation of poets who stand together now, mature and ready to write the major American verse of the twenty-first century, may yet be seen as what Stevens called 'a great shadow's last embellishment,' the shadow being Emerson's."[35]
44
+
45
+ Prosody is the study of the meter, rhythm, and intonation of a poem. Rhythm and meter are different, although closely related.[36] Meter is the definitive pattern established for a verse (such as iambic pentameter), while rhythm is the actual sound that results from a line of poetry. Prosody also may be used more specifically to refer to the scanning of poetic lines to show meter.[37]
46
+
47
+ The methods for creating poetic rhythm vary across languages and between poetic traditions. Languages are often described as having timing set primarily by accents, syllables, or moras, depending on how rhythm is established, though a language can be influenced by multiple approaches. Japanese is a mora-timed language. Latin, Catalan, French, Leonese, Galician and Spanish are called syllable-timed languages. Stress-timed languages include English, Russian and, generally, German.[38] Varying intonation also affects how rhythm is perceived. Languages can rely on either pitch or tone. Some languages with a pitch accent are Vedic Sanskrit or Ancient Greek. Tonal languages include Chinese, Vietnamese and most Subsaharan languages.[39]
48
+
49
+ Metrical rhythm generally involves precise arrangements of stresses or syllables into repeated patterns called feet within a line. In Modern English verse the pattern of stresses primarily differentiates feet, so rhythm based on meter in Modern English is most often founded on the pattern of stressed and unstressed syllables (alone or elided).[40] In the classical languages, on the other hand, while the metrical units are similar, vowel length rather than stress defines the meter.[41] Old English poetry used a metrical pattern involving varied numbers of syllables but a fixed number of strong stresses in each line.[42]
50
+
51
+ The chief device of ancient Hebrew Biblical poetry, including many of the psalms, was parallelism, a rhetorical structure in which successive lines reflected each other in grammatical structure, sound structure, notional content, or all three. Parallelism lent itself to antiphonal or call-and-response performance, which could also be reinforced by intonation. Thus, Biblical poetry relies much less on metrical feet to create rhythm, but instead creates rhythm based on much larger sound units of lines, phrases and sentences.[43] Some classical poetry forms, such as Venpa of the Tamil language, had rigid grammars (to the point that they could be expressed as a context-free grammar) which ensured a rhythm.[44]
52
+
53
+ Classical Chinese poetics, based on the tone system of Middle Chinese, recognized two kinds of tones: the level (平 píng) tone and the oblique (仄 zè) tones, a category consisting of the rising (上 shǎng) tone, the departing (去 qù) tone and the entering (入 rù) tone. Certain forms of poetry placed constraints on which syllables were required to be level and which oblique.
54
+
55
+ The formal patterns of meter used in Modern English verse to create rhythm no longer dominate contemporary English poetry. In the case of free verse, rhythm is often organized based on looser units of cadence rather than a regular meter. Robinson Jeffers, Marianne Moore, and William Carlos Williams are three notable poets who reject the idea that regular accentual meter is critical to English poetry.[45] Jeffers experimented with sprung rhythm as an alternative to accentual rhythm.[46]
56
+
57
+ In the Western poetic tradition, meters are customarily grouped according to a characteristic metrical foot and the number of feet per line.[48] The number of metrical feet in a line is described using Greek terminology: tetrameter for four feet and hexameter for six feet, for example.[49] Thus, "iambic pentameter" is a meter comprising five feet per line, in which the predominant kind of foot is the "iamb". This metric system originated in ancient Greek poetry, and was used by poets such as Pindar and Sappho, and by the great tragedians of Athens. Similarly, "dactylic hexameter" comprises six feet per line, of which the dominant kind of foot is the "dactyl". Dactylic hexameter was the traditional meter of Greek epic poetry, the earliest extant examples of which are the works of Homer and Hesiod.[50] Iambic pentameter and dactylic hexameter were later used by a number of poets, including William Shakespeare and Henry Wadsworth Longfellow, respectively.[51] The most common metrical feet in English are the iamb (unstressed, stressed), the trochee (stressed, unstressed), the anapaest (unstressed, unstressed, stressed), the dactyl (stressed, unstressed, unstressed), the spondee (stressed, stressed), and the pyrrhic (unstressed, unstressed).[52]
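To make the foot definitions concrete, here is a minimal illustrative sketch, not from the article and not a real scansion tool: each common English foot is written as a pattern of unstressed ("u") and stressed ("s") syllables, and a naive scanner counts how many copies of a single foot make up a line's stress pattern. The function name and the hand-annotated stress strings are assumptions for illustration; deciding which syllables are stressed is left to the reader.

```python
# Illustrative sketch only: common English feet as stress patterns
# ("u" = unstressed, "s" = stressed), plus a naive scanner that counts how
# many copies of one foot make up a hand-annotated stress pattern.
FEET = {
    "iamb": "us",
    "trochee": "su",
    "dactyl": "suu",
    "anapaest": "uus",
    "spondee": "ss",
    "pyrrhic": "uu",
}

def scan(stress_pattern: str, foot_name: str) -> int:
    """Return how many whole feet of the given type the pattern contains."""
    foot = FEET[foot_name]
    count, rest = 0, stress_pattern
    while rest.startswith(foot):
        rest = rest[len(foot):]
        count += 1
    if rest:
        raise ValueError(f"pattern is not made purely of {foot_name}s")
    return count

print(scan("us" * 5, "iamb"))     # 5 -> iambic pentameter
print(scan("suu" * 6, "dactyl"))  # 6 -> dactylic hexameter (ignoring the usual final substitution)
```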
58
+
59
+ There is a wide range of names for other types of feet, right up to a choriamb, a four-syllable metrical foot with a stressed syllable followed by two unstressed syllables and closing with a stressed syllable. The choriamb is derived from some ancient Greek and Latin poetry.[50] Languages which utilize vowel length or intonation rather than, or in addition to, syllabic accents in determining meter, such as Ottoman Turkish or Vedic, often have concepts similar to the iamb and dactyl to describe common combinations of long and short sounds.[54]
60
+
61
+ Each of these types of feet has a certain "feel," whether alone or in combination with other feet. The iamb, for example, is the most natural form of rhythm in the English language, and generally produces a subtle but stable verse.[55] Scanning meter can often show the basic or fundamental pattern underlying a verse, but does not show the varying degrees of stress, nor the differing pitches and lengths of syllables.[56]
62
+
63
+ There is debate over how useful a multiplicity of different "feet" is in describing meter. For example, Robert Pinsky has argued that while dactyls are important in classical verse, English dactylic verse uses dactyls very irregularly and can be better described based on patterns of iambs and anapests, feet which he considers natural to the language.[57] Actual rhythm is significantly more complex than the basic scanned meter described above, and many scholars have sought to develop systems that would scan such complexity. Vladimir Nabokov noted that overlaid on top of the regular pattern of stressed and unstressed syllables in a line of verse was a separate pattern of accents resulting from the natural pitch of the spoken words, and suggested that the term "scud" be used to distinguish an unaccented stress from an accented stress.[58]
64
+
65
+ Different traditions and genres of poetry tend to use different meters, ranging from the Shakespearean iambic pentameter and the Homeric dactylic hexameter to the anapestic tetrameter used in many nursery rhymes. However, a number of variations to the established meter are common, both to provide emphasis or attention to a given foot or line and to avoid boring repetition. For example, the stress in a foot may be inverted, a caesura (or pause) may be added (sometimes in place of a foot or stress), or the final foot in a line may be given a feminine ending to soften it or be replaced by a spondee to emphasize it and create a hard stop. Some patterns (such as iambic pentameter) tend to be fairly regular, while other patterns, such as dactylic hexameter, tend to be highly irregular.[59] Regularity can vary between languages. In addition, different patterns often develop distinctively in different languages, so that, for example, iambic tetrameter in Russian will generally reflect a regularity in the use of accents to reinforce the meter, which does not occur, or occurs to a much lesser extent, in English.[60]
66
+
67
+ Some common metrical patterns, with notable examples of poets and poems who use them, include:
68
+
69
+ Rhyme, alliteration, assonance and consonance are ways of creating repetitive patterns of sound. They may be used as an independent structural element in a poem, to reinforce rhythmic patterns, or as an ornamental element.[66] They can also carry a meaning separate from the repetitive sound patterns created. For example, Chaucer used heavy alliteration to mock Old English verse and to paint a character as archaic.[67]
70
+
71
+ Rhyme consists of identical ("hard-rhyme") or similar ("soft-rhyme") sounds placed at the ends of lines or at predictable locations within lines ("internal rhyme"). Languages vary in the richness of their rhyming structures; Italian, for example, has a rich rhyming structure permitting maintenance of a limited set of rhymes throughout a lengthy poem. The richness results from word endings that follow regular forms. English, with its irregular word endings adopted from other languages, is less rich in rhyme.[68] The degree of richness of a language's rhyming structures plays a substantial role in determining what poetic forms are commonly used in that language.[69]
72
+
73
+ Alliteration is the repetition of letters or letter-sounds at the beginning of two or more words immediately succeeding each other, or at short intervals; or the recurrence of the same letter in accented parts of words. Alliteration and assonance played a key role in structuring early Germanic, Norse and Old English forms of poetry. The alliterative patterns of early Germanic poetry interweave meter and alliteration as a key part of their structure, so that the metrical pattern determines when the listener expects instances of alliteration to occur. This can be compared to an ornamental use of alliteration in most Modern European poetry, where alliterative patterns are not formal or carried through full stanzas. Alliteration is particularly useful in languages with less rich rhyming structures.
74
+
75
+ Assonance, the use of similar vowel sounds within a word rather than similar sounds at the beginning or end of a word, was widely used in skaldic poetry but goes back to the Homeric epic.[70] Because verbs carry much of the pitch in the English language, assonance can loosely evoke the tonal elements of Chinese poetry and so is useful in translating Chinese poetry.[71] Consonance occurs where a consonant sound is repeated throughout a sentence without putting the sound only at the front of a word. Consonance provokes a more subtle effect than alliteration and so is less useful as a structural element.[69]
76
+
77
+ In many languages, including modern European languages and Arabic, poets use rhyme in set patterns as a structural element for specific poetic forms, such as ballads, sonnets and rhyming couplets. However, the use of structural rhyme is not universal even within the European tradition. Much modern poetry avoids traditional rhyme schemes. Classical Greek and Latin poetry did not use rhyme.[72] Rhyme entered European poetry in the High Middle Ages, in part under the influence of the Arabic language in Al Andalus (modern Spain).[73] Arabic language poets used rhyme extensively from the first development of literary Arabic in the sixth century, as in their long, rhyming qasidas.[74] Some rhyming schemes have become associated with a specific language, culture or period, while other rhyming schemes have achieved use across languages, cultures or time periods. Some forms of poetry carry a consistent and well-defined rhyming scheme, such as the chant royal or the rubaiyat, while other poetic forms have variable rhyme schemes.[75]
78
+
79
+ Most rhyme schemes are described using letters that correspond to sets of rhymes, so if the first, second and fourth lines of a quatrain rhyme with each other and the third line does not rhyme, the quatrain is said to have an "aa-ba" rhyme scheme. This rhyme scheme is the one used, for example, in the rubaiyat form.[76] Similarly, an "a-bb-a" quatrain (what is known as "enclosed rhyme") is used in such forms as the Petrarchan sonnet.[77] Some types of more complicated rhyming schemes have developed names of their own, separate from the "a-b-c" convention, such as the ottava rima and terza rima.[78] The types and use of differing rhyming schemes are discussed further in the main article on rhyme schemes.
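As a rough illustration of the lettering convention just described (a sketch, not part of the source text): if each line ending is reduced to a hand-supplied "rhyme key" (real rhyme detection would need pronunciation data, which is not attempted here), scheme letters can be assigned in order of first appearance. The function name and example keys are invented for illustration.

```python
# Illustrative sketch only: assign scheme letters ("a", "b", ...) in order of
# first appearance, given a hand-supplied rhyme key for each line ending.
def rhyme_scheme(line_end_keys):
    letters = {}
    scheme = []
    for key in line_end_keys:
        if key not in letters:
            letters[key] = chr(ord("a") + len(letters))
        scheme.append(letters[key])
    return "".join(scheme)

# A rubaiyat-style quatrain: lines 1, 2 and 4 rhyme; line 3 does not.
print(rhyme_scheme(["-ight", "-ight", "-ove", "-ight"]))  # aaba
# An "enclosed rhyme" quatrain, as in the Petrarchan sonnet.
print(rhyme_scheme(["-ore", "-ain", "-ain", "-ore"]))     # abba
```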
80
+
81
+ Poetic form is more flexible in modernist and post-modernist poetry and continues to be less structured than in previous literary eras. Many modern poets eschew recognizable structures or forms and write in free verse. But poetry remains distinguished from prose by its form; some regard for basic formal structures of poetry will be found in even the best free verse, however much such structures may appear to have been ignored.[79] Similarly, in the best poetry written in classic styles there will be departures from strict form for emphasis or effect.[80]
82
+
83
+ Among major structural elements used in poetry are the line, the stanza or verse paragraph, and larger combinations of stanzas or lines such as cantos. Also sometimes used are broader visual presentations of words and calligraphy. These basic units of poetic form are often combined into larger structures, called poetic forms or poetic modes (see the following section), as in the sonnet.
84
+
85
+ Poetry is often separated into lines on a page, in a process known as lineation. These lines may be based on the number of metrical feet or may emphasize a rhyming pattern at the ends of lines. Lines may serve other functions, particularly where the poem is not written in a formal metrical pattern. Lines can separate, compare or contrast thoughts expressed in different units, or can highlight a change in tone.[81] See the article on line breaks for information about the division between lines.
86
+
87
+ Lines of poems are often organized into stanzas, which are denominated by the number of lines included. Thus a collection of two lines is a couplet (or distich), three lines a triplet (or tercet), four lines a quatrain, and so on. These lines may or may not relate to each other by rhyme or rhythm. For example, a couplet may be two lines with identical meters which rhyme or two lines held together by a common meter alone.[82]
88
+
89
+ Other poems may be organized into verse paragraphs, in which regular rhymes with established rhythms are not used, but the poetic tone is instead established by a collection of rhythms, alliterations, and rhymes established in paragraph form.[83] Many medieval poems were written in verse paragraphs, even where regular rhymes and rhythms were used.[84]
90
+
91
+ In many forms of poetry, stanzas are interlocking, so that the rhyming scheme or other structural elements of one stanza determine those of succeeding stanzas. Examples of such interlocking stanzas include, for example, the ghazal and the villanelle, where a refrain (or, in the case of the villanelle, refrains) is established in the first stanza which then repeats in subsequent stanzas. Related to the use of interlocking stanzas is their use to separate thematic parts of a poem. For example, the strophe, antistrophe and epode of the ode form are often separated into one or more stanzas.[85]
92
+
93
+ In some cases, particularly lengthier formal poetry such as some forms of epic poetry, stanzas themselves are constructed according to strict rules and then combined. In skaldic poetry, the dróttkvætt stanza had eight lines, each having three "lifts" produced with alliteration or assonance. In addition to two or three alliterations, the odd-numbered lines had partial rhyme of consonants with dissimilar vowels, not necessarily at the beginning of the word; the even lines contained internal rhyme in set syllables (not necessarily at the end of the word). Each half-line had exactly six syllables, and each line ended in a trochee. The arrangement of dróttkvætts followed far less rigid rules than the construction of the individual dróttkvætts.[86]
94
+
95
+ Even before the advent of printing, the visual appearance of poetry often added meaning or depth. Acrostic poems conveyed meanings in the initial letters of lines or in letters at other specific places in a poem.[87] In Arabic, Hebrew and Chinese poetry, the visual presentation of finely calligraphed poems has played an important part in the overall effect of many poems.[88]
96
+
97
+ With the advent of printing, poets gained greater control over the mass-produced visual presentations of their work. Visual elements have become an important part of the poet's toolbox, and many poets have sought to use visual presentation for a wide range of purposes. Some Modernist poets have made the placement of individual lines or groups of lines on the page an integral part of the poem's composition. At times, this complements the poem's rhythm through visual caesuras of various lengths, or creates juxtapositions so as to accentuate meaning, ambiguity or irony, or simply to create an aesthetically pleasing form. In its most extreme form, this can lead to concrete poetry or asemic writing.[89][90]
98
+
99
+ Poetic diction treats the manner in which language is used, and refers not only to the sound but also to the underlying meaning and its interaction with sound and form.[91] Many languages and poetic forms have very specific poetic dictions, to the point where distinct grammars and dialects are used specifically for poetry.[92][93] Registers in poetry can range from strict employment of ordinary speech patterns, as favoured in much late-20th-century prosody,[94] through to highly ornate uses of language, as in medieval and Renaissance poetry.[95]
100
+
101
+ Poetic diction can include rhetorical devices such as simile and metaphor, as well as tones of voice, such as irony. Aristotle wrote in the Poetics that "the greatest thing by far is to be a master of metaphor."[96] Since the rise of Modernism, some poets have opted for a poetic diction that de-emphasizes rhetorical devices, attempting instead the direct presentation of things and experiences and the exploration of tone.[97] On the other hand, Surrealists have pushed rhetorical devices to their limits, making frequent use of catachresis.[98]
102
+
103
+ Allegorical stories are central to the poetic diction of many cultures, and were prominent in the West during classical times, the late Middle Ages and the Renaissance. Aesop's Fables, repeatedly rendered in both verse and prose since first being recorded about 500 BCE, are perhaps the richest single source of allegorical poetry through the ages.[99] Other notable examples include the Roman de la Rose, a 13th-century French poem, William Langland's Piers Plowman in the 14th century, and Jean de la Fontaine's Fables (influenced by Aesop's) in the 17th century. Rather than being fully allegorical, however, a poem may contain symbols or allusions that deepen the meaning or effect of its words without constructing a full allegory.[100]
104
+
105
+ Another element of poetic diction can be the use of vivid imagery for effect. The juxtaposition of unexpected or impossible images is, for example, a particularly strong element in surrealist poetry and haiku.[101] Vivid images are often endowed with symbolism or metaphor. Many poetic dictions use repetitive phrases for effect, either a short phrase (such as Homer's "rosy-fingered dawn" or "the wine-dark sea") or a longer refrain. Such repetition can add a somber tone to a poem, or can be laced with irony as the context of the words changes.[102]
106
+
107
+ Specific poetic forms have been developed by many cultures. In more developed, closed or "received" poetic forms, the rhyming scheme, meter and other elements of a poem are based on sets of rules, ranging from the relatively loose rules that govern the construction of an elegy to the highly formalized structure of the ghazal or villanelle.[103] Described below are some common forms of poetry widely used across a number of languages. Additional forms of poetry may be found in the discussions of the poetry of particular cultures or periods and in the glossary.
108
+
109
+ Among the most common forms of poetry, popular from the Late Middle Ages on, is the sonnet, which by the 13th century had become standardized as fourteen lines following a set rhyme scheme and logical structure. By the 14th century and the Italian Renaissance, the form had further crystallized under the pen of Petrarch, whose sonnets were translated in the 16th century by Sir Thomas Wyatt, who is credited with introducing the sonnet form into English literature.[104] A traditional Italian or Petrarchan sonnet follows the rhyme scheme ABBA, ABBA, CDECDE, though variation is common, especially within the final six lines (or sestet), where CDCDCD is perhaps the most frequent alternative.[105] The English (or Shakespearean) sonnet follows the rhyme scheme ABAB, CDCD, EFEF, GG, introducing a third quatrain (grouping of four lines), a final couplet, and a greater amount of variety with regard to rhyme than is usually found in its Italian predecessors. By convention, sonnets in English typically use iambic pentameter, while in the Romance languages, the hendecasyllable and Alexandrine are the most widely used meters.
110
+
111
+ Sonnets of all types often make use of a volta, or "turn," a point in the poem at which an idea is turned on its head, a question is answered (or introduced), or the subject matter is further complicated. This volta can often take the form of a "but" statement contradicting or complicating the content of the earlier lines. In the Petrarchan sonnet, the turn tends to fall around the division between the first two quatrains and the sestet, while English sonnets usually place it at or near the beginning of the closing couplet.
112
+
113
+ Sonnets are particularly associated with high poetic diction, vivid imagery, and romantic love, largely due to the influence of Petrarch as well as of early English practitioners such as Edmund Spenser (who gave his name to the Spenserian sonnet), Michael Drayton, and Shakespeare, whose sonnets are among the most famous in English poetry, with twenty being included in the Oxford Book of English Verse.[106] However, the twists and turns associated with the volta allow for a logical flexibility applicable to many subjects.[107] Poets from the earliest centuries of the sonnet to the present have utilized the form to address topics related to politics (John Milton, Percy Bysshe Shelley, Claude McKay), theology (John Donne, Gerard Manley Hopkins), war (Wilfred Owen, e.e. cummings), and gender and sexuality (Carol Ann Duffy). Further, postmodern authors such as Ted Berrigan and John Berryman have challenged the traditional definitions of the sonnet form, rendering entire sequences of "sonnets" that often lack rhyme, a clear logical progression, or even a consistent count of fourteen lines.
114
+
115
+ Shi (simplified Chinese: 诗; traditional Chinese: 詩; pinyin: shī; Wade–Giles: shih) is the main type of Classical Chinese poetry.[108] Within this form of poetry the most important variations are "folk song" styled verse (yuefu), "old style" verse (gushi), and "modern style" verse (jintishi). In all cases, rhyming is obligatory. The yuefu is a folk ballad or a poem written in the folk ballad style, and the number of lines and the length of the lines could be irregular. For the other variations of shi poetry, generally either a four-line (quatrain, or jueju) or an eight-line poem is normal; either way with the even-numbered lines rhyming. Line length is measured by the number of characters (according to the convention that one character equals one syllable), and lines are predominantly either five or seven characters long, with a caesura before the final three syllables. The lines are generally end-stopped, considered as a series of couplets, and exhibit verbal parallelism as a key poetic device.[109] The "old style" verse (gushi) is less formally strict than the jintishi, or regulated verse, which, despite the name "new style" verse, actually had its theoretical basis laid as far back as Shen Yue (441–513 CE), although not considered to have reached its full development until the time of Chen Zi'ang (661–702 CE).[110] A good example of a poet known for his gushi poems is Li Bai (701–762 CE). Among its other rules, the jintishi rules regulate the tonal variations within a poem, including the use of set patterns of the four tones of Middle Chinese. The basic form of jintishi (lüshi) has eight lines in four couplets, with parallelism between the lines in the second and third couplets. The couplets with parallel lines contain contrasting content but an identical grammatical relationship between words. Jintishi often have a rich poetic diction, full of allusion, and can have a wide range of subject, including history and politics.[111][112] One of the masters of the form was Du Fu (712–770 CE), who wrote during the Tang Dynasty (8th century).[113]
116
+
117
+ The villanelle is a nineteen-line poem made up of five triplets with a closing quatrain; the poem is characterized by having two refrains, initially used in the first and third lines of the first stanza, and then alternately used at the close of each subsequent stanza until the final quatrain, which is concluded by the two refrains. The remaining lines of the poem have an a-b alternating rhyme.[114] The villanelle has been used regularly in the English language since the late 19th century by such poets as Dylan Thomas,[115] W. H. Auden,[116] and Elizabeth Bishop.[117]
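The villanelle's fixed skeleton can be written out programmatically; the following sketch is an illustration, not from the source, using "A1" and "A2" for the two refrains and lower-case letters for the two rhyme sounds of the remaining lines. The function name is invented.

```python
# Illustrative sketch only: "A1" and "A2" are the two refrains, "a" and "b"
# the two rhyme sounds of the remaining lines.
def villanelle_template():
    stanzas = [["A1", "b", "A2"]]                  # opening tercet introduces both refrains
    for i in range(4):                             # four middle tercets alternate the refrains
        stanzas.append(["a", "b", "A1" if i % 2 == 0 else "A2"])
    stanzas.append(["a", "b", "A1", "A2"])         # closing quatrain ends with both refrains
    return stanzas

template = villanelle_template()
print(sum(len(stanza) for stanza in template))     # 19 lines in total
for stanza in template:
    print(" ".join(stanza))
```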
118
+
119
+ A limerick is a poem that consists of five lines and is often humorous. Rhythm is very important in limericks: the first, second and fifth lines must have seven to ten syllables, while the third and fourth need only five to seven. The rhyme scheme is AABBA: the first, second and fifth lines rhyme with one another, as do the third and fourth, and lines that rhyme share the same rhythm.
120
+
121
+ Tanka is a form of unrhymed Japanese poetry, with five sections totalling 31 on (phonological units identical to morae), structured in a 5-7-5-7-7 pattern.[118] There is generally a shift in tone and subject matter between the upper 5-7-5 phrase and the lower 7-7 phrase. Tanka were written as early as the Asuka period by such poets as Kakinomoto no Hitomaro (fl. late 7th century), at a time when Japan was emerging from a period where much of its poetry followed Chinese form.[119] Tanka was originally the shorter form of Japanese formal poetry (which was generally referred to as "waka"), and was used more heavily to explore personal rather than public themes. By the tenth century, tanka had become the dominant form of Japanese poetry, to the point where the originally general term waka ("Japanese poetry") came to be used exclusively for tanka. Tanka are still widely written today.[120]
122
+
123
+ Haiku is a popular form of unrhymed Japanese poetry, which evolved in the 17th century from the hokku, or opening verse of a renku.[121] Generally written in a single vertical line, the haiku contains three sections totalling 17 on (morae), structured in a 5-7-5 pattern. Traditionally, haiku contain a kireji, or cutting word, usually placed at the end of one of the poem's three sections, and a kigo, or season-word.[122] The most famous exponent of the haiku was Matsuo Bashō (1644–1694). An example of his writing:[123]
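As a trivial illustration of the fixed on patterns mentioned for haiku and tanka (a sketch only; correctly counting on requires analysing the Japanese text, which is assumed to have been done beforehand, and the function name is invented):

```python
# Illustrative sketch only: the fixed on (morae) patterns; per-section counts
# are assumed to have been determined from the Japanese text already.
HAIKU = (5, 7, 5)
TANKA = (5, 7, 5, 7, 7)

def matches(section_counts, pattern):
    return tuple(section_counts) == pattern

print(matches([5, 7, 5], HAIKU))        # True
print(matches([5, 7, 5, 7, 7], TANKA))  # True
```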
124
+
125
+ The khlong (โคลง, [kʰlōːŋ]) is among the oldest Thai poetic forms. This is reflected in its requirements on the tone markings of certain syllables, which must be marked with mai ek (ไม้เอก, Thai pronunciation: [máj èːk], ◌่) or mai tho (ไม้โท, [máj tʰōː], ◌้). This was likely derived from when the Thai language had three tones (as opposed to today's five, a split which occurred during the Ayutthaya Kingdom period), two of which corresponded directly to the aforementioned marks. It is usually regarded as an advanced and sophisticated poetic form.[124]
126
+
127
+ In khlong, a stanza (bot, บท, Thai pronunciation: [bòt]) has a number of lines (bat, บาท, Thai pronunciation: [bàːt], from Pali and Sanskrit pāda), depending on the type. The bat are subdivided into two wak (วรรค, Thai pronunciation: [wák], from Sanskrit varga).[note 1] The first wak has five syllables, the second has a variable number, also depending on the type, and may be optional. The type of khlong is named by the number of bat in a stanza; it may also be divided into two main types: khlong suphap (โคลงสุภาพ, [kʰlōːŋ sù.pʰâːp]) and khlong dan (โคลงดั้น, [kʰlōːŋ dân]). The two differ in the number of syllables in the second wak of the final bat and inter-stanza rhyming rules.[124]
128
+
129
+ The khlong si suphap (โคลงสี่สุภาพ, [kʰlōːŋ sìː sù.pʰâːp]) is the most common form still currently employed. It has four bat per stanza (si translates as four). The first wak of each bat has five syllables. The second wak has two or four syllables in the first and third bat, two syllables in the second, and four syllables in the fourth. Mai ek is required for seven syllables and mai tho is required for four. "Dead word" syllables are allowed in place of syllables which require mai ek, and changing the spelling of words to satisfy the criteria is usually acceptable.
130
+
131
+ Odes were first developed by poets writing in ancient Greek, such as Pindar, and Latin, such as Horace. Forms of odes appear in many of the cultures that were influenced by the Greeks and Latins.[125] The ode generally has three parts: a strophe, an antistrophe, and an epode. The antistrophes of the ode possess similar metrical structures and, depending on the tradition, similar rhyme structures. In contrast, the epode is written with a different scheme and structure. Odes have a formal poetic diction and generally deal with a serious subject. The strophe and antistrophe look at the subject from different, often conflicting, perspectives, with the epode moving to a higher level to either view or resolve the underlying issues. Odes are often intended to be recited or sung by two choruses (or individuals), with the first reciting the strophe, the second the antistrophe, and both together the epode.[126] Over time, differing forms for odes have developed with considerable variations in form and structure, but generally showing the original influence of the Pindaric or Horatian ode. One non-Western form which resembles the ode is the qasida in Persian poetry.[127]
132
+
133
+ The ghazal (also ghazel, gazel, gazal, or gozol) is a form of poetry common in Arabic, Bengali, Persian and Urdu. In classic form, the ghazal has from five to fifteen rhyming couplets that share a refrain at the end of the second line. This refrain may be of one or several syllables and is preceded by a rhyme. Each line has an identical meter. The ghazal often reflects on a theme of unattainable love or divinity.[128]
134
+
135
+ As with other forms with a long history in many languages, many variations have been developed, including forms with a quasi-musical poetic diction in Urdu.[129] Ghazals have a classical affinity with Sufism, and a number of major Sufi religious works are written in ghazal form. The relatively steady meter and the use of the refrain produce an incantatory effect, which complements Sufi mystical themes well.[130] Among the masters of the form is Rumi, a 13th-century Persian poet.[131]
136
+ One of the most famous poets in this genre is Hafez, whose poems often include the theme of exposing hypocrisy. His life and poems have been the subject of much analysis, commentary and interpretation, influencing post-fourteenth-century Persian writing more than any other author.[132][133] The West-östlicher Diwan of Johann Wolfgang von Goethe, a collection of lyrical poems, was inspired by the Persian poet Hafez.[134][135][136]
137
+
138
+ In addition to specific forms of poems, poetry is often thought of in terms of different genres and subgenres. A poetic genre is generally a tradition or classification of poetry based on the subject matter, style, or other broader literary characteristics.[137] Some commentators view genres as natural forms of literature. Others view the study of genres as the study of how different works relate and refer to other works.[138]
139
+
140
+ Narrative poetry is a genre of poetry that tells a story. Broadly it subsumes epic poetry, but the term "narrative poetry" is often reserved for smaller works, generally with more appeal to human interest. Narrative poetry may be the oldest type of poetry. Many scholars of Homer have concluded that his Iliad and Odyssey were composed of compilations of shorter narrative poems that related individual episodes. Much narrative poetry—such as Scottish and English ballads, and Baltic and Slavic heroic poems—is performance poetry with roots in a preliterate oral tradition. It has been speculated that some features that distinguish poetry from prose, such as meter, alliteration and kennings, once served as memory aids for bards who recited traditional tales.[139]
141
+
142
+ Notable narrative poets have included Ovid, Dante, Juan Ruiz, William Langland, Chaucer, Fernando de Rojas, Luís de Camões, Shakespeare, Alexander Pope, Robert Burns, Adam Mickiewicz, Alexander Pushkin, Edgar Allan Poe, Alfred Tennyson, and Anne Carson.
143
+
144
+ Lyric poetry is a genre that, unlike epic and dramatic poetry, does not attempt to tell a story but instead is of a more personal nature. Poems in this genre tend to be shorter, melodic, and contemplative. Rather than depicting characters and actions, it portrays the poet's own feelings, states of mind, and perceptions.[140] Notable poets in this genre include Christine de Pizan, John Donne, Charles Baudelaire, Gerard Manley Hopkins, Antonio Machado, and Edna St. Vincent Millay.
145
+
146
+ Epic poetry is a genre of poetry, and a major form of narrative literature. This genre is often defined as lengthy poems concerning events of a heroic or important nature to the culture of the time. It recounts, in a continuous narrative, the life and works of a heroic or mythological person or group of persons.[141] Examples of epic poems are Homer's Iliad and Odyssey, Virgil's Aeneid, the Nibelungenlied, Luís de Camões' Os Lusíadas, the Cantar de Mio Cid, the Epic of Gilgamesh, the Mahabharata, Valmiki's Ramayana, Ferdowsi's Shahnama, Nizami (or Nezami)'s Khamse (Five Books), and the Epic of King Gesar. While the composition of epic poetry, and of long poems generally, became less common in the west after the early 20th century, some notable epics have continued to be written. Derek Walcott won a Nobel prize to a great extent on the basis of his epic, Omeros.[142]
147
+
148
+ Poetry can be a powerful vehicle for satire. The Romans had a strong tradition of satirical poetry, often written for political purposes. A notable example is the Roman poet Juvenal's satires.[143]
149
+
150
+ The same is true of the English satirical tradition. John Dryden (a Tory), the first Poet Laureate, produced in 1682 Mac Flecknoe, subtitled "A Satire on the True Blue Protestant Poet, T.S." (a reference to Thomas Shadwell).[144] Another master of 17th-century English satirical poetry was John Wilmot, 2nd Earl of Rochester.[145] Satirical poets outside England include Poland's Ignacy Krasicki, Azerbaijan's Sabir and Portugal's Manuel Maria Barbosa du Bocage.
151
+
152
+ An elegy is a mournful, melancholy or plaintive poem, especially a lament for the dead or a funeral song. The term "elegy," which originally denoted a type of poetic meter (elegiac meter), commonly describes a poem of mourning. An elegy may also reflect something that seems to the author to be strange or mysterious. The elegy, as a reflection on a death, on a sorrow more generally, or on something mysterious, may be classified as a form of lyric poetry.[146][147]
153
+
154
+ Notable practitioners of elegiac poetry have included Propertius, Jorge Manrique, Jan Kochanowski, Chidiock Tichborne, Edmund Spenser, Ben Jonson, John Milton, Thomas Gray, Charlotte Turner Smith, William Cullen Bryant, Percy Bysshe Shelley, Johann Wolfgang von Goethe, Evgeny Baratynsky, Alfred Tennyson, Walt Whitman, Antonio Machado, Juan Ramón Jiménez, Giannina Braschi, William Butler Yeats, Rainer Maria Rilke, and Virginia Woolf.
155
+
156
+ The fable is an ancient literary genre, often (though not invariably) set in verse. It is a succinct story that features anthropomorphised animals, legendary creatures, plants, inanimate objects, or forces of nature that illustrate a moral lesson (a "moral"). Verse fables have used a variety of meter and rhyme patterns.[148]
157
+
158
+ Notable verse fabulists have included Aesop, Vishnu Sarma, Phaedrus, Marie de France, Robert Henryson, Biernat of Lublin, Jean de La Fontaine, Ignacy Krasicki, Félix María de Samaniego, Tomás de Iriarte, Ivan Krylov and Ambrose Bierce.
159
+
160
+ Dramatic poetry is drama written in verse to be spoken or sung, and appears in varying, sometimes related forms in many cultures. Greek tragedy in verse dates to the 6th century BCE, and may have been an influence on the development of Sanskrit drama,[149] just as Indian drama in turn appears to have influenced the development of the bianwen verse dramas in China, forerunners of Chinese opera.[150] East Asian verse dramas also include Japanese Noh. Examples of dramatic poetry in Persian literature include Nizami's two famous dramatic works, Layla and Majnun and Khosrow and Shirin, Ferdowsi's tragedies such as Rostam and Sohrab, Rumi's Masnavi, Gorgani's tragedy of Vis and Ramin, and Vahshi's tragedy of Farhad.
161
+
162
+ Speculative poetry, also known as fantastic poetry (of which weird or macabre poetry is a major sub-classification), is a poetic genre which deals thematically with subjects which are "beyond reality", whether via extrapolation as in science fiction or via weird and horrific themes as in horror fiction. Such poetry appears regularly in modern science fiction and horror fiction magazines. Edgar Allan Poe is sometimes seen as the "father of speculative poetry".[151] Poe's most remarkable achievement in the genre was his anticipation, by three-quarters of a century, of the Big Bang theory of the universe's origin, in his then much-derided 1848 essay (which, due to its very speculative nature, he termed a "prose poem"), Eureka: A Prose Poem.[152][153]
163
+
164
+ Prose poetry is a hybrid genre that shows attributes of both prose and poetry. It may be indistinguishable from the micro-story (a.k.a. the "short short story", "flash fiction"). While some examples of earlier prose strike modern readers as poetic, prose poetry is commonly regarded as having originated in 19th-century France, where its practitioners included Aloysius Bertrand, Charles Baudelaire, Arthur Rimbaud and Stéphane Mallarmé.[154] Since the late 1980s especially, prose poetry has gained increasing popularity, with entire journals, such as The Prose Poem: An International Journal,[155] Contemporary Haibun Online,[156] and Haibun Today[157] devoted to that genre and its hybrids. Latin American poets of the 20th century who wrote prose poems include Octavio Paz and Giannina Braschi[158][159]
165
+
166
+ Light poetry, or light verse, is poetry that attempts to be humorous. Poems considered "light" are usually brief, and can be on a frivolous or serious subject, and often feature word play, including puns, adventurous rhyme and heavy alliteration. Although a few free verse poets have excelled at light verse outside the formal verse tradition, light verse in English usually obeys at least some formal conventions. Common forms include the limerick, the clerihew, and the double dactyl.
167
+
168
+ While light poetry is sometimes condemned as doggerel, or thought of as poetry composed casually, humor often makes a serious point in a subtle or subversive way. Many of the most renowned "serious" poets have also excelled at light verse. Notable writers of light poetry include Lewis Carroll, Ogden Nash, X. J. Kennedy, Willard R. Espy, and Wendy Cope.
169
+
170
+ Slam poetry as a genre originated in 1986 in Chicago, Illinois, when Marc Kelly Smith organized the first slam.[160][161] Slam performers comment emotively, aloud before an audience, on personal, social, or other matters. Slam focuses on the aesthetics of word play, intonation, and voice inflection. Slam poetry is often competitive, at dedicated "poetry slam" contests.[162]
171
+
en/4684.html.txt ADDED
@@ -0,0 +1,98 @@
1
+
2
+
3
+ The four cardinal directions, or cardinal points, are the directions north, east, south, and west, commonly denoted by their initials N, E, S, and W. East and west are perpendicular (at right angles) to north and south, with east being in the clockwise direction of rotation from north and west being directly opposite east. Points between the cardinal directions form the points of the compass.
4
+
5
+ The intercardinal (also called the intermediate directions and, historically, ordinal) directions are northeast (NE), southeast (SE), southwest (SW), and northwest (NW). The direction midway between each cardinal direction and its neighbouring intercardinal direction is called a secondary intercardinal direction; these are the eight shortest points on a compass rose (e.g. NNE, ENE, and ESE).
6
+
7
+ To keep to a bearing is not, in general, the same as going in a straight direction along a great circle. Conversely, one can keep to a great circle and the bearing may change. Thus the bearing of a straight path crossing the North Pole changes abruptly at the Pole from North to South. When travelling East or West, it is only on the Equator that one can keep East or West and be going straight (without the need to steer). Anywhere else, maintaining latitude requires a continual change in direction, that is, steering. However, this change in direction becomes increasingly negligible as one moves to lower latitudes.
8
+
9
+ The Earth has a magnetic field which is approximately aligned with its axis of rotation. A magnetic compass is a device that uses this field to determine the cardinal directions. Magnetic compasses are widely used, but only moderately accurate. The north pole of the magnetic needle points towards the geographic north pole of the Earth (and vice versa) because the magnetic pole that lies near the geographic north pole is, in magnetic terms, a south pole. This south magnetic pole, offset by about 17 degrees from the geographic north pole, attracts the north pole of the compass needle, and vice versa.
10
+
11
+ The position of the Sun in the sky can be used for orientation if the general time of day is known. In the morning the Sun rises roughly in the east (due east only on the equinoxes) and tracks upwards. In the evening it sets in the west, again roughly and only due west exactly on the equinoxes. In the middle of the day, it is to the south for viewers in the Northern Hemisphere, who live north of the Tropic of Cancer, and to the north for those in the Southern Hemisphere, who live south of the Tropic of Capricorn. This method does not work very well when closer to the equator (i.e. between the Tropic of Cancer and the Tropic of Capricorn) since, in the northern hemisphere, the sun may be directly overhead or even to the north in summer. Conversely, at low latitudes in the southern hemisphere the sun may be to the south of the observer in summer. In these locations, one needs first to determine whether the sun is moving from east to west through north or south by watching its movements—left to right means it is going through south, while right to left means it is going through north; or one can watch the sun's shadows. If they move clockwise, the sun will be in the south at midday, and if they move anticlockwise, then the sun will be in the north at midday.
12
+
13
+ Because of the Earth's axial tilt, no matter what the location of the viewer, there are only two days each year when the sun rises precisely due east. These days are the equinoxes. On all other days, depending on the time of year, the sun rises either north or south of true east (and sets north or south of true west). For all locations, the sun is seen to rise north of east (and set north of west) from the Northward equinox to the Southward equinox, and rise south of east (and set south of west) from the Southward equinox to the Northward equinox.
14
+
15
+ There is a traditional method by which an analogue watch can be used to locate north and south. The Sun appears to move in the sky over a 24-hour period while the hour hand of a 12-hour clock dial takes twelve hours to complete one rotation. In the northern hemisphere, if the watch is rotated so that the hour hand points toward the Sun, the point halfway between the hour hand and 12 o'clock will indicate south. For this method to work in the southern hemisphere, the 12 is pointed toward the Sun and the point halfway between the hour hand and 12 o'clock will indicate north. During daylight saving time, the same method can be employed using 1 o'clock instead of 12. The difference between local time and zone time, the equation of time, and (near the tropics) the non-uniform change of the Sun's azimuth at different times of day limit the accuracy of this method.
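A back-of-the-envelope sketch of the northern-hemisphere watch method described above, assuming local solar time and that the hour hand is pointed at the Sun; the function name and example time are illustrative only, and the accuracy caveats in the paragraph above still apply.

```python
# Illustrative sketch only (northern hemisphere, local solar time, hour hand
# pointed at the Sun): south lies halfway between the hour hand and the 12
# mark, i.e. at half the hour hand's angle back from the Sun's direction.
def south_offset_from_sun(hour, minute=0):
    hour_hand_angle = ((hour % 12) + minute / 60) * 30.0  # hour hand moves 30 degrees per hour from the 12 mark
    return hour_hand_angle / 2.0                          # degrees from the Sun's direction back toward 12

print(south_offset_from_sun(16))  # 60.0 -- at 4 p.m. south lies 60 degrees from the Sun's azimuth
```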
16
+
17
+ A portable sundial can be used as a more accurate instrument than a watch for determining the cardinal directions. Since the design of a sundial takes account of the latitude of the observer, it can be used at any latitude. See: Sundial#Using a sundial as a compass.
18
+
19
+ Astronomy provides a method for finding direction at night. All the stars appear to lie on the imaginary Celestial sphere. Because of the rotation of the Earth, the Celestial Sphere appears to rotate around an axis passing through the North and South poles of the Earth. This axis intersects the Celestial Sphere at the North and South Celestial poles, which appear to the observer to lie directly above due North and South respectively on the horizon.
20
+
21
+ In either hemisphere, observations of the night sky show that the visible stars appear to be moving in circular paths, caused by the rotation of the Earth. This is best seen in a long exposure photograph, which is obtained by locking the shutter open for most of the intensely dark part of a moonless night. The resulting photograph reveals a multitude of concentric arcs (portions of perfect circles) from which the exact center can be readily derived, and which corresponds to the Celestial pole, which lies directly above the position of the true pole (North or South) on the horizon.
22
+ A published photograph exposed for nearly 8 hours demonstrates this effect.
23
+
24
+ The Northern Celestial pole is currently (but not permanently) within a fraction of 1 degree of the bright star Polaris. The exact position of the pole changes over thousands of years because of the precession of the equinoxes. Polaris is also known as the North Star, and is generically called a pole star or lodestar. Polaris is only visible during fair weather at night to inhabitants of the Northern Hemisphere.
25
+ The asterism "Big Dipper" may be used to find Polaris. The 2 corner stars of the "pan" (those opposite from the handle) point above the top of the "pan" to Polaris.
26
+
27
+ While observers in the Northern hemisphere can use the star Polaris to determine the Northern celestial pole, the corresponding pole star in the constellation Octans is too faint to be of practical use for navigation. For this reason, the preferred alternative is to use the constellation Crux (The Southern Cross). The southern celestial pole lies at the intersection of (a) the line along the long axis of Crux (i.e. through Alpha Crucis and Gamma Crucis) and (b) a line perpendicularly bisecting the line joining the "Pointers" (Alpha Centauri and Beta Centauri).
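The construction can be sketched as simple plane geometry, treating a small patch of sky as flat (a rough approximation); the star coordinates below are made-up placeholders, not astronomical data, and the function name is invented. It intersects the line through Gamma and Alpha Crucis with the perpendicular bisector of the segment joining the two Pointers.

```python
# Illustrative sketch only: flat-sky approximation with invented (x, y) positions.
def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

def south_celestial_pole(gamma_cru, alpha_cru, alpha_cen, beta_cen):
    # Line (a): the long axis of Crux, through Gamma and Alpha Crucis.
    p = gamma_cru
    d1 = (alpha_cru[0] - gamma_cru[0], alpha_cru[1] - gamma_cru[1])
    # Line (b): perpendicular bisector of the segment joining the Pointers.
    mid = ((alpha_cen[0] + beta_cen[0]) / 2, (alpha_cen[1] + beta_cen[1]) / 2)
    along = (beta_cen[0] - alpha_cen[0], beta_cen[1] - alpha_cen[1])
    d2 = (-along[1], along[0])  # the Pointers' direction rotated 90 degrees
    # Intersection of p + t*d1 with mid + s*d2.
    t = cross((mid[0] - p[0], mid[1] - p[1]), d2) / cross(d1, d2)
    return (p[0] + t * d1[0], p[1] + t * d1[1])

# With these toy coordinates the pole comes out at the origin.
print(south_celestial_pole((2.0, 6.0), (1.5, 4.5), (5.0, 1.0), (1.0, 5.0)))  # (0.0, 0.0)
```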
28
+
29
+ At the very end of the 19th century, in response to the development of battleships with large traversable guns that affected magnetic compasses, and possibly to avoid the need to wait for fair weather at night to precisely verify one's alignment with true north, the gyrocompass was developed for shipboard use. Since it finds true, rather than magnetic, north, it is immune to interference by local or shipboard magnetic fields. Its major disadvantage is that it depends on technology that many individuals might find too expensive to justify outside the context of a large commercial or military operation. It also requires a continuous power supply for its motors, and that it can be allowed to sit in one location for a period of time while it properly aligns itself.
30
+
31
+ Near the end of the 20th century, the advent of satellite-based Global Positioning Systems (GPS) provided yet another means for any individual to determine true north accurately. While GPS Receivers (GPSRs) function best with a clear view of the entire sky, they function day or night, and in all but the most severe weather. The government agencies responsible for the satellites continuously monitor and adjust them to maintain their accurate alignment with the Earth. There are consumer versions of the receivers that are attractively priced. Since there are no periodic access fees, or other licensing charges, they have become widely used. GPSR functionality is becoming more commonly added to other consumer devices such as mobile phones. Handheld GPSRs have modest power requirements, can be shut down as needed, and recalibrate within a couple of minutes of being restarted. In contrast with the gyrocompass which is most accurate when stationary, the GPS receiver, if it has only one antenna, must be moving, typically at more than 0.1 mph (0.2 km/h), to correctly display compass directions. On ships and aircraft, GPS receivers are often equipped with two or more antennas, separately attached to the vehicle. The exact latitudes and longitudes of the antennas are determined, which allows the cardinal directions to be calculated relative to the structure of the vehicle. Within these limitations GPSRs are considered both accurate and reliable. The GPSR has thus become the fastest and most convenient way to obtain a verifiable alignment with the cardinal directions.
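As a hedged sketch of how a direction can be derived from two known positions, such as two GPS antennas mounted on the same vehicle, the standard initial great-circle bearing formula can be applied; the coordinates and function name below are invented for illustration and do not come from the article.

```python
# Illustrative sketch only: initial great-circle bearing between two lat/lon points.
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees clockwise from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

# Two hypothetical antenna positions a short distance apart.
print(round(initial_bearing(51.0, 0.0, 51.00005, 0.0)))  # 0  (second antenna due north of the first)
print(round(initial_bearing(51.0, 0.0, 51.0, 0.0001)))   # 90 (second antenna roughly due east)
```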
32
+
33
+ The directional names are routinely associated with the degrees of rotation in the unit circle, a necessary step for navigational calculations (derived from trigonometry) and/or for use with Global Positioning Satellite (GPS) receivers. The four cardinal directions correspond to the following degrees of a compass: north at 0° (equivalently 360°), east at 90°, south at 180°, and west at 270°.
34
+
35
+ The intercardinal (intermediate, or, historically, ordinal[1]) directions are the four intermediate compass directions located halfway between each pair of cardinal directions.
36
+
37
+ These eight directional names have been further compounded, resulting in a total of 32 named points evenly spaced around the compass: north (N), north by east (NbE), north-northeast (NNE), northeast by north (NEbN), northeast (NE), northeast by east (NEbE), east-northeast (ENE), east by north (EbN), east (E), etc.
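A short sketch of this degree convention, mapping a bearing to the nearest of the 16 principal points; the full 32-point rose inserts the "by" points between these. The code is illustrative only and not from the article.

```python
# Illustrative sketch only: nearest 16-wind compass point for a bearing given
# in degrees clockwise from north (N = 0, E = 90, S = 180, W = 270).
POINTS_16 = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
             "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def to_point(bearing_deg):
    index = round((bearing_deg % 360) / 22.5) % 16  # 360 / 16 = 22.5 degrees per point
    return POINTS_16[index]

print(to_point(0))      # N
print(to_point(90))     # E
print(to_point(200))    # SSW
print(to_point(337.5))  # NNW
```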
38
+
39
+ With the cardinal points thus accurately defined, by convention cartographers draw standard maps with north (N) at the top, and east (E) at the right. In turn, maps provide a systematic means to record where places are, and cardinal directions are the foundation of a structure for telling someone how to find those places.
40
+
41
+ North does not have to be at the top. Most maps in medieval Europe, for example, placed east (E) at the top.[2] A few cartographers prefer south-up maps. Many portable GPS-based navigation computers today can be set to display maps either conventionally (N always up, E always right) or with the current instantaneous direction of travel, called the heading, always up (and whatever direction is +90° from that to the right).
42
+
43
+ In mathematics, cardinal directions or cardinal points are the six principal directions or points along the x-, y- and z-axes of three-dimensional space.
44
+
45
+ In the real world there are six cardinal directions not involved with geography that are north, south, east, west, up and down. In this context, up and down relate to elevation, altitude, or possibly depth (if water is involved). The topographic map is a special case of cartography in which the elevation is indicated on the map, typically via contour lines.
46
+
47
+ In astronomy, the cardinal points of an astronomical body as seen in the sky are four points defined by the directions towards which the celestial poles lie relative to the center of the disk of the object in the sky.[3][4]
48
+ A line (here it is a great circle on the celestial sphere) from the center of the disk to the North celestial pole will intersect the edge of the body (the "limb") at the North point. The North point will then be the point on the limb that is closest to the North celestial pole. Similarly, a line from the center to the South celestial pole will define the South point by its intersection with the limb. The points at right angles to the North and South points are the East and West points. Going around the disk clockwise from the North point, one encounters in order the West point, the South point, and then the East point. This is opposite to the order on a terrestrial map because one is looking up instead of down.
49
+
50
+ Similarly, when describing the location of one astronomical object relative to another, "north" means closer to the North celestial pole, "east" means at a higher right ascension, "south" means closer to the South celestial pole, and "west" means at a lower right ascension. If one is looking at two stars that are below the North Star, for example, the one that is "east" will actually be further to the left.
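A small illustrative Python sketch of that convention, describing one object's position on the sky relative to another from right ascension (in hours) and declination (in degrees); the coordinates are hypothetical and, for simplicity, the wrap-around of right ascension at 24 h is ignored:

def relative_direction(ra_a, dec_a, ra_b, dec_b):
    """Describe object B relative to object A: north = higher declination
    (closer to the north celestial pole), east = higher right ascension."""
    ns = "north" if dec_b > dec_a else "south" if dec_b < dec_a else ""
    ew = "east" if ra_b > ra_a else "west" if ra_b < ra_a else ""
    return "-".join(p for p in (ns, ew) if p) or "same position"

# B lies at higher right ascension and lower declination than A:
print(relative_direction(5.0, 20.0, 6.0, 10.0))  # south-east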
51
+
52
+ During the Migration Period, the Germanic languages' names for the cardinal directions entered the Romance languages, where they replaced the Latin names borealis (or septentrionalis) with north, australis (or meridionalis) with south, occidentalis with west and orientalis with east. It is possible that some northern people used the Germanic names for the intermediate directions. Medieval Scandinavian orientation would thus have involved a 45 degree rotation of cardinal directions.[5]
53
+
54
+ In many regions of the world, prevalent winds change direction seasonally, and consequently many cultures associate specific named winds with cardinal and intercardinal directions. For example, classical Greek culture characterized these winds as Anemoi.
55
+
56
+ In pre-modern Europe more generally, between eight and 32 points of the compass – cardinal and intercardinal subdirections – were given names. These often corresponded to the directional winds of the Mediterranean Sea (for example, southeast was linked to the Sirocco, a wind from the Sahara).
57
+
58
+ Particular colors are associated in some traditions with the cardinal points. These are typically "natural colors" of human perception rather than optical primary colors.[vague]
59
+
60
+ Many cultures, especially in Asia, include the center as a fifth cardinal point.
61
+
62
+ Central Asian, Eastern European[citation needed] and North East Asian cultures frequently have traditions associating colors with four or five cardinal points.
63
+
64
+ Systems with five cardinal points include those from pre-modern China, as well as traditional Turkic, Tibetan and Ainu cultures.
65
+
66
+ In Chinese tradition, a five cardinal point system is a foundation for I Ching, the Wu Xing and the five naked-eye planets. In traditional Chinese astrology, the zodiacal belt is divided into the four constellation groups corresponding to the four cardinal directions.
67
+
68
+ Each direction is often identified with a color, and (at least in China) with a mythological creature of that color. Geographical or ethnic terms may contain the name of the color instead of the name of the corresponding direction.[12][13]
69
+
70
+ East: Green (青 "qīng" corresponds to both green and blue); Spring; Wood
71
+
72
+ South: Red; Summer; Fire
73
+
74
+ West: White; Autumn; Metal
75
+
76
+ North: Black; Winter; Water
77
+
78
+ Center: Yellow; Earth
79
+
80
+ Countries where Arabic is used refer to the cardinal directions as Ash Shamal (N), Al Gharb (W), Ash Sharq (E) and Al Janoob (S). Additionally, Al Wusta is used for the center. All five are used for geographic subdivision names (wilayahs, states, regions, governorates, provinces, districts or even towns), and some are the origin of some Southern Iberian place names (such as Algarve, Portugal and Axarquía, Spain).
81
+
82
+ In Mesoamerica and North America, a number of traditional indigenous cosmologies include four cardinal directions and a center. Some may also include "above" and "below" as directions, and therefore focus on a cosmology of seven directions. Each direction may be associated with a color, which can vary widely between nations, but which is usually one of the basic colors found in nature and natural pigments, such as black, red, white, and yellow, with occasional appearances of blue, green, or other hues.[17] In some cases, e.g., many of the Puebloan peoples of the Southwestern United States, the four named directions are not North, South, East and West but are the four intermediate directions associated with the places of sunrise and sunset at the winter and summer solstices.[18][19] There can be great variety in color symbolism, even among cultures that are close neighbors geographically.
83
+
84
+ Ten Hindu deities, known as the "Dikpālas", have been recognized in classical Indian scriptures, symbolizing the four cardinal and four intercardinal directions with the additional directions of up and down. Each of the ten directions has its own name in Sanskrit.[20]
85
+
86
+ Some indigenous Australians have cardinal directions deeply embedded in their culture. For example, the Warlpiri people have a cultural philosophy deeply connected to the four cardinal directions[21] and the Guugu Yimithirr people use cardinal directions rather than relative direction even when indicating the position of an object close to their body. (For more information, see: Cultural use of cardinal rather than relative direction.)
87
+
88
+ The precise direction of the cardinal points appears to be important in Aboriginal stone arrangements.
89
+
90
+ Many aboriginal languages contain words for the usual four cardinal directions, but some contain words for 5 or even 6 cardinal directions.[22]
91
+
92
+ In some languages, such as Estonian, Finnish and Breton, the intercardinal directions have names that are not compounds of the names of the cardinal directions (as, for instance, northeast is compounded from north and east). In Estonian, those are kirre (northeast), kagu (southeast), edel (southwest), and loe (northwest), in Finnish koillinen (northeast), kaakko (southeast), lounas (southwest), and luode (northwest). In Japanese, there is the interesting situation that native Japanese words (yamato kotoba, kun readings of kanji) are used for the cardinal directions (such as minami for 南, south), but borrowed Chinese words (on readings of kanji) are used for intercardinal directions (such as tō-nan for 東南, southeast, lit. "east-south"). In the Malay language, adding laut (sea) to either east (timur) or west (barat) results in northeast or northwest, respectively, whereas adding daya to west (giving barat daya) results in southwest. However, southeast has a special word: tenggara.
93
+
94
+ Sanskrit and other Indian languages that borrow from it use the names of the gods associated with each direction: east (Indra), southeast (Agni), south (Yama/Dharma), southwest (Nirrti), west (Varuna), northwest (Vayu), north (Kubera/Heaven) and northeast (Ishana/Shiva). North is associated with the Himalayas and heaven while the south is associated with the underworld or land of the fathers (Pitr loka). The directions are named by adding "disha" to the names of each god or entity: e.g. Indradisha (direction of Indra) or Pitrdisha (direction of the forefathers i.e. south).
95
+
96
+ The Hopi language and the Tewa dialect spoken by the Arizona Tewa have proper names for the solstitial directions, which are approximately intercardinal, rather than for the cardinal directions.[23][24]
97
+
98
+ Use of the compass directions is common and deeply embedded in European culture, and also in Chinese culture (see south-pointing chariot). Some other cultures make greater use of other referents, such as towards the sea or towards the mountains (Hawaii, Bali), or upstream and downstream (most notably in ancient Egypt, also in the Yurok and Karuk languages). Lengo (Guadalcanal, Solomon Islands) has four non-compass directions: landward, seaward, upcoast, and downcoast.[citation needed]
en/4685.html.txt ADDED
@@ -0,0 +1,47 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ The leek is a vegetable, a cultivar of Allium ampeloprasum, the broadleaf wild leek. The edible part of the plant is a bundle of leaf sheaths that is sometimes erroneously called a stem or stalk. The genus Allium also contains the onion, garlic, shallot, scallion, chive,[1] and Chinese onion.[2]
2
+
3
+ Historically, many scientific names were used for leeks, but they are now all treated as cultivars of A. ampeloprasum.[3] The name 'leek' developed from the Old English word leac,[4] from which the modern English name of garlic also derives. Three closely related vegetables, elephant garlic, kurrat and Persian leek or tareh, are also cultivars of A. ampeloprasum, although different in their uses as food.
4
+
5
+ Rather than forming a tight bulb like the onion, the leek produces a long cylinder of bundled leaf sheaths that are generally blanched by pushing soil around them (trenching). They are often sold as small seedlings in flats that are started off early in greenhouses, to be planted out as weather permits. Once established in the garden, leeks are hardy; many varieties can be left in the ground during the winter to be harvested as needed.
6
+
7
+ Leek cultivars may be treated as a single cultivar group, e.g. as A. ampeloprasum 'Leek Group'.[5] The cultivars can be subdivided in several ways, but the most common types are "summer leeks", intended for harvest in the season when planted, and overwintering leeks, meant to be harvested in the spring of the year following planting. Summer leek types are generally smaller than overwintering types; overwintering types are generally more strongly flavored. Cultivars include 'King Richard' and 'Tadorna Blue'.
8
+
9
+ Leeks are easy to grow from seed and tolerate standing in the field for an extended harvest, which takes place up to 6 months from planting.[6] The soil in which they are grown has to be loose and well drained; leeks can be grown in the same regions where onions can be grown.[7] Leeks usually reach maturity in the autumn months. Leeks can be bunched and harvested early when they are about the size of a finger or pencil, or they can be thinned and allowed to grow to a much larger mature size. Hilling leeks can produce better specimens.
10
+
11
+ Leeks suffer from insect pests including the thrips species Thrips tabaci and the leek moth.[8][9] Leeks are also susceptible to leek rust (Puccinia allii).[7]
12
+
13
+ Leeks have a mild, onion-like taste. In its raw state, the vegetable is crunchy and firm. The edible portions of the leek are the white base of the leaves (above the roots and stem base), the light green parts, and to a lesser extent the dark green parts of the leaves. The dark green portion is usually discarded because it has a tough texture, but it can be sautéed, or more commonly added to stock for flavor.[10] A few leaves are sometimes tied with twine and other herbs to form a bouquet garni.
14
+
15
+ Leeks are typically chopped into slices 5–10 mm thick. The slices have a tendency to fall apart, due to the layered structure of the leek. The different ways of preparing the vegetable are:
16
+
17
+ Leeks are an ingredient of cock-a-leekie soup, leek and potato soup, and vichyssoise, as well as plain leek soup.
18
+
19
+ Because of their symbolism in Wales (see below), they have come to be used extensively in that country’s cuisine. Elsewhere in Britain, leeks have come back into favor only in the last 50 years or so, having been overlooked for several centuries.[13]
20
+
21
+ The Hebrew Bible talks of חציר, identified by commentators as leek, and says it is abundant in Egypt.[14] Dried specimens from archaeological sites in ancient Egypt, as well as wall carvings and drawings, indicate that the leek was a part of the Egyptian diet from at least the second millennium BCE. Texts also show that it was grown in Mesopotamia from the beginning of the second millennium BCE.[15]
22
+
23
+ Leeks were eaten in ancient Rome, and regarded as superior to garlic and onions.[16] The 1st century cookbook Apicius contains four recipes involving leeks.[16] Raw leeks were the favorite vegetable of the Emperor Nero, who consumed them in soup or in oil, believing them beneficial to the quality of his voice.[17] This earned him the nickname "Porrophagus", or "Leek Eater".[16]
24
+
25
+ The leek is one of the national emblems of Wales, and it or the daffodil (in Welsh, the daffodil is known as "Peter's leek", Cenhinen Bedr) is worn on St. David's Day. According to one legend, King Cadwaladr of Gwynedd ordered his soldiers to identify themselves by wearing the vegetable on their helmets in an ancient battle against the Saxons that took place in a leek field.[18] The Elizabethan poet Michael Drayton stated, in contrast, that the tradition was a tribute to Saint David, who ate only leeks when he was fasting.[19] Whatever the case, the leek has been known to be a symbol of Wales for a long time; Shakespeare, for example, refers to the custom of wearing a leek as an “ancient tradition” in Henry V. In the play, Henry tells the Welsh officer Fluellen that he, too, is wearing a leek "for I am Welsh, you know, good countryman." The 1985 and 1990 British one pound coins bear the design of a leek in a coronet, representing Wales.
26
+
27
+ Alongside the other national floral emblems of countries currently and formerly in the Commonwealth or part of the United Kingdom (including the English Tudor Rose, Scottish thistle, Irish shamrock, Canadian maple leaf, and Indian lotus), the Welsh leek appeared on the coronation gown of Elizabeth II. It was designed by Norman Hartnell; when Hartnell asked if he could exchange the leek for the more aesthetically pleasing Welsh daffodil, he was told no.[20]
28
+
29
+ Perhaps the most visible use of the leek, however, is as the cap badge of the Welsh Guards, a battalion within the Household Division of the British Army.[21]
30
+
31
+ In Romania, the leek is also widely considered a symbol of Oltenia, a historical region in the southwestern part of the country.[22]
32
+
33
+ Buddhist monks of the Mahayana school do not consume leeks, as they are considered to "excite the senses".
34
+
35
+ Two blooming flower heads
36
+
37
+ A largely spent flower head showing open flowers, as well as developing seed pods
38
+
39
+ Leek field in Houthulst, Belgium
40
+
41
+ Still life of leeks and thyme
42
+
43
+ Section and root base
44
+
45
+ Leek sold in a supermarket
46
+
47
+ Leek seeds
en/4686.html.txt ADDED
@@ -0,0 +1,71 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ About 30 species; see text
4
+
5
+ The pear (/ˈpɛər/) tree and shrub are species of the genus Pyrus /ˈpaɪrəs/, in the family Rosaceae, bearing the pomaceous fruit of the same name. Several species of pear are valued for their edible fruit and juices, while others are cultivated as trees.
6
+
7
+ The tree is medium-sized and native to coastal as well as mildly temperate regions of Europe, north Africa and Asia. Pear wood is one of the preferred materials in the manufacture of high-quality woodwind instruments and furniture.
8
+
9
+ About 3000 known varieties of pears are grown worldwide. The fruit is consumed fresh, canned, as juice, and dried. In 2017, world production of pears was 24 million tonnes, with China as the main producer.
10
+
11
+ The word pear is probably from Germanic pera as a loanword of Vulgar Latin pira, the plural of pirum, akin to Greek apios (from Mycenaean ápisos),[1] of Semitic origin (pirâ), meaning "fruit". The adjective pyriform or piriform means pear-shaped.
12
+
13
+ The pear is native to coastal and mildly temperate regions of the Old World, from western Europe and north Africa east right across Asia. It is a medium-sized tree, reaching 10–17 metres (33–56 ft) tall, often with a tall, narrow crown; a few species are shrubby.
14
+
15
+ The leaves are alternately arranged, simple, 2–12 centimetres (1–4 1⁄2 in) long, glossy green on some species, densely silvery-hairy in some others; leaf shape varies from broad oval to narrow lanceolate. Most pears are deciduous, but one or two species in southeast Asia are evergreen. Most are cold-hardy, withstanding temperatures as low as −25 to −40 °C (−13 to −40 °F) in winter, except for the evergreen species, which only tolerate temperatures down to about −15 °C (5 °F).
16
+
17
+ The flowers are white, rarely tinted yellow or pink, 2–4 centimetres (1–1 1⁄2 in) diameter, and have five petals.[2] Like that of the related apple, the pear fruit is a pome, in most wild species 1–4 centimetres (1⁄2–1 1⁄2 in) diameter, but in some cultivated forms up to 18 centimetres (7 in) long and 8 centimetres (3 in) broad; the shape varies in most species from oblate or globose, to the classic pyriform 'pear-shape' of the European pear with an elongated basal portion and a bulbous end.
18
+
19
+ The fruit is composed of the receptacle or upper end of the flower-stalk (the so-called calyx tube) greatly dilated. Enclosed within its cellular flesh is the true fruit: five 'cartilaginous' carpels, known colloquially as the "core". From the upper rim of the receptacle are given off the five sepals,[vague] the five petals, and the very numerous stamens.
20
+
21
+ Pears and apples cannot always be distinguished by the form of the fruit;[3] some pears look very much like some apples, e.g. the nashi pear. One major difference is that the flesh of pear fruit contains stone cells.
22
+
23
+ Pear cultivation in cool temperate climates extends to the remotest antiquity, and there is evidence of its use as a food since prehistoric times. Many traces of it have been found in prehistoric pile dwellings around Lake Zurich. Pears were cultivated in China as early as 2000 BC.[4] The word “pear”, or its equivalent, occurs in all the Celtic languages, while in Slavic and other dialects, differing appellations, still referring to the same thing, are found—a diversity and multiplicity of nomenclature which led Alphonse Pyramus de Candolle to infer a very ancient cultivation of the tree from the shores of the Caspian to those of the Atlantic.
24
+
25
+ The pear was also cultivated by the Romans, who ate the fruits raw or cooked, just like apples.[5] Pliny's Natural History recommended stewing them with honey and noted three dozen varieties. The Roman cookbook De re coquinaria has a recipe for a spiced, stewed-pear patina, or soufflé.[6]
26
+
27
+ A certain race of pears, with white down on the undersurface of their leaves, is supposed to have originated from P. nivalis, and their fruit is chiefly used in France in the manufacture of perry (see also cider). Other small-fruited pears, distinguished by their early ripening and apple-like fruit, may be referred to as P. cordata, a species found wild in western France and southwestern England. Pears have been cultivated in China for approximately 3000 years.
28
+
29
+ The genus is thought to have originated in present-day Western China[7] in the foothills of the Tian Shan, a mountain range of Central Asia, and to have spread to the north and south along mountain chains, evolving into a diverse group of over 20 widely recognized primary species.[citation needed] The enormous number of varieties of the cultivated European pear (Pyrus communis subsp. communis) is without doubt derived from one or two wild subspecies (P. communis subsp. pyraster and P. communis subsp. caucasica), widely distributed throughout Europe, and sometimes forming part of the natural vegetation of the forests. Court accounts of Henry III of England record pears shipped from La Rochelle-Normande and presented to the King by the Sheriffs of the City of London. The French names of pears grown in English medieval gardens suggest that their reputation, at the least, was French; a favored variety in the accounts was named for Saint Rule or Regul', Bishop of Senlis.[8]
30
+
31
+ Asian species with medium to large edible fruit include P. pyrifolia, P. ussuriensis, P. × bretschneideri, P. × sinkiangensis, and P. pashia. Other small-fruited species are frequently used as rootstocks for the cultivated forms.
32
+
33
+ According to Pear Bureau Northwest, about 3000 known varieties of pears are grown worldwide.[9]
34
+ The pear is normally propagated by grafting a selected variety onto a rootstock, which may be of a pear variety or quince. Quince rootstocks produce smaller trees, which is often desirable in commercial orchards or domestic gardens. For new varieties the flowers can be cross-bred to preserve or combine desirable traits. The fruit of the pear is produced on spurs, which appear on shoots more than one year old.[10]
35
+
36
+ Three species account for the vast majority of edible fruit production, the European pear Pyrus communis subsp. communis cultivated mainly in Europe and North America, the Chinese white pear (bai li) Pyrus ×bretschneideri, and the Nashi pear Pyrus pyrifolia (also known as Asian pear or apple pear), both grown mainly in eastern Asia. There are thousands of cultivars of these three species. A species grown in western China, P. sinkiangensis, and P. pashia, grown in southern China and south Asia, are also produced to a lesser degree.
37
+
38
+ Other species are used as rootstocks for European and Asian pears and as ornamental trees. Pear wood is close-grained and at least in the past was used as a specialized timber for fine furniture and making the blocks for woodcuts. The Manchurian or Ussurian Pear, Pyrus ussuriensis (which produces unpalatable fruit) has been crossed with Pyrus communis to breed hardier pear cultivars. The Bradford pear (Pyrus calleryana 'Bradford') in particular has become widespread in North America, and is used only as an ornamental tree, as well as a blight-resistant rootstock for Pyrus communis fruit orchards. The Willow-leaved pear (Pyrus salicifolia) is grown for its attractive, slender, densely silvery-hairy leaves.
39
+
40
+ The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:[11]
41
+
42
+ The purely decorative cultivar P. salicifolia ‘Pendula’, with pendulous branches and silvery leaves, has also won the award.[19]
43
+
44
+ Summer and autumn cultivars of Pyrus communis, being climacteric fruits, are gathered before they are fully ripe, while they are still green, but snap off when lifted. In the case of the 'Passe Crassane', long the favored winter pear in France, the crop is traditionally gathered at three different times: the first a fortnight or more before it is ripe, the second a week or ten days after that, and the third when fully ripe. The first gathering will come into eating last, and thus the season of the fruit may be considerably prolonged.
45
+
46
+ In 2017, world production of pears was 24.2 million tonnes, led by China with 68% of the total (table).
47
+
48
+ Pears may be stored at room temperature until ripe.[21] Pears are ripe when the flesh around the stem gives to gentle pressure.[21] Ripe pears are optimally stored refrigerated, uncovered in a single layer, where they have a shelf life of 2 to 3 days.[21]
49
+
50
+ Pears are consumed fresh, canned, as juice, and dried. The juice can also be used in jellies and jams, usually in combination with other fruits, including berries. Fermented pear juice is called perry or pear cider and is made in a way that is similar to how cider is made from apples.
51
+
52
+ Pears ripen at room temperature. Ripening is accelerated by the gas ethylene. If pears are placed next to bananas in a fruit bowl, the ethylene emitted by the banana causes the pears to ripen.[22] Refrigeration will slow further ripening. Pear Bureau Northwest offers tips on ripening and judging ripeness: Although the skin on Bartlett pears changes from green to yellow as they ripen, most varieties show little color change as they ripen. Because pears ripen from the inside out, the best way to judge ripeness is to "check the neck": apply gentle thumb pressure to the neck or stem end of the pear. If it yields to gentle pressure, then the pear is ripe, sweet, and juicy. If it is firm, leave the pear at room temperature and check daily for ripeness.[23]
53
+
54
+ The culinary or cooking pear is green but dry and hard, and only edible after several hours of cooking. Two Dutch cultivars are "Gieser Wildeman [nl]" (a sweet variety) and "Saint Remy (pear) [nl]" (slightly sour).[24]
55
+
56
+ Pear wood is one of the preferred materials in the manufacture of high-quality woodwind instruments and furniture, and was used for making the carved blocks for woodcuts. It is also used for wood carving, and as a firewood to produce aromatic smoke for smoking meat or tobacco. Pear wood is valued for kitchen spoons, scoops and stirrers, as it does not contaminate food with color, flavor or smell, and resists warping and splintering despite repeated soaking and drying cycles. Lincoln[25] describes it as "a fairly tough, very stable wood... (used for) carving... brushbacks, umbrella handles, measuring instruments such as set squares and T-squares... recorders... violin and guitar fingerboards and piano keys... decorative veneering." Pearwood is the favored wood for architect's rulers because it does not warp. It is similar to the wood of its relative, the apple tree (Malus domestica) and used for many of the same purposes.[25]
57
+
58
+ Raw pear is 84% water, 15% carbohydrates and contains negligible protein and fat (table). In a 100 g reference amount, raw pear supplies 57 calories, a moderate source of dietary fiber, and no other essential nutrients in significant amounts (table).
59
+
60
+ Pears grow in the sublime orchard of Alcinous, in Odyssey vii: "Therein grow trees, tall and luxuriant, pears and pomegranates and apple-trees with their bright fruit, and sweet figs, and luxuriant olives. Of these the fruit perishes not nor fails in winter or in summer, but lasts throughout the year."
61
+
62
+ 'A Partridge in a Pear Tree' is the first gift in "The Twelve Days of Christmas" cumulative song. This verse is repeated twelve times in the song.
63
+
64
+ The pear tree was an object of particular veneration (as was the Walnut) in the Tree worship of the Nakh peoples of the North Caucasus – see Vainakh mythology and see also Ingushetia – the best-known of the Vainakh peoples today being the Chechens of Chechnya in the Russian Federation.
65
+ Pear and walnut trees were held to be the sacred abodes of beneficent spirits in pre-Islamic Chechen religion and, for this reason, it was forbidden to fell them.[26]
66
+
67
+ Pears simmered in red wine
68
+
69
+ Pear in a bottle of pear Eau de vie
70
+
71
+ Pear Blossom in Eastern Siberia
en/4687.html.txt ADDED
@@ -0,0 +1,71 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ About 30 species; see text
4
+
5
+ The pear (/ˈpɛər/) tree and shrub are species of the genus Pyrus /ˈpaɪrəs/, in the family Rosaceae, bearing the pomaceous fruit of the same name. Several species of pear are valued for their edible fruit and juices, while others are cultivated as trees.
6
+
7
+ The tree is medium-sized and native to coastal as well as mildly temperate regions of Europe, north Africa and Asia. Pear wood is one of the preferred materials in the manufacture of high-quality woodwind instruments and furniture.
8
+
9
+ About 3000 known varieties of pears are grown worldwide. The fruit is consumed fresh, canned, as juice, and dried. In 2017, world production of pears was 24 million tonnes, with China as the main producer.
10
+
11
+ The word pear is probably from Germanic pera as a loanword of Vulgar Latin pira, the plural of pirum, akin to Greek apios (from Mycenaean ápisos),[1] of Semitic origin (pirâ), meaning "fruit". The adjective pyriform or piriform means pear-shaped.
12
+
13
+ The pear is native to coastal and mildly temperate regions of the Old World, from western Europe and north Africa east right across Asia. It is a medium-sized tree, reaching 10–17 metres (33–56 ft) tall, often with a tall, narrow crown; a few species are shrubby.
14
+
15
+ The leaves are alternately arranged, simple, 2–12 centimetres (1–4 1⁄2 in) long, glossy green on some species, densely silvery-hairy in some others; leaf shape varies from broad oval to narrow lanceolate. Most pears are deciduous, but one or two species in southeast Asia are evergreen. Most are cold-hardy, withstanding temperatures as low as −25 to −40 °C (−13 to −40 °F) in winter, except for the evergreen species, which only tolerate temperatures down to about −15 °C (5 °F).
16
+
17
+ The flowers are white, rarely tinted yellow or pink, 2–4 centimetres (1–1 1⁄2 in) diameter, and have five petals.[2] Like that of the related apple, the pear fruit is a pome, in most wild species 1–4 centimetres (1⁄2–1 1⁄2 in) diameter, but in some cultivated forms up to 18 centimetres (7 in) long and 8 centimetres (3 in) broad; the shape varies in most species from oblate or globose, to the classic pyriform 'pear-shape' of the European pear with an elongated basal portion and a bulbous end.
18
+
19
+ The fruit is composed of the receptacle or upper end of the flower-stalk (the so-called calyx tube) greatly dilated. Enclosed within its cellular flesh is the true fruit: five 'cartilaginous' carpels, known colloquially as the "core". From the upper rim of the receptacle are given off the five sepals,[vague] the five petals, and the very numerous stamens.
20
+
21
+ Pears and apples cannot always be distinguished by the form of the fruit;[3] some pears look very much like some apples, e.g. the nashi pear. One major difference is that the flesh of pear fruit contains stone cells.
22
+
23
+ Pear cultivation in cool temperate climates extends to the remotest antiquity, and there is evidence of its use as a food since prehistoric times. Many traces of it have been found in prehistoric pile dwellings around Lake Zurich. Pears were cultivated in China as early as 2000 BC.[4] The word “pear”, or its equivalent, occurs in all the Celtic languages, while in Slavic and other dialects, differing appellations, still referring to the same thing, are found—a diversity and multiplicity of nomenclature which led Alphonse Pyramus de Candolle to infer a very ancient cultivation of the tree from the shores of the Caspian to those of the Atlantic.
24
+
25
+ The pear was also cultivated by the Romans, who ate the fruits raw or cooked, just like apples.[5] Pliny's Natural History recommended stewing them with honey and noted three dozen varieties. The Roman cookbook De re coquinaria has a recipe for a spiced, stewed-pear patina, or soufflé.[6]
26
+
27
+ A certain race of pears, with white down on the undersurface of their leaves, is supposed to have originated from P. nivalis, and their fruit is chiefly used in France in the manufacture of perry (see also cider). Other small-fruited pears, distinguished by their early ripening and apple-like fruit, may be referred to as P. cordata, a species found wild in western France and southwestern England. Pears have been cultivated in China for approximately 3000 years.
28
+
29
+ The genus is thought to have originated in present-day Western China[7] in the foothills of the Tian Shan, a mountain range of Central Asia, and to have spread to the north and south along mountain chains, evolving into a diverse group of over 20 widely recognized primary species.[citation needed] The enormous number of varieties of the cultivated European pear (Pyrus communis subsp. communis) is without doubt derived from one or two wild subspecies (P. communis subsp. pyraster and P. communis subsp. caucasica), widely distributed throughout Europe, and sometimes forming part of the natural vegetation of the forests. Court accounts of Henry III of England record pears shipped from La Rochelle-Normande and presented to the King by the Sheriffs of the City of London. The French names of pears grown in English medieval gardens suggest that their reputation, at the least, was French; a favored variety in the accounts was named for Saint Rule or Regul', Bishop of Senlis.[8]
30
+
31
+ Asian species with medium to large edible fruit include P. pyrifolia, P. ussuriensis, P. × bretschneideri, P. × sinkiangensis, and P. pashia. Other small-fruited species are frequently used as rootstocks for the cultivated forms.
32
+
33
+ According to Pear Bureau Northwest, about 3000 known varieties of pears are grown worldwide.[9]
34
+ The pear is normally propagated by grafting a selected variety onto a rootstock, which may be of a pear variety or quince. Quince rootstocks produce smaller trees, which is often desirable in commercial orchards or domestic gardens. For new varieties the flowers can be cross-bred to preserve or combine desirable traits. The fruit of the pear is produced on spurs, which appear on shoots more than one year old.[10]
35
+
36
+ Three species account for the vast majority of edible fruit production, the European pear Pyrus communis subsp. communis cultivated mainly in Europe and North America, the Chinese white pear (bai li) Pyrus ×bretschneideri, and the Nashi pear Pyrus pyrifolia (also known as Asian pear or apple pear), both grown mainly in eastern Asia. There are thousands of cultivars of these three species. A species grown in western China, P. sinkiangensis, and P. pashia, grown in southern China and south Asia, are also produced to a lesser degree.
37
+
38
+ Other species are used as rootstocks for European and Asian pears and as ornamental trees. Pear wood is close-grained and at least in the past was used as a specialized timber for fine furniture and making the blocks for woodcuts. The Manchurian or Ussurian Pear, Pyrus ussuriensis (which produces unpalatable fruit) has been crossed with Pyrus communis to breed hardier pear cultivars. The Bradford pear (Pyrus calleryana 'Bradford') in particular has become widespread in North America, and is used only as an ornamental tree, as well as a blight-resistant rootstock for Pyrus communis fruit orchards. The Willow-leaved pear (Pyrus salicifolia) is grown for its attractive, slender, densely silvery-hairy leaves.
39
+
40
+ The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:[11]
41
+
42
+ The purely decorative cultivar P. salicifolia ‘Pendula’, with pendulous branches and silvery leaves, has also won the award.[19]
43
+
44
+ Summer and autumn cultivars of Pyrus communis, being climacteric fruits, are gathered before they are fully ripe, while they are still green, but snap off when lifted. In the case of the 'Passe Crassane', long the favored winter pear in France, the crop is traditionally gathered at three different times: the first a fortnight or more before it is ripe, the second a week or ten days after that, and the third when fully ripe. The first gathering will come into eating last, and thus the season of the fruit may be considerably prolonged.
45
+
46
+ In 2017, world production of pears was 24.2 million tonnes, led by China with 68% of the total (table).
47
+
48
+ Pears may be stored at room temperature until ripe.[21] Pears are ripe when the flesh around the stem gives to gentle pressure.[21] Ripe pears are optimally stored refrigerated, uncovered in a single layer, where they have a shelf life of 2 to 3 days.[21]
49
+
50
+ Pears are consumed fresh, canned, as juice, and dried. The juice can also be used in jellies and jams, usually in combination with other fruits, including berries. Fermented pear juice is called perry or pear cider and is made in a way that is similar to how cider is made from apples.
51
+
52
+ Pears ripen at room temperature. Ripening is accelerated by the gas ethylene. If pears are placed next to bananas in a fruit bowl, the ethylene emitted by the banana causes the pears to ripen.[22] Refrigeration will slow further ripening. Pear Bureau Northwest offers tips on ripening and judging ripeness: Although the skin on Bartlett pears changes from green to yellow as they ripen, most varieties show little color change as they ripen. Because pears ripen from the inside out, the best way to judge ripeness is to "check the neck": apply gentle thumb pressure to the neck or stem end of the pear. If it yields to gentle pressure, then the pear is ripe, sweet, and juicy. If it is firm, leave the pear at room temperature and check daily for ripeness.[23]
53
+
54
+ The culinary or cooking pear is green but dry and hard, and only edible after several hours of cooking. Two Dutch cultivars are "Gieser Wildeman [nl]" (a sweet variety) and "Saint Remy (pear) [nl]" (slightly sour).[24]
55
+
56
+ Pear wood is one of the preferred materials in the manufacture of high-quality woodwind instruments and furniture, and was used for making the carved blocks for woodcuts. It is also used for wood carving, and as a firewood to produce aromatic smoke for smoking meat or tobacco. Pear wood is valued for kitchen spoons, scoops and stirrers, as it does not contaminate food with color, flavor or smell, and resists warping and splintering despite repeated soaking and drying cycles. Lincoln[25] describes it as "a fairly tough, very stable wood... (used for) carving... brushbacks, umbrella handles, measuring instruments such as set squares and T-squares... recorders... violin and guitar fingerboards and piano keys... decorative veneering." Pearwood is the favored wood for architect's rulers because it does not warp. It is similar to the wood of its relative, the apple tree (Malus domestica) and used for many of the same purposes.[25]
57
+
58
+ Raw pear is 84% water, 15% carbohydrates and contains negligible protein and fat (table). In a 100 g reference amount, raw pear supplies 57 calories, a moderate source of dietary fiber, and no other essential nutrients in significant amounts (table).
59
+
60
+ Pears grow in the sublime orchard of Alcinous, in Odyssey vii: "Therein grow trees, tall and luxuriant, pears and pomegranates and apple-trees with their bright fruit, and sweet figs, and luxuriant olives. Of these the fruit perishes not nor fails in winter or in summer, but lasts throughout the year."
61
+
62
+ 'A Partridge in a Pear Tree' is the first gift in "The Twelve Days of Christmas" cumulative song. This verse is repeated twelve times in the song.
63
+
64
+ The pear tree was an object of particular veneration (as was the Walnut) in the Tree worship of the Nakh peoples of the North Caucasus – see Vainakh mythology and see also Ingushetia – the best-known of the Vainakh peoples today being the Chechens of Chechnya in the Russian Federation.
65
+ Pear and walnut trees were held to be the sacred abodes of beneficent spirits in pre-Islamic Chechen religion and, for this reason, it was forbidden to fell them.[26]
66
+
67
+ Pears simmered in red wine
68
+
69
+ Pear in a bottle of pear Eau de vie
70
+
71
+ Pear Blossom in Eastern Siberia
en/4688.html.txt ADDED
@@ -0,0 +1,45 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Clownfish or anemonefish are fishes from the subfamily Amphiprioninae in the family Pomacentridae. Thirty species are recognized: one in the genus Premnas, while the remaining species are in the genus Amphiprion. In the wild, they all form symbiotic mutualisms with sea anemones. Depending on species, anemonefish are overall yellow, orange, or a reddish or blackish color, and many show white bars or patches. The largest can reach a length of 17 cm (6.7 in), while the smallest barely achieve 7–8 cm (2.8–3.1 in).
4
+
5
+ Anemonefish are endemic to the warmer waters of the Indian Ocean (including the Red Sea) and the Pacific Ocean, including the Great Barrier Reef, Southeast Asia, Japan, and the Indo-Malaysian region. While most species have restricted distributions, others are widespread. Anemonefish typically live at the bottom of shallow seas in sheltered reefs or in shallow lagoons. No anemonefish are found in the Atlantic.[1]
6
+
7
+ Anemonefish are omnivorous and can feed on undigested food from their host anemones, and the fecal matter from the anemonefish provides nutrients to the sea anemone. Anemonefish primarily feed on small zooplankton from the water column, such as copepods and tunicate larvae, with a small portion of their diet coming from algae, with the exception of Amphiprion perideraion, which primarily feeds on algae.[2][3] They may also consume the tentacles of their host anemone.[4]
8
+
9
+ Anemonefish and sea anemones have a symbiotic, mutualistic relationship, each providing many benefits to the other. The individual species are generally highly host specific, and especially the genera Heteractis and Stichodactyla, and the species Entacmaea quadricolor are frequent anemonefish partners. The sea anemone protects the anemonefish from predators, as well as providing food through the scraps left from the anemone's meals and occasional dead anemone tentacles, and functions as a safe nest site. In return, the anemonefish defends the anemone from its predators and parasites.[5][6] The anemone also picks up nutrients from the anemonefish's excrement.[7] The nitrogen excreted from anemonefish increases the number of algae incorporated into the tissue of their hosts, which aids the anemone in tissue growth and regeneration.[3] The activity of the anemonefish results in greater water circulation around the sea anemone,[8] and it has been suggested that their bright coloring might lure small fish to the anemone, which then catches them.[9] Studies on anemonefish have found that they alter the flow of water around sea anemone tentacles by certain behaviors and movements such as "wedging" and "switching". Aeration of the host anemone tentacles allows for benefits to the metabolism of both partners, mainly by increasing anemone body size and both anemonefish and anemone respiration.[10]
10
+
11
+ Several theories are given about how they can survive the sea anemone poison:
12
+
13
+ Anemonefish are the best known example of fish that are able to live among the venomous sea anemone tentacles, but several others occur, including juvenile threespot dascyllus, certain cardinalfish (such as Banggai cardinalfish), incognito (or anemone) goby, and juvenile painted greenling.[12][13][14]
14
+
15
+ In a group of anemonefish, a strict dominance hierarchy exists. The largest and most aggressive female is found at the top. Only two anemonefish, a male and a female, in a group reproduce – through external fertilization. Anemonefish are sequential hermaphrodites, meaning they develop into males first, and when they mature, they become females. If the female anemonefish is removed from the group, such as by death, one of the largest and most dominant males becomes a female. The remaining males move up a rank in the hierarchy.
16
+
17
+ Anemonefish lay eggs on any flat surface close to their host anemones. In the wild, anemonefish spawn around the time of the full moon. Depending on the species, they can lay hundreds or thousands of eggs. The male parent guards the eggs until they hatch about 6–10 days later, typically two hours after dusk.[15]
18
+
19
+ Anemonefish colonies usually consist of the reproductive male and female and a few male juveniles, which help tend the colony.[16] Although multiple males cohabit an environment with a single female, polygamy does not occur and only the adult pair exhibits reproductive behavior. However, if the female dies, the social hierarchy shifts with the breeding male exhibiting protandrous sex reversal to become the breeding female. The largest juvenile then becomes the new breeding male after a period of rapid growth.[17] The existence of protandry in anemonefish may rest on the case that nonbreeders modulate their phenotype in a way that causes breeders to tolerate them. This strategy prevents conflict by reducing competition between males for one female. For example, by purposefully modifying their growth rate to remain small and submissive, the juveniles in a colony present no threat to the fitness of the adult male, thereby protecting themselves from being evicted by the dominant fish.[18]
20
+
21
+ The reproductive cycle of anemonefish is often correlated with the lunar cycle. Rates of spawning for anemonefish peak around the first and third quarters of the moon. The timing of this spawn means that the eggs hatch around the full moon or new moon periods. One explanation for this lunar clock is that spring tides produce the highest tides during full or new moons. Nocturnal hatching during high tide may reduce predation by allowing for a greater capacity for escape. Namely, the stronger currents and greater water volume during high tide protect the hatchlings by effectively sweeping them to safety. Before spawning, anemonefish exhibit increased rates of anemone and substrate biting, which help prepare and clean the nest for the spawn.[17]
22
+
23
+ In terms of parental care, male anemonefish are often the caretakers of eggs. Before making the clutch, the parents often clear an oval-shaped clutch varying in diameter for the spawn. Fecundity, or reproductive rate, of the females usually ranges from 600 to 1500 eggs depending on the size of the female. In contrast to most animal species, the female only occasionally takes responsibility for the eggs, with males expending most of the time and effort. Male anemonefish care for their eggs by fanning and guarding them for 6 to 10 days until they hatch. In general, eggs develop more rapidly in a clutch when males fan properly, and fanning represents a crucial mechanism of successfully developing eggs. This suggests that males can control the success of hatching an egg clutch by investing different amounts of time and energy towards the eggs. For example, a male could choose to fan less in times of scarcity or fan more in times of abundance. Furthermore, males display increased alertness when guarding more valuable broods, or eggs in which paternity was guaranteed. Females, though, display generally less preference for parental behavior than males. All these suggest that males have increased parental investment towards the eggs compared to females.[19]
24
+
25
+ Historically, anemonefish have been identified by morphological features and color pattern in the field, while in a laboratory, other features such as scalation of the head, tooth shape, and body proportions are used.[2] These features have been used to group species into six complexes: clownfish, tomato, skunk, clarkii, saddleback, and maroon.[20] As can be seen from the gallery, each of the fish in these complexes has a similar appearance. Genetic analysis has shown that these complexes are not monophyletic groups, particularly the 11 species in the A. clarkii group, where only A. clarkii and A. tricinctus are in the same clade, with seven species, A. allardi, A. bicinctus, A. chagosensis, A. chrysogaster, A. fuscocaudatus, A. latifasciatus, and A. omanensis, being in an Indian clade, A. chrysopterus having monospecific lineage, and A. akindynos in the Australian clade with A. mccullochi.[21] Other significant differences are that A. latezonatus also has monospecific lineage, and A. nigripes is in the Indian clade rather than with A. akallopisos, the skunk anemonefish.[22] A. latezonatus is more closely related to A. percula and Premnas biaculeatus than to the saddleback fish with which it was previously grouped.[23][22]
26
+
27
+ Obligate mutualism was thought to be the key innovation that allowed anemonefish to radiate rapidly, with rapid and convergent morphological changes correlated with the ecological niches offered by the host anemones.[23] The complexity of mitochondrial DNA structure shown by genetic analysis of the Australian clade suggested evolutionary connectivity among samples of A. akindynos and A. mccullochi that the authors theorize was the result of historical hybridization and introgression in the evolutionary past. The two evolutionary groups had individuals of both species detected, thus the species lacked reciprocal monophyly. No shared haplotypes were found between species.[24]
28
+
29
+ A. percula (clown anemonefish) in a 'normal' orange and a melanistic blackish variant
30
+
31
+ A. clarkii (Clark's anemonefish)
32
+
33
+ A. polymnus (saddleback clownfish) off Sulawesi, Indonesia
34
+
35
+ A. ephippium (red saddleback anemonefish)
36
+
37
+ A. perideraion (pink skunk anemonefish)
38
+
39
+ Male P. biaculeatus (maroon anemonefish) in Papua New Guinea
40
+
41
+ Anemonefish make up 43% of the global marine ornamental trade, and 25% of the global trade comes from fish bred in captivity, while the majority is captured from the wild,[27][28] accounting for decreased densities in exploited areas.[29] Public aquaria and captive-breeding programs are essential to sustain their trade as marine ornamentals, and captive breeding has recently become economically feasible.[30][31] The anemonefish is one of a handful of marine ornamentals whose complete lifecycle has been closed in captivity. Members of some anemonefish species, such as the maroon clownfish, become aggressive in captivity; others, like the false percula clownfish, can be kept successfully with other individuals of the same species.[32]
42
+
43
+ When a sea anemone is not available in an aquarium, the anemonefish may settle in some varieties of soft corals, or large polyp stony corals.[33] Once an anemone or coral has been adopted, the anemonefish will defend it. Anemonefish, however, are not obligately tied to hosts, and can survive alone in captivity.[34][35]
44
+
45
+ In Disney/Pixar's 2003 film Finding Nemo and its 2016 sequel Finding Dory, the main characters Marlin and Nemo are clownfish, probably the species A. ocellaris.[36] The popularity of anemonefish for aquaria increased following the release of the first film, with an associated increase in the numbers of those captured in the wild.[37]
en/4689.html.txt ADDED
@@ -0,0 +1,45 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Clownfish or anemonefish are fishes from the subfamily Amphiprioninae in the family Pomacentridae. Thirty species are recognized: one in the genus Premnas, while the remaining species are in the genus Amphiprion. In the wild, they all form symbiotic mutualisms with sea anemones. Depending on species, anemonefish are overall yellow, orange, or a reddish or blackish color, and many show white bars or patches. The largest can reach a length of 17 cm (6.7 in), while the smallest barely achieve 7–8 cm (2.8–3.1 in).
4
+
5
+ Anemonefish are endemic to the warmer waters of the Indian Ocean (including the Red Sea) and the Pacific Ocean, including the Great Barrier Reef, Southeast Asia, Japan, and the Indo-Malaysian region. While most species have restricted distributions, others are widespread. Anemonefish typically live at the bottom of shallow seas in sheltered reefs or in shallow lagoons. No anemonefish are found in the Atlantic.[1]
6
+
7
+ Anemonefish are omnivorous and can feed on undigested food from their host anemones, and the fecal matter from the anemonefish provides nutrients to the sea anemone. Anemonefish primarily feed on small zooplankton from the water column, such as copepods and tunicate larvae, with a small portion of their diet coming from algae, with the exception of Amphiprion perideraion, which primarily feeds on algae.[2][3] They may also consume the tentacles of their host anemone.[4]
8
+
9
+ Anemonefish and sea anemones have a symbiotic, mutualistic relationship, each providing many benefits to the other. The individual species are generally highly host specific, and especially the genera Heteractis and Stichodactyla, and the species Entacmaea quadricolor are frequent anemonefish partners. The sea anemone protects the anemonefish from predators, as well as providing food through the scraps left from the anemone's meals and occasional dead anemone tentacles, and functions as a safe nest site. In return, the anemonefish defends the anemone from its predators and parasites.[5][6] The anemone also picks up nutrients from the anemonefish's excrement.[7] The nitrogen excreted from anemonefish increases the number of algae incorporated into the tissue of their hosts, which aids the anemone in tissue growth and regeneration.[3] The activity of the anemonefish results in greater water circulation around the sea anemone,[8] and it has been suggested that their bright coloring might lure small fish to the anemone, which then catches them.[9] Studies on anemonefish have found that they alter the flow of water around sea anemone tentacles by certain behaviors and movements such as "wedging" and "switching". Aeration of the host anemone tentacles allows for benefits to the metabolism of both partners, mainly by increasing anemone body size and both anemonefish and anemone respiration.[10]
10
+
11
+ Several theories are given about how they can survive the sea anemone poison:
12
+
13
+ Anemonefish are the best known example of fish that are able to live among the venomous sea anemone tentacles, but several others occur, including juvenile threespot dascyllus, certain cardinalfish (such as Banggai cardinalfish), incognito (or anemone) goby, and juvenile painted greenling.[12][13][14]
14
+
15
+ In a group of anemonefish, a strict dominance hierarchy exists. The largest and most aggressive female is found at the top. Only two anemonefish, a male and a female, in a group reproduce – through external fertilization. Anemonefish are sequential hermaphrodites, meaning they develop into males first, and when they mature, they become females. If the female anemonefish is removed from the group, such as by death, one of the largest and most dominant males becomes a female. The remaining males move up a rank in the hierarchy.
16
+
17
+ Anemonefish lay eggs on any flat surface close to their host anemones. In the wild, anemonefish spawn around the time of the full moon. Depending on the species, they can lay hundreds or thousands of eggs. The male parent guards the eggs until they hatch about 6–10 days later, typically two hours after dusk.[15]
18
+
19
+ Anemonefish colonies usually consist of the reproductive male and female and a few male juveniles, which help tend the colony.[16] Although multiple males cohabit an environment with a single female, polygamy does not occur and only the adult pair exhibits reproductive behavior. However, if the female dies, the social hierarchy shifts with the breeding male exhibiting protandrous sex reversal to become the breeding female. The largest juvenile then becomes the new breeding male after a period of rapid growth.[17] The existence of protandry in anemonefish may rest on the case that nonbreeders modulate their phenotype in a way that causes breeders to tolerate them. This strategy prevents conflict by reducing competition between males for one female. For example, by purposefully modifying their growth rate to remain small and submissive, the juveniles in a colony present no threat to the fitness of the adult male, thereby protecting themselves from being evicted by the dominant fish.[18]
20
+
21
+ The reproductive cycle of anemonefish is often correlated with the lunar cycle. Rates of spawning for anemonefish peak around the first and third quarters of the moon. The timing of this spawn means that the eggs hatch around the full moon or new moon periods. One explanation for this lunar clock is that spring tides produce the highest tides during full or new moons. Nocturnal hatching during high tide may reduce predation by allowing for a greater capacity for escape. Namely, the stronger currents and greater water volume during high tide protect the hatchlings by effectively sweeping them to safety. Before spawning, anemonefish exhibit increased rates of anemone and substrate biting, which help prepare and clean the nest for the spawn.[17]
22
+
23
+ In terms of parental care, male anemonefish are often the caretakers of eggs. Before making the clutch, the parents often clear an oval-shaped clutch varying in diameter for the spawn. Fecundity, or reproductive rate, of the females usually ranges from 600 to 1500 eggs depending on the size of the female. In contrast to most animal species, the female only occasionally takes responsibility for the eggs, with males expending most of the time and effort. Male anemonefish care for their eggs by fanning and guarding them for 6 to 10 days until they hatch. In general, eggs develop more rapidly in a clutch when males fan properly, and fanning represents a crucial mechanism of successfully developing eggs. This suggests that males can control the success of hatching an egg clutch by investing different amounts of time and energy towards the eggs. For example, a male could choose to fan less in times of scarcity or fan more in times of abundance. Furthermore, males display increased alertness when guarding more valuable broods, or eggs in which paternity was guaranteed. Females, though, display generally less preference for parental behavior than males. All these suggest that males have increased parental investment towards the eggs compared to females.[19]
+
+ Historically, anemonefish have been identified by morphological features and color pattern in the field, while in a laboratory, other features such as the scalation of the head, tooth shape, and body proportions are used.[2] These features have been used to group the species into six complexes: clownfish, tomato, skunk, clarkii, saddleback, and maroon.[20] As can be seen from the gallery, each of the fish in these complexes has a similar appearance. Genetic analysis has shown that these complexes are not monophyletic groups, particularly the 11 species in the A. clarkii group: only A. clarkii and A. tricinctus are in the same clade; seven species, A. allardi, A. bicinctus, A. chagosensis, A. chrysogaster, A. fuscocaudatus, A. latifasciatus, and A. omanensis, are in an Indian clade; A. chrysopterus forms a monospecific lineage; and A. akindynos is in an Australian clade with A. mccullochi.[21] Other significant differences are that A. latezonatus also forms a monospecific lineage, and A. nigripes is in the Indian clade rather than with A. akallopisos, the skunk anemonefish.[22] A. latezonatus is more closely related to A. percula and Premnas biaculeatus than to the saddleback fish with which it was previously grouped.[23][22]
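+ For readability, the clade assignments named in this paragraph can be restated as a simple mapping. This is only a summary of the groupings mentioned above (with A. nigripes listed under the Indian clade, as noted), not a complete phylogeny, and the clade labels are informal.
+
+ clades = {
+     "clarkii clade": ["A. clarkii", "A. tricinctus"],
+     "Indian clade": ["A. allardi", "A. bicinctus", "A. chagosensis",
+                      "A. chrysogaster", "A. fuscocaudatus",
+                      "A. latifasciatus", "A. omanensis", "A. nigripes"],
+     "Australian clade": ["A. akindynos", "A. mccullochi"],
+     "monospecific lineages": ["A. chrysopterus", "A. latezonatus"],
+ }
+ print(clades["Australian clade"])   # ['A. akindynos', 'A. mccullochi']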
+
+ Obligate mutualism was thought to be the key innovation that allowed anemonefish to radiate rapidly, with fast, convergent morphological changes correlated with the ecological niches offered by the host anemones.[23] Genetic analysis of the Australian clade revealed a complex mitochondrial DNA structure suggesting evolutionary connectivity among samples of A. akindynos and A. mccullochi, which the authors theorize was the result of historical hybridization and introgression. Individuals of both species were detected in each of the two evolutionary groups, so the species lacked reciprocal monophyly, although no haplotypes were shared between the species.[24]
+
+ A. percula (clown anemonefish) in a 'normal' orange and a melanistic blackish variant
+
+ A. clarkii (Clark's anemonefish)
+
+ A. polymnus (saddleback clownfish) off Sulawesi, Indonesia
+
+ A. ephippium (red saddleback anemonefish)
+
+ A. perideraion (pink skunk anemonefish)
+
+ Male P. biaculeatus (maroon anemonefish) in Papua New Guinea
+
+ Anemonefish make up 43% of the global marine ornamental trade. About 25% of that trade comes from fish bred in captivity, while the majority is captured from the wild,[27][28] accounting for decreased densities in exploited areas.[29] Public aquaria and captive-breeding programs are essential to sustaining their trade as marine ornamentals, and captive breeding has recently become economically feasible.[30][31] The anemonefish is one of a handful of marine ornamentals whose complete lifecycle has been closed in captivity. Members of some anemonefish species, such as the maroon clownfish, become aggressive in captivity; others, like the false percula clownfish, can be kept successfully with other individuals of the same species.[32]
+
+ When a sea anemone is not available in an aquarium, the anemonefish may settle in some varieties of soft corals, or large polyp stony corals.[33] Once an anemone or coral has been adopted, the anemonefish will defend it. Anemonefish, however, are not obligately tied to hosts, and can survive alone in captivity.[34][35]
+
+ In Disney/Pixar's 2003 film Finding Nemo and its 2016 sequel Finding Dory, the main characters Marlin and Nemo are clownfish, probably of the species A. ocellaris.[36] The popularity of anemonefish for aquaria increased following the release of the first film, and was associated with an increase in the numbers captured in the wild.[37]