title | content
---|---
tropics | The tropics are the regions of Earth surrounding the Equator. They are defined in latitude by the Tropic of Cancer in the Northern Hemisphere at 23°26′10.3″ (or 23.43619°) N and the Tropic of Capricorn in
the Southern Hemisphere at 23°26′10.3″ (or 23.43619°) S. The tropics are also referred to as the tropical zone and the torrid zone (see geographical zone).
In terms of climate, the tropics receive more direct sunlight than the rest of Earth and are generally hotter and wetter, since they are less affected by seasonal changes in solar angle. The word "tropical" sometimes refers to this sort of climate rather than to the geographical zone itself. The tropical zone includes deserts and snow-capped mountains, which are not tropical in the climatic sense. The tropics are distinguished from the other climatic and biotic regions of Earth, the middle latitudes and the polar regions, which lie on either side of the equatorial zone.
The tropics constitute 39.8% of Earth's surface area and contain 36% of Earth's landmass. As of 2014, the region was home to 40% of the world's population, a figure then projected to reach 50% by 2050. Because of global warming, the tropical climate zone is expanding into the subtropics, bringing more extreme weather events such as heatwaves and more intense storms. These changes may make parts of the tropics uninhabitable.
Etymology
The word "tropic" comes via Latin from Ancient Greek τροπή (tropē), meaning "to turn" or "change direction".
Astronomical definition
The tropics are defined as the region between the Tropic of Cancer in the Northern Hemisphere at 23°26′10.3″ (or 23.43619°) N and the Tropic of Capricorn in the Southern Hemisphere at 23°26′10.3″ (or 23.43619°) S; these latitudes correspond to the axial tilt of the Earth.
The Tropic of Cancer is the northernmost latitude from which the Sun can ever be seen directly overhead, and the Tropic of Capricorn is the southernmost. The tropical zone therefore includes every place on Earth that is the subsolar point at least once during the solar year. The two boundary latitudes lie at equal distances from the Equator and correspond to the angle of Earth's axial tilt. This angle is not perfectly fixed, mainly because of the Moon's influence, but the limits of the tropics are a geographic convention, and their variance from the true latitudes is very small.
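The boundary latitude and the 39.8% surface-area figure quoted earlier can be checked with a little spherical geometry: the fraction of a sphere's surface lying between latitudes ±φ is simply sin φ. A minimal sketch (function names are illustrative, not from any source):

```python
import math

def dms_to_decimal(degrees: int, minutes: int, seconds: float) -> float:
    """Convert a degrees/minutes/seconds latitude to decimal degrees."""
    return degrees + minutes / 60 + seconds / 3600

def tropical_surface_fraction(tilt_deg: float) -> float:
    """Fraction of a sphere's surface between latitudes -tilt and +tilt.

    The band between latitudes -phi and +phi has area
    2*pi*R^2*(sin(phi) - sin(-phi)) = 4*pi*R^2*sin(phi),
    so its share of the total area 4*pi*R^2 is sin(phi).
    """
    return math.sin(math.radians(tilt_deg))

tilt = dms_to_decimal(23, 26, 10.3)
print(round(tilt, 5))                             # 23.43619
print(round(tropical_surface_fraction(tilt), 3))  # 0.398, i.e. ~39.8%
```

This reproduces both numbers in the article: 23°26′10.3″ is 23.43619° in decimal form, and sin(23.43619°) ≈ 0.398 matches the 39.8% of Earth's surface attributed to the tropics.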
Seasons and climate
Many tropical areas have both a dry and a wet season. The wet season, rainy season or green season is the time of year, lasting one or more months, when most of a region's average annual rainfall falls. Areas with wet seasons are spread across portions of the tropics and subtropics, and some occur even in temperate regions. Under the Köppen climate classification, for tropical climates a wet-season month is defined as one in which average precipitation is 60 mm (2.4 in) or more. Some areas with pronounced rainy seasons see a break in rainfall during mid-season, when the intertropical convergence zone or monsoon trough moves poleward of their location during the middle of the warm season; typical vegetation in these areas ranges from moist seasonal tropical forests to savannahs.
When the wet season occurs during the warm season, or summer, precipitation falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality and freshwater quality improve and vegetation grows significantly, the extra water supporting flora and leading to crop yields late in the season. Floods and rains cause rivers to overflow their banks, and some animals retreat to higher ground. Soil nutrients are washed away and erosion increases. The incidence of malaria increases in areas where the rainy season coincides with high temperatures. Animals have adaptation and survival strategies for the wetter regime. The preceding dry season leads to food shortages into the wet season, as the crops have yet to mature.
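The 60 mm Köppen threshold above can be applied mechanically to a monthly rainfall record to pick out the wet season. A quick sketch (the sample station data are invented for illustration):

```python
WET_MONTH_MM = 60  # Köppen wet-month threshold for tropical climates

def wet_months(monthly_precip_mm):
    """Return indices (0 = January) of months at or above the threshold."""
    return [i for i, p in enumerate(monthly_precip_mm) if p >= WET_MONTH_MM]

# Hypothetical Southern Hemisphere station with a mid-year dry season
precip = [210, 180, 150, 80, 40, 20, 15, 30, 70, 120, 160, 200]
print(wet_months(precip))  # [0, 1, 2, 3, 8, 9, 10, 11]
```

For this invented record, the wet season runs roughly September through April, with a pronounced May–August dry break of the kind the paragraph describes.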
However, regions within the tropics may well not have a tropical climate. Under the Köppen climate classification, much of the area within the geographical tropics is classed not as "tropical" but as "dry" (arid or semi-arid), including the Sahara Desert, the Atacama Desert and Australian Outback. Also, there are alpine tundra and snow-capped peaks, including Mauna Kea, Mount Kilimanjaro, Puncak Jaya and the Andes as far south as the northernmost parts of Chile and Perú.
Ecosystems
Tropical plants and animals are those species native to the tropics. Tropical ecosystems may consist of tropical rainforests, seasonal tropical forests, dry (often deciduous) forests, spiny forests, deserts, savannahs, grasslands and other habitat types. These regions often show high biodiversity and species endemism, particularly in rainforests and seasonal forests. Some examples of ecosystems with important biodiversity and high endemism are El Yunque National Forest in Puerto Rico, the Costa Rican and Nicaraguan rainforests, the Amazon Rainforest territories of several South American countries, the Madagascar dry deciduous forests, the Waterberg Biosphere of South Africa, and the eastern Madagascar rainforests. Often the soils of tropical forests are low in nutrient content, making them quite vulnerable to slash-and-burn deforestation techniques, which are sometimes an element of shifting cultivation agricultural systems.
In biogeography, the tropics are divided into Paleotropics (Africa, Asia and Australia) and Neotropics (Caribbean, Central America, and South America). Together, they are sometimes referred to as the Pantropic. The system of biogeographic realms differs somewhat; the Neotropical realm includes both the Neotropics and temperate South America, and the Paleotropics correspond to the Afrotropical, Indomalayan, Oceanian, and tropical Australasian realms.
Flora
Flora is the plant life found in a specific region at a specific time; the word is Latin for "flower". Some well-known plants that are exclusively found in, originate from, or are often associated with the tropics include:
Stone fruits such as mangoes, avocados, sapotes, etc.
Citrus fruits such as oranges, lemons, mandarins, etc.
Banana trees
Bird of paradise flower
Palm trees
Coconut trees
Ferns
Orchids
Papaya trees
Dragon fruit
Bamboo
Jackfruit
Giant Water Lily
Rubber Tree
Cacao
Coffee
Tropicality
Tropicality refers to the image of the tropics that people from outside the tropics have of the region, ranging from critical to verging on fetishism. The idea of tropicality gained renewed interest in geographical discourse when French geographer Pierre Gourou published Les Pays Tropicaux (The Tropical World in English), in the late 1940s.
Tropicality encompassed two major images. One is that the tropics represent a 'Garden of Eden', a heaven on Earth, a land of rich biodiversity, a tropical paradise. The alternative is that the tropics consist of wild, unconquerable nature. The latter view featured in older Western literature more often than the former, though evidence suggests that in popular literature it has since been supplanted by more well-rounded and sophisticated interpretations. Western scholars tried to theorise why tropical areas were relatively more inhospitable to human civilisations than the colder regions of the Northern Hemisphere. A popular explanation focused on differences in climate: tropical jungles and rainforests have much hotter and more humid weather than the colder, drier climates of the Northern Hemisphere, giving rise to a more diverse biosphere. This theme led some scholars to suggest that humid, hot climates correlate with human populations lacking control over nature, e.g. 'the wild Amazonian rainforests'.
See also
Hardiness zone
Subtropics
Tropical ecology
Tropical marine climate
Tropical year
Torrid zone
References
External links |
freon | Freon ( FREE-on) is a registered trademark of the Chemours Company and generic descriptor for a number of halocarbon products. They are stable, nonflammable, low toxicity gases or liquids which have generally been used as refrigerants and as aerosol propellants. These include chlorofluorocarbons and hydrofluorocarbons, both of which cause ozone depletion (although the latter much less so) and contribute to global warming. 'Freon' is the brand name for the refrigerants R-12, R-13B1, R-22, R-410A, R-502, and R-503 manufactured by The Chemours Company, and so is not used to label all refrigerants of this type. They emit a strong smell similar to acetone, a common nail polish remover component.
History
The first CFCs were synthesized by Frédéric Swarts in the 1890s. In the late 1920s, a research team was formed by Charles Franklin Kettering at General Motors to find a replacement for the dangerous refrigerants then in use, such as ammonia. The team, headed by Thomas Midgley Jr., improved the synthesis of CFCs in 1928 and demonstrated their usefulness for such a purpose, along with their stability and nontoxicity. Kettering patented a refrigerating apparatus to use the gas; the patent was assigned to Frigidaire, a wholly owned subsidiary of General Motors. In 1930, General Motors and DuPont formed Kinetic Chemicals to produce Freon. Their product was dichlorodifluoromethane, now designated "Freon-12", "R-12", or "CFC-12". The number after the R is a refrigerant class number developed by DuPont to systematically identify single halogenated hydrocarbons, as well as other refrigerants besides halocarbons.
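For single saturated halocarbons, the numbering scheme mentioned above encodes composition directly: in R-abc, the digits are (carbons − 1, omitted when zero), (hydrogens + 1), and (fluorines), with any remaining carbon bonds filled by chlorine. A sketch of that decoding rule, covering only this simple single-compound case (blends such as R-410A and unsaturated refrigerants follow other conventions):

```python
def decode_r_number(number: int) -> dict:
    """Decode an R-number for a single saturated halocarbon.

    Zero-padded digits are (carbons - 1)(hydrogens + 1)(fluorines);
    bonds left over on the carbon skeleton are assumed to be chlorine.
    """
    digits = f"{number:03d}"
    carbons = int(digits[0]) + 1
    hydrogens = int(digits[1]) - 1
    fluorines = int(digits[2])
    # A saturated chain of n carbons has 2n + 2 substituent positions.
    chlorines = 2 * carbons + 2 - hydrogens - fluorines
    return {"C": carbons, "H": hydrogens, "Cl": chlorines, "F": fluorines}

print(decode_r_number(12))  # {'C': 1, 'H': 0, 'Cl': 2, 'F': 2} -> CCl2F2
print(decode_r_number(22))  # {'C': 1, 'H': 1, 'Cl': 1, 'F': 2} -> CHClF2
```

R-12 decodes to CCl2F2, dichlorodifluoromethane, matching the product described in the text.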
Most uses of CFCs are now banned or severely restricted by the Montreal Protocol of August 1987, as they have been shown to be responsible for ozone depletion. Freon brands containing hydrofluorocarbons (HFCs) instead have replaced many uses, but they, too, are under strict control under the Kyoto Protocol, as they are deemed "super-greenhouse effect" gases.
See also
Chlorodifluoromethane (R-22 or HCFC-22), a type of Freon.
Dichlorodifluoromethane (R-12 or CFC-12), the most commonly used Freon brand refrigerant prior to its ban in many countries in 1996 and total ban in 2010.
1,1,1,2-Tetrafluoroethane (R-134a or HFC-134a), one of the main replacements for the formerly widespread R-12.
Opteon, halogenated olefins now replacing Freons in many applications.
References |
chrysaora hysoscella | Chrysaora hysoscella, the compass jellyfish, is a common species of jellyfish that inhabits coastal waters in temperate regions of the northeastern Atlantic Ocean, including the North Sea and Mediterranean Sea. In the past it was also recorded in the southeastern Atlantic, including South Africa, but this was caused by confusion with close relatives: C. africana, C. fulgida and an undescribed species tentatively referred to as "C. agulhensis". It is a true jellyfish displaying radial symmetry, with distinct brown markings shaped like elongated V's on its bell. C. hysoscella adults are highly susceptible to the parasite Hyperia medusarum, but this has had no significant effect on the population. The organism has a benthic polyp stage before developing into a pelagic adult medusa. Compass jellyfish consume a variety of marine invertebrates and plankton and are preyed on by very few species. C. hysoscella contributes to the global issue of jellyfish overpopulation, which concerns humans for various reasons, including recreational interference, economic turmoil for fishing communities, and depleted fish resources.
Body plan
As an adult, the bell of the compass jellyfish typically has a diameter of 15–25 cm (5.9–9.8 in). It usually has 16 brown elongated V-shaped markings on the translucent yellow-white bell. The markings surround a central brown spot and resemble the face of a compass, hence the common name compass jellyfish. It is usually colored yellowish white, with some brown. Its 24 tentacles are arranged in eight groups of three. Each tentacle has stinging cells for capturing prey and defense from predators. A sense organ is located between each group of tentacles, which can perceive changes in light and helps the jellyfish determine and maintain its position in the water column. It has 4 oral arms that can be distinguished from the tentacles because the arms are noticeably longer and have a folded, frilly appearance. These arms are used to facilitate transfer of captured prey from the tentacles to the mouth which is between the oral arms at the center of the underside of the bell.
Habitat
The compass jellyfish is found in coastal waters of the northeast Atlantic, including the Celtic, Irish, North and Mediterranean Seas. They inhabit these waters mostly at the top of the water column, and although they inhabit shallow water, they move up and down in the water column often ranging from surface waters to just above the seabed. They are rarely found deeper than 30 m from the surface.
Feeding and predation
Compass jellyfish are carnivores, consuming other marine invertebrates and plankton. They feed on a variety of benthic and pelagic organisms including but not limited to: dinoflagellates, copepods, crustacean eggs, larval fish, and chaetognaths. They stun and capture their prey with stinging cells on their tentacles. The oral arms facilitate movement of captured prey into the oral opening. Compass jellyfish have very few predators. They are known to be consumed by the leatherback sea turtle and ocean sunfish.
Life cycle
Like other scyphozoans, Chrysaora hysoscella undergoes metamorphosis as it develops, passing through a polyp and then a medusa form. Females release planula larvae, which swim to find a suitable place to settle. The planulae attach to a benthic substrate and develop into a sessile polyp, which releases immature medusae through an asexual process called strobilation. Chrysaora hysoscella functions as a male upon maturity and then develops female gametes, meaning the organism is protandrously hermaphroditic.
Reproduction
Chrysaora hysoscella uses both sexual and asexual reproduction over its life cycle. Mature individuals reproduce sexually by broadcast spawning: males release sperm from their mouths into the water column, and females take up the sperm, with fertilization occurring internally; a female may carry sperm from multiple male partners. The larvae released from the female settle as benthic polyps that reproduce asexually, releasing multiple ephyrae through strobilation. Ephyrae are the earliest form of the medusa stage. Research indicates that Chrysaora hysoscella polyps are capable of releasing ephyrae repeatedly over time and are therefore not limited to a single reproductive event.
Parasite Hyperia medusarum
Adult Chrysaora hysoscella are often parasitised by Hyperia medusarum. C. hysoscella found inshore and closer to the surface are more likely to carry the parasite. The parasite can be found inside the body cavity, in the umbrella and gonads, and tends to move from umbrella to gonads when there is space. The gonads are more enriched in carbon and protein content than any other part of the body, making this region the ideal location to settle and feed. The parasites have also been found on the oral arms of the jellyfish, where they can eat prey caught by the medusa.
Effects of global warming
Scyphozoa populations are increasing with the warming climate and warmer ocean temperatures. Studies suggest that warmer winter temperatures allow for a longer strobilation period and subsequently higher ephyra production per polyp, higher percentages of polyp strobilation, and higher polyp survival rate. Polyps will be more successful in warmer temperatures but not in extreme temperatures. C. hysoscella are predicted to migrate further northwards to maintain ideal conditions.
Impact
Thriving jellyfish populations have been found to take over as top predators in areas where fin fish have been over-exploited. Increased abundance of jellyfish negatively impacts fish populations in the same region because jellyfish feed on fish eggs and larvae. Jellyfish and larval fish can also share common dietary preferences. Competition for food resources can result in depleted fish populations. Overpopulation of jellyfish is a concern to humans for many reasons. Jellyfish stings are painful and sometimes deadly to humans. Fishing nets can be overwhelmed with jellyfish bycatch or torn by jellyfish caught in the nets. Jellyfish can clog water inlets to power plants, causing serious problems for power production. Jellyfish can invade aquaculture cages, ruining the production of the organism being farmed.
References
External links
Photos of Chrysaora hysoscella on Sealife Collection |
the uninhabitable earth | "The Uninhabitable Earth" is an article by American journalist David Wallace-Wells published in the July 10, 2017 issue of New York magazine. The long-form article depicts a worst-case scenario of what might happen in the near-future due to global warming. The story was the most read article in the history of the magazine.
The article became the inspiration for The Uninhabitable Earth: Life After Warming, a book-length treatment of the ideas explored in the original essay.
General
On November 20, 2017, NYU's Arthur L. Carter Journalism Institute hosted a 2-hour-long conversation between Wallace-Wells and Michael E. Mann to discuss the controversy around the article. Accompanying the article are a series of extended interviews with scientists, including paleontologist Peter Ward, climatologist Michael E. Mann, oceanographer Wallace Smith Broecker, climatologist James Hansen and scientist Michael Oppenheimer. In addition, an annotated edition of the article was published online with inline footnotes.
In February 2019, Wallace-Wells published The Uninhabitable Earth: Life After Warming. The book was excerpted in The Guardian.
Reception
The story received immediate criticism from the climate change community on two fronts: that the piece is too pessimistic, and that it contains factual errors. The NGO Climate Feedback compiled reviews by dozens of professional scientists, concluding that, "The reviewers found that some statements in this complex article do misrepresent research on the topic, and some others lack the necessary context to be clearly understood by the reader. Many other explanations in the article are correct, but readers are likely left with an overall conclusion that is exaggerated compared to our best scientific understanding." Jason Samenow called it a "climate doom piece" because Wallace-Wells presents some worst-case scenarios without acknowledging that they are "remote" possibilities, and without exploring the more likely outcomes, which are still very serious. With reference to factual errors, Michael Mann and several others specifically criticized the description of Arctic methane emissions. In his conversation with Mann at NYU, Wallace-Wells said he would not include the comments on methane release if he were to write the piece again. Some journalists defended the science as mostly correct: "I haven't seen any good evidence for serious factual errors," said Kevin Drum. Emily Atkin said "The complaints about the science in Wallace-Wells's article are mostly quibbles". Robinson Meyer of The Atlantic called it an "unusually specific and severe depiction of what global warming will do to the planet." Susan Matthews, writing in Slate, said "The instantly viral piece might be the Silent Spring of our time". The major criticism was that Wallace-Wells was trying to scare people. Journalists and commentators then explored this theme, some saying they thought fear was necessary given the reality of the problem, while others thought scaring people was counter-productive.
For example, Eric Holthaus said that "scaring the shit out of [people] is a really bad strategy" for getting them to want to address climate change. In a later interview, Wallace-Wells said that "it didn't seem plausible to me that there was more risk at scaring people too much than there was at not scaring them enough ... my feeling was, and is, if there's a one percent chance that we've set off a chain reaction that could end the human race, then that should be something that the public knows and thinks about."
References
External links
David Wallace-Wells (July 10, 2017). "The Uninhabitable Earth". New York. Retrieved July 11, 2017.
David Wallace-Wells (July 14, 2017). "The Uninhabitable Earth, Annotated Edition". New York. Retrieved July 14, 2017. |
barents sea | The Barents Sea ( BARR-ənts, also US: BAR-ənts; Norwegian: Barentshavet, Urban East Norwegian: [ˈbɑ̀ːrəntsˌhɑːvə]; Russian: Баренцево море, romanized: Barentsevo More) is a marginal sea of the Arctic Ocean, located off the northern coasts of Norway and Russia and divided between Norwegian and Russian territorial waters. It was known earlier among Russians as the Northern Sea, Pomorsky Sea or Murman Sea ("Norse Sea"); the current name of the sea is after the historical Dutch navigator Willem Barentsz.
The Barents Sea is a rather shallow shelf sea with an average depth of 230 metres (750 ft), and it is an important site for both fishing and hydrocarbon exploration. It is bordered by the Kola Peninsula to the south, the shelf edge towards the Norwegian Sea to the west, the archipelagos of Svalbard to the northwest, Franz Josef Land to the northeast and Novaya Zemlya to the east. The islands of Novaya Zemlya, an extension of the northern end of the Ural Mountains, separate the Barents Sea from the Kara Sea.
Although part of the Arctic Ocean, the Barents Sea has been characterised as "turning into the Atlantic" or in the process of being "Atlantified" because of its status as "the Arctic warming hot spot." Hydrologic changes due to global warming have led to a reduction in sea ice and in the stratification of the water column, which could produce major changes in weather in Eurasia. One prediction is that, as the Barents Sea's permanent ice-free area grows, additional evaporation will increase winter snowfalls in much of continental Europe.
Geography
The southern half of the Barents Sea, including the ports of Murmansk (Russia) and Vardø (Norway), remains ice-free year-round due to the warm North Atlantic drift. In September, the entire Barents Sea is more or less completely ice-free. Until 1944, Finland's territory also reached the Barents Sea. The Liinakhamari harbour in the Pechengsky District was Finland's only ice-free winter harbour until it was ceded to the Soviet Union.
There are three main types of water masses in the Barents Sea: Warm, salty Atlantic water (temperature >3 °C, salinity >35) from the North Atlantic drift; cold Arctic water (temperature <0 °C, salinity <35) from the north; and warm, but not very salty, coastal water (temperature >3 °C, salinity <34.7). Between the Atlantic and Polar waters, a front called the Polar Front is formed. In the western parts of the sea (close to Bear Island), this front is determined by the bottom topography and is therefore relatively sharp and stable from year to year, while in the east (towards Novaya Zemlya), it can be quite diffuse and its position can vary markedly between years.
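The three water-mass definitions above amount to simple threshold tests on temperature and salinity. A rough sketch (real oceanographic classification uses T-S diagrams and mixing lines, so this is only illustrative; the fallback label is mine):

```python
def classify_water_mass(temp_c: float, salinity: float) -> str:
    """Classify Barents Sea water by the thresholds quoted in the text."""
    if temp_c > 3 and salinity > 35:
        return "Atlantic"       # warm, salty water from the North Atlantic drift
    if temp_c < 0 and salinity < 35:
        return "Arctic"         # cold water from the north
    if temp_c > 3 and salinity < 34.7:
        return "coastal"        # warm but relatively fresh
    return "mixed/frontal"      # e.g. water near the Polar Front fits no class

print(classify_water_mass(5.0, 35.2))   # Atlantic
print(classify_water_mass(-1.0, 34.5))  # Arctic
print(classify_water_mass(6.0, 34.0))   # coastal
```

Samples that satisfy none of the three definitions, such as water at intermediate temperatures near the Polar Front, fall through to the catch-all label.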
The lands of Novaya Zemlya attained most of their early Holocene coastal deglaciation approximately 10,000 years before the present.
Extent
The International Hydrographic Organization defines the limits of the "Barentsz Sea" [sic] as follows:
On the west: The northeastern limit of the Norwegian Sea [A line joining the southernmost point of West Spitzbergen [sic] to North Cape of Bear Island, through this island to Cape Bull and thence on to North Cape in Norway (25°45'E)].
On the northwest: The eastern shore of West Spitzbergen [sic], Hinlopen Strait up to 80° latitude north; south and east coasts of North-East Land [the island of Nordaustlandet] to Cape Leigh Smith (80°05′N 28°00′E).
On the north: Cape Leigh Smith across the Islands Bolshoy Ostrov (Great Island) [Storøya], Gilles [Kvitøya] and Victoria; Cape Mary Harmsworth (southwestern extremity of Alexandra Land) along the northern coasts of Franz-Josef Land as far as Cape Kohlsaat (81°14′N 65°10′E).
On the east: Cape Kohlsaat to Cape Zhelaniya (Desire); west and southwest coast of Novaya Zemlya to Cape Kussov Noss and thence to western entrance Cape, Dolgaya Bay (70°15′N 58°25′E) on Vaigach Island. Through Vaigach Island to Cape Greben; thence to Cape Belyi Noss on the mainland.
On the south: The northern limit of the White Sea [A line joining Svyatoi Nos (Murmansk Coast, 39°47'E) and Cape Kanin].
Other islands in the Barents Sea include Chaichy and Timanets.
Geology
The Barents Sea was originally formed from two major continental collisions: the Caledonian orogeny, in which the Baltica and Laurentia collided to form Laurasia, and a subsequent collision between Laurasia and Western Siberia. Most of its geological history is dominated by extensional tectonics, caused by the collapse of the Caledonian and Uralian orogenic belts and the break-up of Pangaea. These events created the major rift basins that dominate the Barents Shelf, along with various platforms and structural highs. The later geological history of the Barents Sea is dominated by Late Cenozoic uplift, particularly that caused by Quaternary glaciation, which has resulted in erosion and deposition of significant sediment.
Ecology
Due to the North Atlantic drift, the Barents Sea has a high biological production compared to other oceans of similar latitude. The spring bloom of phytoplankton can start quite early near the ice edge because the fresh water from the melting ice makes up a stable water layer on top of the seawater. The phytoplankton bloom feeds zooplankton such as Calanus finmarchicus, Calanus glacialis, Calanus hyperboreus, Oithona spp., and krill. The zooplankton feeders include young cod, capelin, polar cod, whales, and little auk. The capelin is a key food for top predators such as the north-east Arctic cod, harp seals, and seabirds such as the common guillemot and Brunnich's guillemot. The fisheries of the Barents Sea, in particular the cod fisheries, are of great importance for both Norway and Russia.
SIZEX-89 was an international winter experiment in 1989 whose main objectives were sensor signature studies of different ice types, in order to develop SAR algorithms for ice variables such as ice type, ice concentration and ice kinematics. Although previous research suggested that predation by whales may be the cause of depleting fish stocks, more recent research suggests that marine mammal consumption has only a trivial influence on fisheries. A model assessing the effects of fisheries and climate was far more accurate at describing trends in fish abundance. There is a genetically distinct polar bear population associated with the Barents Sea.
Pollution
The Barents Sea is "among the most polluted places on Earth" due to accumulated marine garbage, decades of Soviet nuclear tests, radioactive waste dumping and industrial pollution. The elevated pollution has caused elevated rates of disease among locals. With rising military buildup and increased use of shipping lanes heading east through the Arctic, there are concerns that a further increase in pollution is likely, not least from the increased risk of future oil spills from ships not properly equipped for the environment.
Connections to global weather
History
Name
The Barents Sea was formerly known to Russians as Murmanskoye More, or the "Sea of Murmans" (i.e., their term for Norwegians). It appears with this name in sixteenth-century maps, including Gerard Mercator's Map of the Arctic published in his 1595 atlas. Its eastern corner, in the region of the Pechora River's estuary, has been known as Pechorskoye Morye, that is, Pechora Sea. It was also known as Pomorsky Morye, after the first inhabitants of its shores, the Pomors. This sea was given its present name by Europeans in honour of Willem Barentsz, a Dutch navigator and explorer. Barentsz was the leader of early expeditions to the far north, at the end of the sixteenth century.
The Barents Sea has been called by sailors "The Devil's Dance Floor" due to its unpredictability and difficulty level. Ocean rowers call it "Devil's Jaw". In 2017, after the first recorded complete man-powered crossing of the Barents Sea from Tromsø to Longyearbyen in a rowboat by the Polar Row expedition, captain Fiann Paul was asked by Norwegian TV2 how a rower would name the Barents Sea. Fiann responded that he would name it "Devil's Jaw", adding that the winds you constantly battle are like breath from the devil's nostrils while he holds you in his jaws.
Modern era
Seabed mapping was completed in 1933; the first full map was produced by Russian marine geologist Maria Klenova.
The Barents Sea was the site of a notable World War II engagement which later became known as the Battle of the Barents Sea. Under the command of Oskar Kummetz, German warships sank minelayer HMS Bramble and destroyer HMS Achates but lost destroyer Z16 Friedrich Eckoldt. Also, the German cruiser Admiral Hipper was severely damaged by British gunfire. The Germans later retreated and the British convoy arrived safely at Murmansk shortly afterwards.
During the Cold War, the Soviet Red Banner Northern Fleet used the southern reaches of the sea as a ballistic missile submarine bastion, a strategy that Russia continued. Nuclear contamination from dumped Russian naval reactors is an environmental concern in the Barents Sea.
Economy
Political status
For decades there was a boundary dispute between Norway and Russia regarding the position of the boundary between their respective claims to the Barents Sea. The Norwegians favoured a median line, based on the Geneva Convention of 1958, whereas the Russians favoured a meridian-based sector line, based on a Soviet decision of 1926. A neutral "grey" zone between the competing claims had an area of 175,000 square kilometres (68,000 sq mi), which is approximately 12% of the total area of the Barents Sea. The two countries started negotiations on the location of the boundary in 1974 and agreed to a moratorium on hydrocarbon exploration in 1976.
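The two figures above are mutually consistent: if 175,000 km² is about 12% of the sea, the implied total is roughly 1.4–1.5 million km², in line with the Barents Sea's commonly cited extent. As a one-line sanity check:

```python
grey_zone_km2 = 175_000
fraction = 0.12  # grey zone as share of the total sea area

implied_total_km2 = grey_zone_km2 / fraction
print(round(implied_total_km2))  # ~1,458,333 km^2
```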
Twenty years after the fall of the Soviet Union, in 2010 Norway and Russia signed an agreement that placed the boundary equidistant from their competing claims. This was ratified and went into force on 7 July 2011, opening the grey zone for hydrocarbon exploration.
Oil and gas
Encouraged by the success of oil exploration and production in the North Sea in the 1960s, Norway began hydrocarbon exploration in the Barents Sea in 1969. It acquired seismic reflection surveys through the following years, which were analysed to locate the main sedimentary basins. Norsk Hydro drilled the first well in 1980, a dry hole, and the first discoveries were made the following year: the Alke and Askeladden gas fields. Several more discoveries followed on the Norwegian side of the Barents Sea throughout the 1980s, including the important Snøhvit field. However, interest in the area began to wane due to a succession of dry holes, wells containing only gas (which was cheap at the time), and the prohibitive costs of developing wells in such a remote area. Interest was reignited in the late 2000s after the Snøhvit field was finally brought into production and two new large discoveries were made. The Russians began exploration in their territory around the same time, encouraged by success in the Timan-Pechora Basin. They drilled their first wells in the early 1980s, and some very large gas fields were discovered during that decade. The Shtokman field, discovered in 1988, is classed as a giant gas field, currently the fifth-largest gas field in the world. Similar practical difficulties in the Barents Sea resulted in a decline in Russian exploration, aggravated by the nation's political instability in the 1990s.
Fishing
The Barents Sea contains the world's largest remaining cod population, as well as important stocks of haddock and capelin. Fishing is managed jointly by Russia and Norway in the form of the Joint Norwegian–Russian Fisheries Commission, established in 1976, in an attempt to keep track of how many fish are leaving the ecosystem due to fishing. The Joint Norwegian-Russian Fisheries Commission sets Total Allowable Catches (TACs) for multiple species throughout their migratory tracks. Through the Commission, Norway and Russia also exchange fishing quotas and catch statistics to ensure the TACs are not being violated.
However, there are problems with reporting under this system, and researchers believe that they do not have accurate data on the effects of fishing on the Barents Sea ecosystem. Cod is one of the major catches. A large portion of catches is not reported when the fishing boats land, to recoup profits lost to high taxes and fees. Since many fishermen do not strictly follow the TACs and rules set forth by the Commission, the amount of fish extracted annually from the Barents Sea is underestimated.
Barents Sea biodiversity and marine bioprospecting
The Barents Sea, where temperate waters from the Gulf Stream and cold waters from the Arctic meet, is home to an enormous diversity of organisms, which are well adapted to the extreme conditions of their marine habitats. This makes these Arctic species very attractive for marine bioprospecting, which may be defined as the search for bioactive molecules and compounds from marine sources that have new, unique properties and the potential for commercial applications. Applications include, among others, medicines, food and feed, textiles, cosmetics and the process industry.

The Norwegian government strategically supports the development of marine bioprospecting, as it has the potential to contribute to new and sustainable wealth creation. Tromsø and the northern areas of Norway play a central role in this strategy because of their excellent access to unique Arctic marine organisms, existing marine industries, and R&D competence and infrastructure. Since 2007, science and industry have cooperated closely on bioprospecting and the development and commercialization of new products.
See also
Barents Basin
Continental shelf of Russia
Energy in Norway
List of largest biotechnology & pharmaceutical companies
List of oil and gas fields of the Barents Sea
List of seas
Notes
References
Ole Gunnar Austvik (2006) Oil and gas in the High North, Security Policy Library no. 4, The Norwegian Atlantic Committee. ISSN 0802-6602.
C. Michael Hogan (2008) Polar Bear: Ursus maritimus, Globaltwitcher.com, ed. Nicklas Stromberg.
World Wildlife Fund (2008). Barents Sea environment and conservation.
Zeeberg, JaapJan; David J. Lubinski; Steven L. Forman (September 2001). "Holocene Relative Sea Level History of Novaya Zemlya, Russia and Implications for Late Weichselian Ice-Sheet Loading" (PDF). Quaternary Research. Quaternary Research Center/Elsevier Science. 56 (2): 218–230. Bibcode:2001QuRes..56..218Z. doi:10.1006/qres.2001.2256. ISSN 0033-5894. S2CID 58938344.
External links
"Barents Sea" . Encyclopædia Britannica. Vol. 3 (11th ed.). 1911.
Barents.com—Developing the Barents Region
Foraminifera of the Barents Sea—illustrated catalog |
marine ecosystem | Marine ecosystems are the largest of Earth's aquatic ecosystems and exist in waters that have a high salt content. These systems contrast with freshwater ecosystems, which have a lower salt content. Marine waters cover more than 70% of the surface of the Earth and account for more than 97% of Earth's water supply and 90% of habitable space on Earth. Seawater has an average salinity of 35 parts per thousand of water. Actual salinity varies among different marine ecosystems. Marine ecosystems can be divided into many zones depending upon water depth and shoreline features. The oceanic zone is the vast open part of the ocean where animals such as whales, sharks, and tuna live. The benthic zone consists of substrates below water where many invertebrates live. The intertidal zone is the area between high and low tides. Other near-shore (neritic) zones can include mudflats, seagrass meadows, mangroves, rocky intertidal systems, salt marshes, coral reefs, lagoons. In the deep water, hydrothermal vents may occur where chemosynthetic sulfur bacteria form the base of the food web.
Marine ecosystems are characterized by the biological community of organisms that they are associated with and their physical environment. Classes of organisms found in marine ecosystems include brown algae, dinoflagellates, corals, cephalopods, echinoderms, and sharks.
Marine ecosystems are important sources of ecosystem services, food, and jobs for significant portions of the global population. Human uses of marine ecosystems and pollution in marine ecosystems are significant threats to the stability of these ecosystems. Environmental problems concerning marine ecosystems include unsustainable exploitation of marine resources (for example overfishing of certain species), marine pollution, climate change, and building on coastal areas. Moreover, because the ocean absorbs much of the carbon dioxide that causes global warming, as well as much of the heat it traps, ocean chemistry is changing through processes such as ocean acidification, which in turn threatens marine ecosystems.
Because of the opportunities in marine ecosystems for humans and the threats created by humans, the international community has prioritized "Life below water" as Sustainable Development Goal 14. The goal is to "Conserve and sustainably use the oceans, seas and marine resources for sustainable development".
Types or locations
Marine coastal ecosystems
Coral reefs
Coral reefs are one of the most well-known marine ecosystems in the world, with the largest being the Great Barrier Reef. These reefs are composed of large coral colonies of a variety of species living together. The corals form multiple symbiotic relationships with the organisms around them.
Mangroves
Mangroves are trees or shrubs that grow in low-oxygen soil near coastlines in tropical or subtropical latitudes. They form an extremely productive and complex ecosystem that connects the land and sea. Mangroves consist of species that are not necessarily related to each other and are often grouped for the characteristics they share rather than genetic similarity. Because of their proximity to the coast, they have all developed adaptations such as salt excretion and root aeration to live in salty, oxygen-depleted water. Mangroves can often be recognized by their dense tangle of roots that act to protect the coast by reducing erosion from storm surges, currents, waves, and tides. The mangrove ecosystem is also an important source of food for many species, as well as excellent at sequestering carbon dioxide from the atmosphere, with global mangrove carbon storage estimated at 34 million metric tons per year.
Seagrass meadows
Seagrasses form dense underwater meadows which are among the most productive ecosystems in the world. They provide habitats and food for a diversity of marine life comparable to coral reefs. This includes invertebrates like shrimp and crabs, cod and flatfish, marine mammals and birds. They provide refuges for endangered species such as seahorses, turtles, and dugongs. They function as nursery habitats for shrimps, scallops and many commercial fish species. Seagrass meadows provide coastal storm protection by the way their leaves absorb energy from waves as they hit the coast. They keep coastal waters healthy by absorbing bacteria and nutrients, and slow the speed of climate change by sequestering carbon dioxide into the sediment of the ocean floor.
Seagrasses evolved from marine algae which colonized land and became land plants, and then returned to the ocean about 100 million years ago. However, today seagrass meadows are being damaged by human activities such as pollution from land runoff, fishing boats that drag dredges or trawls across the meadows uprooting the grass, and overfishing which unbalances the ecosystem. Seagrass meadows are currently being destroyed at a rate of about two football fields every hour.
Kelp forests
Kelp forests occur worldwide throughout temperate and polar coastal oceans. In 2007, kelp forests were also discovered in tropical waters near Ecuador.

Physically formed by brown macroalgae, kelp forests provide a unique habitat for marine organisms and are a source for understanding many ecological processes. Over the last century, they have been the focus of extensive research, particularly in trophic ecology, and continue to provoke important ideas that are relevant beyond this unique ecosystem. For example, kelp forests can influence coastal oceanographic patterns and provide many ecosystem services.

However, the influence of humans has often contributed to kelp forest degradation. Of particular concern are the effects of overfishing nearshore ecosystems, which can release herbivores from their normal population regulation and result in the overgrazing of kelp and other algae. This can rapidly result in transitions to barren landscapes where relatively few species persist. Already due to the combined effects of overfishing and climate change, kelp forests have all but disappeared in many especially vulnerable places, such as Tasmania's east coast and the coast of Northern California. The implementation of marine protected areas is one management strategy useful for addressing such issues, since it may limit the impacts of fishing and buffer the ecosystem from additive effects of other environmental stressors.
Estuaries
Estuaries occur where there is a noticeable change in salinity between saltwater and freshwater sources. This is typically found where rivers meet the ocean or sea. The wildlife found within estuaries is unique as the water in these areas is brackish - a mix of freshwater flowing to the ocean and salty seawater. Other types of estuaries also exist and have similar characteristics as traditional brackish estuaries. The Great Lakes are a prime example. There, river water mixes with lake water and creates freshwater estuaries. Estuaries are extremely productive ecosystems that many humans and animal species rely on for various activities: of the 32 largest cities in the world, 22 are located on estuaries, which provide many environmental and economic benefits, such as crucial habitat for many species, and serve as economic hubs for many coastal communities. Estuaries also provide essential ecosystem services such as water filtration, habitat protection, erosion control, gas regulation, and nutrient cycling, and they offer education, recreation and tourism opportunities to people.
Lagoons
Lagoons are areas that are separated from larger water bodies by natural barriers such as coral reefs or sandbars. There are two types of lagoons: coastal and oceanic/atoll lagoons. A coastal lagoon is, as in the definition above, simply a body of water that is separated from the ocean by a barrier. An atoll lagoon is a circular coral reef or several coral islands that surround a lagoon. Atoll lagoons are often much deeper than coastal lagoons. Most lagoons are very shallow, meaning that they are greatly affected by changes in precipitation, evaporation and wind. This means that salinity and temperature vary widely in lagoons and that they can have water that ranges from fresh to hypersaline. Lagoons can be found on coasts all over the world, on every continent except Antarctica, and they are extremely diverse habitats, home to a wide array of species including birds, fish, crabs, plankton and more. Lagoons are also important to the economy as they provide a wide array of ecosystem services in addition to being the home of so many different species. Some of these services include fisheries, nutrient cycling, flood protection, water filtration, and even human tradition.
Salt marsh
Salt marshes are a transition from the ocean to the land, where fresh and saltwater mix. The soil in these marshes is often made up of mud and a layer of organic material called peat. Peat is characterized as waterlogged and root-filled decomposing plant matter that often causes low oxygen levels (hypoxia). These hypoxic conditions cause the growth of the bacteria that give salt marshes the sulfurous smell they are often known for. Salt marshes exist around the world and are needed for healthy ecosystems and a healthy economy. They are extremely productive ecosystems that provide essential services for more than 75 percent of fishery species and protect shorelines from erosion and flooding. Salt marshes can generally be divided into the high marsh, low marsh, and the upland border. The low marsh is closer to the ocean and is flooded at nearly every tide except low tide. The high marsh is located between the low marsh and the upland border, and it is usually flooded only when higher than usual tides are present. The upland border is the freshwater edge of the marsh and is usually located at elevations slightly higher than the high marsh. This region is usually flooded only under extreme weather conditions and experiences much less waterlogging and salt stress than other areas of the marsh.
Intertidal zones
Intertidal zones are the areas that are visible and exposed to air during low tide and covered by saltwater during high tide. There are four physical divisions of the intertidal zone, each with its own distinct characteristics and wildlife: the spray zone, high intertidal zone, middle intertidal zone, and low intertidal zone. The spray zone is a damp area that is usually reached only by ocean spray and is submerged only during very high tides or storms. The high intertidal zone is submerged at high tide but remains dry for long periods between high tides. Due to the large variance of conditions possible in this region, it is inhabited by resilient wildlife that can withstand these changes, such as barnacles, marine snails, mussels and hermit crabs. Tides flow over the middle intertidal zone two times a day, and this zone has a larger variety of wildlife. The low intertidal zone is submerged nearly all the time except during the lowest tides, and life is more abundant here due to the protection that the water gives.
Ocean surface
Organisms that live freely at the surface, termed neuston, include keystone organisms like the golden seaweed Sargassum that makes up the Sargasso Sea, floating barnacles, marine snails, nudibranchs, and cnidarians. Many ecologically and economically important fish species live as or rely upon neuston. Species at the surface are not distributed uniformly; the ocean's surface harbours unique neustonic communities and ecoregions found at only certain latitudes and only in specific ocean basins. But the surface is also on the front line of climate change and pollution. Life on the ocean's surface connects worlds. From shallow waters to the deep sea, the open ocean to rivers and lakes, numerous terrestrial and marine species depend on the surface ecosystem and the organisms found there.

The ocean's surface acts like a skin between the atmosphere above and the water below, and harbours an ecosystem unique to this environment. This sun-drenched habitat can be defined as roughly one metre in depth, as nearly half of UV-B is attenuated within this first meter. Organisms here must contend with wave action and unique chemical and physical properties. The surface is utilised by a wide range of species, from various fish and cetaceans to species that ride on ocean debris (termed rafters). Most prominently, the surface is home to a unique community of free-living organisms, termed neuston (from the Greek word υεω, which means both to swim and to float; floating organisms are also sometimes referred to as pleuston, though neuston is more commonly used). Despite the diversity and importance of the ocean's surface in connecting disparate habitats, and the risks it faces, not a lot is known about neustonic life.

A stream of airborne microorganisms circles the planet above weather systems but below commercial air lanes. Some peripatetic microorganisms are swept up from terrestrial dust storms, but most originate from marine microorganisms in sea spray.
In 2018, scientists reported that hundreds of millions of viruses and tens of millions of bacteria are deposited daily on every square meter around the planet.
Deep sea and sea floor
The deep sea contains up to 95% of the space occupied by living organisms. Combined with the sea floor (or benthic zone), these two areas have yet to be fully explored and have their organisms documented.
Large marine ecosystems
In 1984, the National Oceanic and Atmospheric Administration (NOAA) of the United States developed the concept of large marine ecosystems (sometimes abbreviated to LMEs) to identify areas of the oceans for environmental conservation purposes and to enable collaborative ecosystem-based management in transnational areas, in a way consistent with the 1982 UN Convention on the Law of the Sea. The name refers to relatively large regions on the order of 200,000 km2 (77,000 sq mi) or greater, characterized by their distinct bathymetry, hydrography, productivity, and trophically dependent populations. Such LMEs encompass coastal areas from river basins and estuaries to the seaward boundaries of continental shelves and the outer margins of the major ocean current systems.

Altogether, there are 66 LMEs, which contribute an estimated $3 trillion annually; this includes being responsible for 90% of global annual marine fishery biomass. LME-based conservation is based on recognition that the world's coastal ocean waters are degraded by unsustainable fishing practices, habitat degradation, eutrophication, toxic pollution, aerosol contamination, and emerging diseases, and that positive actions to mitigate these threats require coordinated actions by governments and civil society to recover depleted fish populations, restore degraded habitats and reduce coastal pollution. Five modules are considered when assessing LMEs: productivity, fish and fisheries, pollution and ecosystem health, socioeconomics, and governance. Periodically assessing the state of each module within a marine LME is encouraged to ensure the maintained health of the ecosystem and future benefit to managing governments. The Global Environment Facility (GEF) aids in managing LMEs off the coasts of Africa and Asia by creating resource management agreements between environmental, fisheries, energy and tourism ministers of bordering countries.
This means participating countries share knowledge and resources pertaining to local LMEs to promote longevity and recovery of fisheries and other industries dependent upon LMEs.
Large marine ecosystems include:
Role in ecosystem services
In addition to providing many benefits to the natural world, marine ecosystems also provide social, economic, and biological ecosystem services to humans. Pelagic marine systems regulate the global climate, contribute to the water cycle, maintain biodiversity, provide food and energy resources, and create opportunities for recreation and tourism. Economically, marine systems support billions of dollars worth of capture fisheries, aquaculture, offshore oil and gas, and trade and shipping.
Ecosystem services fall into multiple categories, including supporting services, provisioning services, regulating services, and cultural services.

The productivity of a marine ecosystem can be measured in several ways. Measurements pertaining to zooplankton biodiversity and species composition, zooplankton biomass, water-column structure, photosynthetically active radiation, transparency, chlorophyll-a, nitrate, and primary production are used to assess changes in LME productivity and potential fisheries yield. Sensors attached to the bottom of ships or deployed on floats can measure these metrics and be used to quantitatively describe changes in productivity alongside physical changes in the water column such as temperature and salinity. This data can be used in conjunction with satellite measurements of chlorophyll and sea surface temperatures to validate measurements and observe trends on greater spatial and temporal scales.
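As a rough illustration of the trend analysis described above, the sketch below fits an ordinary least-squares line to yearly chlorophyll-a values to quantify a change in productivity. Both the years and the concentrations are invented illustrative data, not measurements from any LME.

```python
# Hypothetical sketch: OLS slope of yearly chlorophyll-a concentrations.
# All values below are made up for illustration.
years = [2015, 2016, 2017, 2018, 2019, 2020]
chlorophyll_mg_m3 = [1.9, 2.0, 1.8, 1.7, 1.6, 1.5]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(chlorophyll_mg_m3) / n
# OLS slope: sum((x - mx)(y - my)) / sum((x - mx)^2)
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(years, chlorophyll_mg_m3)) \
        / sum((x - mean_x) ** 2 for x in years)
print(f"chlorophyll-a trend: {slope:+.3f} mg/m^3 per year")  # negative = declining
```

The same slope estimate, applied to satellite chlorophyll retrievals, is one simple way to compare ship-based and remote measurements of productivity over time.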
Bottom-trawl surveys and pelagic-species acoustic surveys are used to assess changes in fish biodiversity and abundance in LMEs. Fish populations can be surveyed for stock identification, length, stomach content, age-growth relationships, fecundity, coastal pollution and associated pathological conditions, as well as multispecies trophic relationships. Fish trawls can also collect sediment and inform us about ocean-bottom conditions such as anoxia.
Threats
Human exploitation and development
Coastal marine ecosystems experience growing population pressures, with nearly 40% of people in the world living within 100 km of the coast. Humans often aggregate near coastal habitats to take advantage of ecosystem services. For example, coastal capture fisheries from mangroves and coral reef habitats are estimated to be worth a minimum of $34 billion per year. Yet many of these habitats are either marginally protected or not protected. Mangrove area has declined worldwide by more than one-third since 1950, and 60% of the world's coral reefs are now immediately or directly threatened. Human development, aquaculture, and industrialization often lead to the destruction, replacement, or degradation of coastal habitats.

Moving offshore, pelagic marine systems are directly threatened by overfishing. Global fisheries landings peaked in the late 1980s, but are now declining, despite increasing fishing effort. Fish biomass and the average trophic level of fisheries landings are decreasing, leading to declines in marine biodiversity. In particular, local extinctions have led to declines in large, long-lived, slow-growing species, and those that have narrow geographic ranges. Biodiversity declines can lead to associated declines in ecosystem services. A long-term study reports a decline of 74–92% in catch per unit effort of sharks along the Australian coastline from the 1960s to the 2010s. Such biodiversity losses impact not just species themselves, but humans as well, and can contribute to climate change across the globe. The National Oceanic and Atmospheric Administration (NOAA) states that managing and protecting marine ecosystems is crucial in attempting to conserve biodiversity in the face of Earth's rapidly changing climate.
Pollution
Invasive species
Global aquarium trade
Ballast water transport
Aquaculture
Climate change
Warming temperatures (see ocean heat content, sea surface temperature, and marine heat wave)
Increased frequency/intensity of storms
Ocean acidification
Sea level rise
Society and culture
Global goals
By integrating socioeconomic metrics with ecosystem management solutions, scientific findings can be utilized to benefit both the environment and economy of local regions. Management efforts must be practical and cost-effective. In 2000, the Department of Natural Resource Economics at the University of Rhode Island created a method for measuring and understanding the human dimensions of LMEs and for taking into consideration both the socioeconomic and environmental costs and benefits of managing large marine ecosystems.

International attention to address the threats to coasts has been captured in Sustainable Development Goal 14, "Life Below Water", which sets goals for international policy focused on preserving coastal ecosystems and supporting more sustainable economic practices for coastal communities. Furthermore, the United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
See also
References
External links
U.S. Environmental Protection Agency—EPA: Marine Ecosystems
Smithsonian Institution: Ocean Portal
Marine Ecosystems Research Programme (UK) |
infrared window | The infrared atmospheric window refers to a region of the infrared spectrum where there is relatively little absorption of terrestrial thermal radiation by atmospheric gases. The window plays an important role in the atmospheric greenhouse effect by maintaining the balance between incoming solar radiation and outgoing IR to space. In the Earth's atmosphere this window is roughly the region between 8 and 14 μm, although it can be narrowed or closed at times and places of high humidity because of the strong absorption in the water vapor continuum, or because of blocking by clouds. It covers a substantial part of the spectrum of surface thermal emission, which starts at roughly 5 μm. Principally it is a large gap in the absorption spectrum of water vapor. Carbon dioxide plays an important role in setting the boundary at the long-wavelength end. Ozone partly blocks transmission in the middle of the window.
The importance of the infrared atmospheric window in the atmospheric energy balance was discovered by George Simpson in 1928, based on G. Hettner's 1918 laboratory studies of the gap in the absorption spectrum of water vapor. In those days, computers were not available, and Simpson notes that he used approximations; he writes about the need for this in order to calculate outgoing IR radiation: "There is no hope of getting an exact solution; but by making suitable simplifying assumptions . . . ." Nowadays, accurate line-by-line computations are possible, and careful studies of the spectroscopy of infrared atmospheric gases have been published.
Mechanisms in the infrared atmospheric window
The principal natural greenhouse gases in order of their importance are water vapor H2O, carbon dioxide CO2, ozone O3, methane CH4 and nitrous oxide N2O. The concentration of the least common of these, N2O, is about 400 ppbV. Other gases which contribute to the greenhouse effect are present at pptV levels. These include the chlorofluorocarbons (CFCs), halons and hydrofluorocarbons (HFC and HCFCs). As discussed below, a major reason that they are so effective as greenhouse gases is that they have strong vibrational bands that fall in the infrared atmospheric window. IR absorption by CO2 at 14.7 μm sets the long wavelength limit of the infrared atmospheric window together with absorption by rotational transitions of H2O at slightly longer wavelengths. The short wavelength boundary of the atmospheric IR window is set by absorption in the lowest frequency vibrational bands of water vapor. There is a strong band of ozone at 9.6 μm in the middle of the window which is why it acts as such a strong greenhouse gas. Water vapor has a continuum absorption due to collisional broadening of absorption lines which extends through the window. Local very high humidity can completely block the infrared vibrational window.
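Wien's displacement law makes the placement of the window concrete: the peak of blackbody emission occurs at λ_peak = b/T, so a surface near Earth's mean temperature radiates most strongly inside the 8–14 μm window. The sketch below assumes a surface temperature of 288 K, a typical textbook value not stated in the text above.

```python
# Wien's displacement law: lambda_peak = b / T.
# The 288 K surface temperature is an assumed typical value.
WIEN_B_UM_K = 2897.77  # Wien's displacement constant in μm·K

def peak_emission_wavelength_um(temperature_k: float) -> float:
    """Wavelength (in μm) at which blackbody emission peaks."""
    return WIEN_B_UM_K / temperature_k

peak = peak_emission_wavelength_um(288.0)
print(f"peak terrestrial emission: {peak:.1f} μm")        # ~10.1 μm
print("inside the 8-14 μm window:", 8.0 <= peak <= 14.0)  # True
```

This is why gases with vibrational bands in this range, such as the halogenated compounds discussed later, intercept radiation precisely where the atmosphere would otherwise be transparent.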
Over the Atlas Mountains, interferometrically recorded spectra of outgoing longwave radiation show emission that has arisen from the land surface at a temperature of about 320 K and passed through the atmospheric window, and non-window emission that has arisen mainly from the troposphere at temperatures about 260 K.
Over Côte d'Ivoire, interferometrically recorded spectra of outgoing longwave radiation show emission that has arisen from the cloud tops at a temperature of about 265 K and passed through the atmospheric window, and non-window emission that has arisen mainly from the troposphere at temperatures about 240 K. This means that, at the scarcely absorbed continuum of wavelengths (8 to 14 μm), the radiation emitted, by the Earth's surface into a dry atmosphere, and by the cloud tops, mostly passes unabsorbed through the atmosphere, and is emitted directly to space; there is also partial window transmission in far infrared spectral lines between about 16 and 28 μm. Clouds are excellent emitters of infrared radiation. Window radiation from cloud tops arises at altitudes where the air temperature is low, but as seen from those altitudes, the water vapor content of the air above is much lower than that of the air at the land-sea surface. Moreover, the water vapour continuum absorptivity, molecule for molecule, decreases with pressure decrease. Thus water vapour above the clouds, besides being less concentrated, is also less absorptive than water vapour at lower altitudes. Consequently, the effective window as seen from the cloud-top altitudes is more open, with the result that the cloud tops are effectively strong sources of window radiation; that is to say, in effect the clouds obstruct the window only to a small degree (see another opinion about this, proposed by Ahrens (2009) on page 43).
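The brightness-temperature contrast in these spectra can be checked with Planck's law. The sketch below is an illustrative blackbody calculation, not data from the interferometric measurements; it compares spectral radiance at 10 μm, inside the window, for a warm land surface (~320 K, as over the Atlas Mountains) and the mid-troposphere (~260 K).

```python
import math

# Planck's law B(lambda, T); SI constants.
H = 6.62607015e-34   # Planck constant, J·s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(lam_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance (W·m^-3·sr^-1) at wavelength lam_m (m)."""
    x = H * C / (lam_m * KB * temp_k)
    return 2.0 * H * C**2 / (lam_m**5 * (math.exp(x) - 1.0))

lam = 10e-6  # 10 μm, inside the 8-14 μm window
ratio = planck_radiance(lam, 320.0) / planck_radiance(lam, 260.0)
print(f"window radiance, 320 K surface vs 260 K troposphere: {ratio:.1f}x")  # ~2.8x
```

The roughly threefold radiance difference is why window emission from hot surfaces dominates the outgoing spectrum at these wavelengths.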
Importance for life
Without the infrared atmospheric window, the Earth would become much too warm to support life, and possibly so warm that it would lose its water, as Venus did early in Solar System history. Thus, the existence of an atmospheric window is critical to Earth remaining a habitable planet.
As a proposed management strategy for global warming, passive daytime radiative cooling (PDRC) surfaces use the infrared window to send heat back into outer space, with the aim of counteracting the temperature increases caused by climate change.
Threats
In recent decades, the existence of the infrared atmospheric window has become threatened by the development of highly unreactive gases containing bonds between fluorine and carbon, sulfur or nitrogen. The impact of these compounds was first discovered by Indian–American atmospheric scientist Veerabhadran Ramanathan in 1975, one year after Rowland and Molina's much-more-celebrated paper on the ability of chlorofluorocarbons to destroy stratospheric ozone.
The "stretching frequencies" of bonds between fluorine and other light nonmetals are such that strong absorption in the atmospheric window will always be characteristic of compounds containing such bonds, although fluorides of nonmetals other than carbon, nitrogen or sulfur are short-lived due to hydrolysis. This absorption is strengthened because these bonds are highly polar due to the extreme electronegativity of the fluorine atom. Bonds to chlorine and bromine also absorb in the atmospheric window, though much less strongly.
Moreover, the unreactive nature of such compounds that makes them so valuable for many industrial purposes means that they are not removable in the natural circulation of the Earth's lower atmosphere. Extremely small natural sources, created by means of radioactive oxidation of fluorite and subsequent reaction with sulfate or carbonate minerals, produce via degassing atmospheric concentrations of about 40 ppt for all perfluorocarbons and 0.01 ppt for sulfur hexafluoride, but the only natural ceiling is via photolysis in the mesosphere and upper stratosphere. It is estimated that perfluorocarbons (CF4, C2F6, C3F8), originating from commercial production of anesthetics, refrigerants, and polymers, can stay in the atmosphere for between 2,600 and 50,000 years.

This means that such compounds possess enormous global warming potential. One kilogram of sulfur hexafluoride will, for example, cause as much warming as 26.7 tonnes of carbon dioxide over 100 years, and as much as 37.6 tonnes over 500 years. Perfluorocarbons are similar in this respect, and even carbon tetrachloride (CCl4) has a global warming potential of 2310 compared to carbon dioxide. Quite short-lived halogenated compounds can still have fairly high global warming potentials: for instance chloroform, with a lifetime of 0.5 years, still has a global warming potential of 22; halothane, with a lifetime of only one year, has a GWP of 47 over 100 years; and Halon 1202, with a lifetime of 2.9 years, has a 100-year global warming potential 231 times that of carbon dioxide. These compounds remain highly problematic, with an ongoing effort to find substitutes for them.
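The CO2-equivalence figures above follow from a simple multiplication of emitted mass by the gas's global warming potential over a chosen time horizon. A minimal sketch, using only the 100-year GWP values quoted in the text (the function name is illustrative):

```python
# CO2-equivalent mass = gas mass × GWP over the chosen time horizon.
# GWP values are those quoted in the text above (100-year horizon).
GWP_100 = {
    "SF6": 26_700,   # sulfur hexafluoride: 1 kg ≈ 26.7 t CO2
    "CCl4": 2_310,   # carbon tetrachloride
    "chloroform": 22,
    "halothane": 47,
    "Halon 1202": 231,
}

def co2_equivalent_tonnes(gas_kg: float, gas: str) -> float:
    """Tonnes of CO2 with the same 100-year warming effect as gas_kg of gas."""
    return gas_kg * GWP_100[gas] / 1000.0  # convert kg of CO2 to tonnes

print(co2_equivalent_tonnes(1.0, "SF6"))  # 26.7, matching the figure above
```

The same arithmetic over a 500-year horizon would use the larger GWP implied by the 37.6-tonne figure, since SF6 outlives most other greenhouse gases.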
See also
Greenhouse effect
Greenhouse gas
Infrared astronomy
Optical window
Ozone depletion
Radio window
References
Books
Mihalas, D.; Weibel-Mihalas, B. (1984). Foundations of Radiation Hydrodynamics. Oxford University Press. ISBN 0-19-503437-6. Archived from the original on 2011-10-08. Retrieved 2009-10-17.
External links
IR Atmospheric Window Archived 2018-10-11 at the Wayback Machine |
how global warming works | How Global Warming Works is a website developed by Michael Ranney, a professor of cognitive psychology at the University of California, Berkeley. The stated goal of the website is to educate the public on the mechanisms of global warming, which was motivated by research Ranney and colleagues conducted on attitudes towards and understanding of global warming.
Background
The motivation for the website came from two studies conducted by Ranney and colleagues. In the first study, they hypothesized that one of the factors explaining why fewer Americans believe in global warming than do people in other industrialized nations is that they do not understand the mechanism of global warming. To test this hypothesis, they anonymously surveyed 270 park visitors and community college students in San Diego. They reported that none of the 270 participants could explain the basic mechanism of global warming, even though 80% thought that global warming was real and 77% thought that humans contributed to it. In the second study, they hypothesized that if people understood the mechanism of global warming, their understanding and acceptance of it would increase. Using a 400-word explanation of global warming, they tested their hypothesis on students from the University of California, Berkeley and from the University of Texas at Brownsville. The following summary of the explanation given to the students to read was provided in Scientific American:
Summary: (a) Earth absorbs most of the sunlight it receives; (b) Earth then emits the absorbed light's energy as infrared light; (c) greenhouse gases absorb a lot of the infrared light before it can leave our atmosphere; (d) being absorbed slows the rate at which energy escapes to space; and (e) the slower passage of energy heats up the atmosphere, water, and ground. By increasing the amount of greenhouse gases in the atmosphere, humans are increasing the atmosphere’s absorption of infrared light, thereby warming Earth and disrupting global climate patterns.
They reported that by reading a brief description of the mechanism of global warming, participants in the study increased both their understanding and acceptance of global warming. These results, which have been repeatedly replicated, motivated them to launch a new website with the aim of providing website visitors with videos of the mechanisms of global warming so that they could educate themselves on how global warming works.
Website
The website provides videos ranging from 52 seconds to under 5 minutes that describe and illustrate the mechanisms of global warming. It also provides seven statistics that have been shown by Ranney and Clark to increase global warming acceptance. Further, the website's videos have been translated into Mandarin and German, and transcripts of the videos in several other languages are available. Texts explaining global warming's mechanism are also available. Some of the site's information has been translated into Mandarin, and the Mandarin videos are available on Youku.
Analysis
In 2014 Dan Kahan was skeptical about Ranney's approach and this website's large-scale effectiveness in educating people about global warming, telling Nova, "I don't think it makes sense to believe that if you tell people in five-minute lectures about climate science, that it's going to solve the problem". However, Ranney and his colleagues have been assessing the videos in randomized controlled experiments and indicate that the videos (including a four-minute German video), like the 400-word mechanistic text, increase viewers' global warming acceptance—as do the aforementioned representative statistics. In addition, the website contrasts the change in earth's temperature since 1880 with the change in the value of the Dow Jones Industrial Average (adjusted for inflation); this contrast also increases readers' global warming acceptance.
See also
Public opinion on climate change
Global warming controversy
References
External links
Official website
Main Mandarin page of website |
rain | Rain is water droplets that have condensed from atmospheric water vapor and then fall under gravity. Rain is a major component of the water cycle and is responsible for depositing most of the fresh water on the Earth. It provides water for hydroelectric power plants, crop irrigation, and suitable conditions for many types of ecosystems.
The major cause of rain production is moisture moving along three-dimensional zones of temperature and moisture contrasts known as weather fronts. If enough moisture and upward motion are present, precipitation falls from convective clouds (those with strong upward vertical motion) such as cumulonimbus (thunder clouds), which can organize into narrow rainbands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized on the windward sides of terrain at elevation, forcing moist air to condense and fall out as rainfall along the sides of mountains. On the leeward side of mountains, desert climates can exist due to the dry air caused by downslope flow, which heats and dries the air mass. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah climes.
The urban heat island effect leads to increased rainfall, both in amounts and intensity, downwind of cities. Global warming is also causing changes in the precipitation pattern globally, including wetter conditions across eastern North America and drier conditions in the tropics. Antarctica is the driest continent. The globally averaged annual precipitation over land is 715 mm (28.1 in), but over the whole Earth, it is much higher at 990 mm (39 in). Climate classification systems such as the Köppen classification system use average annual rainfall to help differentiate between differing climate regimes. Rainfall is measured using rain gauges. Rainfall amounts can be estimated by weather radar.
Formation
Water-saturated air
Air contains water vapor, and the amount of water in a given mass of dry air, known as the mixing ratio, is measured in grams of water per kilogram of dry air (g/kg). The amount of moisture in the air is also commonly reported as relative humidity, which is the percentage of the total water vapor air can hold at a particular air temperature. How much water vapor a parcel of air can contain before it becomes saturated (100% relative humidity) and forms into a cloud (a group of visible and tiny water and ice particles suspended above the Earth's surface) depends on its temperature. Warmer air can contain more water vapor than cooler air before becoming saturated. Therefore, one way to saturate a parcel of air is to cool it. The dew point is the temperature to which a parcel must be cooled in order to become saturated.

There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling. Adiabatic cooling occurs when air rises and expands. The air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface, usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation.

The main ways water vapor is added to the air are wind convergence into areas of upward motion, precipitation or virga falling from above, daytime heating evaporating water from the surface of oceans, water bodies or wet land, transpiration from plants, cool or dry air moving over warmer water, and lifting air over mountains.
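As a rough illustration of the dew point concept, the Magnus approximation (a common empirical formula assumed here; the text itself does not specify one) estimates the dew point from temperature and relative humidity:

```python
import math

# Magnus approximation for dew point -- an empirical formula, not exact
# thermodynamics. Coefficients below are one common choice (degC range).
A, B = 17.625, 243.04

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Temperature (degC) to which a parcel must cool to become saturated."""
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

# Saturated air (100% RH) is already at its dew point:
print(round(dew_point_c(20.0, 100.0), 1))  # 20.0
# Drier air must cool further before condensation begins:
print(round(dew_point_c(20.0, 50.0), 1))   # 9.3
```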
Water vapor normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. Elevated portions of weather fronts (which are three-dimensional in nature) force broad areas of upward motion within the Earth's atmosphere which form clouds decks such as altostratus or cirrostratus. Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions.
Coalescence and fragmentation
Coalescence occurs when water droplets fuse to create larger water droplets. Air resistance typically causes the water droplets in a cloud to remain stationary. When air turbulence occurs, water droplets collide, producing larger droplets.
As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain. Coalescence generally happens most often in clouds above freezing and is also known as the warm rain process. In clouds below freezing, when ice crystals gain enough mass they begin to fall. This generally requires more mass than coalescence when occurring between the crystal and neighboring water droplets. This process is temperature dependent, as supercooled water droplets only exist in a cloud that is below freezing. In addition, because of the great temperature difference between cloud and ground level, these ice crystals may melt as they fall and become rain.

Raindrops have sizes ranging from 0.1 to 9 mm (0.0039 to 0.3543 in) mean diameter but develop a tendency to break up at larger sizes. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Large rain drops become increasingly flattened on the bottom, like hamburger buns; very large ones are shaped like parachutes. Contrary to popular belief, their shape does not resemble a teardrop. The biggest raindrops on Earth were recorded over Brazil and the Marshall Islands in 2004; some of them were as large as 10 mm (0.39 in). The large size is explained by condensation on large smoke particles or by collisions between drops in small regions with particularly high content of liquid water. Raindrops associated with melting hail tend to be larger than other raindrops.

Intensity and duration of rainfall are usually inversely related; that is, high-intensity storms are likely to be of short duration, and low-intensity storms can have a long duration.
Droplet size distribution
The final droplet size distribution is an exponential distribution: the number of droplets per unit volume with diameter between d and d + dd is given by n(d) dd = n₀ e^(−d/⟨d⟩) dd. This is commonly referred to as the Marshall–Palmer law after the researchers who first characterized it. The parameters are somewhat temperature-dependent, and the slope also scales with the rate of rainfall as ⟨d⟩⁻¹ = 41 R^(−0.21) (d in centimeters and R in millimeters per hour). Deviations can occur for small droplets and during different rainfall conditions. The distribution tends to fit averaged rainfall, while instantaneous size spectra often deviate and have been modeled as gamma distributions. The distribution has an upper limit due to droplet fragmentation.
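The Marshall–Palmer relation above, with mean diameter set by ⟨d⟩⁻¹ = 41 R^(−0.21), can be sampled directly as an exponential distribution; a minimal sketch:

```python
import random

# Marshall-Palmer drop size distribution as given in the text:
# 1/<d> = 41 * R**-0.21, with d in cm and R (rain rate) in mm/h.
# Sampling the exponential shape does not require the intercept n0.

def mean_diameter_cm(rain_rate_mm_h: float) -> float:
    return 1.0 / (41.0 * rain_rate_mm_h ** -0.21)

def sample_diameters_cm(rain_rate_mm_h: float, n: int, seed: int = 0) -> list:
    """Draw n drop diameters (cm) from the exponential distribution."""
    rng = random.Random(seed)
    mean = mean_diameter_cm(rain_rate_mm_h)
    return [rng.expovariate(1.0 / mean) for _ in range(n)]

# Heavier rain shifts the distribution toward larger drops:
print(round(mean_diameter_cm(1.0) * 10, 2), "mm")   # 0.24 mm (light rain)
print(round(mean_diameter_cm(25.0) * 10, 2), "mm")  # 0.48 mm (heavy rain)
```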
Raindrop impacts
Raindrops impact at their terminal velocity, which is greater for larger drops due to their larger mass-to-drag ratio. At sea level and without wind, 0.5 mm (0.020 in) drizzle impacts at 2 m/s (6.6 ft/s) or 7.2 km/h (4.5 mph), while large 5 mm (0.20 in) drops impact at around 9 m/s (30 ft/s) or 32 km/h (20 mph).

Rain falling on loosely packed material such as newly fallen ash can produce dimples that can be fossilized, called raindrop impressions. The air density dependence of the maximum raindrop diameter, together with fossil raindrop imprints, has been used to constrain the density of the air 2.7 billion years ago. The sound of raindrops hitting water is caused by bubbles of air oscillating underwater.

The METAR code for rain is RA, while the coding for rain showers is SHRA.
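The two figures quoted above are reproduced by a widely used empirical fit for raindrop terminal velocity at sea level (the Atlas et al. 1973 formula, an assumption beyond this text):

```python
import math

# Empirical terminal-velocity fit for raindrops at sea level
# (Atlas et al. 1973 -- not specified by the text, but consistent
# with the ~2 m/s and ~9 m/s figures it quotes).
def terminal_velocity_m_s(diameter_mm: float) -> float:
    return 9.65 - 10.3 * math.exp(-0.6 * diameter_mm)

print(round(terminal_velocity_m_s(0.5), 1))  # 2.0 -> matches drizzle figure
print(round(terminal_velocity_m_s(5.0), 1))  # 9.1 -> matches large-drop figure
```

Note the fit saturates near 9.65 m/s, reflecting the fact that larger drops flatten and eventually break up rather than falling ever faster.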
Virga
In certain conditions, precipitation may fall from a cloud but then evaporate or sublime before reaching the ground. This is termed virga and is more often seen in hot and dry climates.
Causes
Frontal activity
Stratiform (a broad shield of precipitation with a relatively similar intensity) and dynamic precipitation (convective precipitation which is showery in nature with large changes in intensity over short distances) occur as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as in the vicinity of cold fronts and near and poleward of surface warm fronts. Similar ascent is seen around tropical cyclones outside the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones.A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually, their passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas. What separates rainfall from other precipitation types, such as ice pellets and snow, is the presence of a thick layer of air aloft which is above the melting point of water, which melts the frozen precipitation well before it reaches the ground. If there is a shallow near-surface layer that is below freezing, freezing rain (rain which freezes on contact with surfaces in subfreezing environments) will result. Hail becomes an increasingly infrequent occurrence when the freezing level within the atmosphere exceeds 3,400 m (11,000 ft) above ground level.
Convection
Convective rain, or showery precipitation, occurs from convective clouds (e.g., cumulonimbus or cumulus congestus). It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs. Graupel and hail indicate convection. In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts.
Orographic effects
Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming leeward side, where a rain shadow is observed.

In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall, as it is amongst the places in the world with the highest levels of rainfall, with 9,500 mm (373 in). Systems known as Kona storms affect the state with heavy rains between October and April. Local climates vary considerably on each island due to their topography, divisible into windward (Koʻolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover.

In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desert-like climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America, forming the Great Basin and Mojave Deserts.
Within the tropics
The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls. The term green season is also sometimes used as a euphemism by tourist authorities. Areas with wet seasons are dispersed across portions of the tropics and subtropics. Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year. Some areas with pronounced rainy seasons will see a break in rainfall mid-season when the intertropical convergence zone or monsoon trough move poleward of their location during the middle of the warm season. When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves, and vegetation grows significantly.
Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (southern hemisphere) or counterclockwise (northern hemisphere). Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage.
Human influence
The fine particulate matter produced by car exhaust and other human sources of pollution forms cloud condensation nuclei, leading to the production of clouds and an increased likelihood of rain. As commuters and commercial traffic cause pollution to build up over the course of the week, the likelihood of rain increases: it peaks by Saturday, after five days of weekday pollution has built up. In heavily populated areas that are near the coast, such as the United States' Eastern Seaboard, the effect can be dramatic: there is a 22% higher chance of rain on Saturdays than on Mondays. The urban heat island effect warms cities 0.6 to 5.6 °C (1.1 to 10.1 °F) above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater between 32 and 64 km (20 and 40 mi) downwind of cities, compared with upwind. Some cities induce a total precipitation increase of 51%.

Increasing temperatures tend to increase evaporation, which can lead to more precipitation. Precipitation generally increased over land north of 30°N from 1900 through 2005 but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter. The Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts, especially in the tropics and subtropics.
Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation and/or more evaporation). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1 percent since 1900, with the greatest increases within the East North Central climate region (11.6 percent per century) and the South (11.1 percent). Hawaii was the only region to show a decrease (−9.25 percent).

Analysis of 65 years of United States rainfall records shows the lower 48 states have had an increase in heavy downpours since 1950. The largest increases are in the Northeast and Midwest, which in the past decade have seen 31 and 16 percent more heavy downpours compared to the 1950s. Rhode Island is the state with the largest increase, 104%. McAllen, Texas is the city with the largest increase, 700%. Heavy downpours in the analysis are days where total precipitation exceeded the top one percent of all rain and snow days during the years 1950–2014.

The most successful attempts at influencing weather involve cloud seeding, which includes techniques used to increase winter precipitation over mountains and to suppress hail.
Characteristics
Patterns
Rainbands are cloud and precipitation areas which are significantly elongated. Rainbands can be stratiform or convective, and are generated by differences in temperature. When noted on weather radar imagery, this precipitation elongation is referred to as banded structure. Rainbands in advance of warm occluded fronts and warm fronts are associated with weak upward motion, and tend to be wide and stratiform in nature.

Rainbands spawned near and ahead of cold fronts can be squall lines which are able to produce tornadoes. Rainbands associated with cold fronts can be warped by mountain barriers perpendicular to the front's orientation due to the formation of a low-level barrier jet. Bands of thunderstorms can form with sea breeze and land breeze boundaries if enough moisture is present. If sea breeze rainbands become active enough just ahead of a cold front, they can mask the location of the cold front itself.

Once a cyclone occludes, an occluded front (a trough of warm air aloft) will be caused by strong southerly winds on its eastern periphery rotating aloft around its northeast, and ultimately northwestern, periphery (also termed the warm conveyor belt), forcing a surface trough to continue into the cold sector on a similar curve to the occluded front. The front creates the portion of an occluded cyclone known as its comma head, due to the comma-like shape of the mid-tropospheric cloudiness that accompanies the feature. It can also be the focus of locally heavy precipitation, with thunderstorms possible if the atmosphere along the front is unstable enough for convection. Banding within the comma head precipitation pattern of an extratropical cyclone can yield significant amounts of rain. Behind extratropical cyclones during fall and winter, rainbands can form downwind of relatively warm bodies of water such as the Great Lakes. Downwind of islands, bands of showers and thunderstorms can develop due to low-level wind convergence downwind of the island edges.
Offshore California, this has been noted in the wake of cold fronts.

Rainbands within tropical cyclones are curved in orientation. Tropical cyclone rainbands contain showers and thunderstorms that, together with the eyewall and the eye, constitute a hurricane or tropical storm. The extent of rainbands around a tropical cyclone can help determine the cyclone's intensity.
Acidity
The phrase acid rain was first used by Scottish chemist Robert Angus Smith in 1852. The pH of rain varies, especially due to its origin. On America's East Coast, rain that is derived from the Atlantic Ocean typically has a pH of 5.0–5.6; rain that comes across the continent from the west has a pH of 3.8–4.8; and local thunderstorms can have a pH as low as 2.0. Rain becomes acidic primarily due to the presence of two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3). Sulfuric acid is derived from natural sources such as volcanoes and wetlands (sulfate-reducing bacteria), and from anthropogenic sources such as the combustion of fossil fuels and mining where H2S is present. Nitric acid is produced by natural sources such as lightning, soil bacteria, and natural fires, and anthropogenically by the combustion of fossil fuels and from power plants. In the past 20 years, the concentrations of nitric and sulfuric acid in rainwater have decreased, which may be due to the significant increase in ammonium (most likely as ammonia from livestock production), which acts as a buffer in acid rain and raises the pH.
Köppen climate classification
The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E. Specifically, the primary types are A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar. The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert.
Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between 1,750 and 2,000 mm (69 and 79 in). A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between 750 and 1,270 mm (30 and 50 in) a year. Savannas are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia. The humid subtropical climate zone is where winter rainfall is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east sides of continents, roughly between latitudes 20° and 40° away from the equator.

An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as southeastern Australia, and is accompanied by plentiful precipitation year-round. The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of Western and South Australia, southwestern South Africa and parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold, with continuous permafrost and little precipitation.
Measurement
Gauges
Rain is measured in units of length per unit time, typically in millimeters per hour, or in countries where imperial units are more common, inches per hour. The "length", or more accurately, "depth" being measured is the depth of rain water that would accumulate on a flat, horizontal and impermeable surface during a given amount of time, typically an hour. One millimeter of rainfall is the equivalent of one liter of water per square meter.

The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in 100-mm (4-in) plastic and 200-mm (8-in) metal varieties. The inner cylinder is filled by 25 mm (0.98 in) of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to 0.25 mm (0.0098 in) resolution, while metal gauges require use of a stick designed with the appropriate 0.25 mm (0.0098 in) markings. After the inner cylinder is filled, the amount inside it is discarded, then filled with the remaining rainfall in the outer cylinder until all the fluid in the outer cylinder is gone, adding to the overall total until the outer cylinder is empty. Other types of gauges include the popular wedge gauge (the cheapest and most fragile rain gauge), the tipping bucket rain gauge, and the weighing rain gauge. To measure rainfall most inexpensively, a cylindrical can with straight sides will act as a rain gauge if left out in the open, but its accuracy will depend on the ruler used to measure the rain. Any of the above rain gauges can be made at home, with enough know-how.

When a precipitation measurement is made, various networks exist across the United States and elsewhere where rainfall measurements can be submitted through the Internet, such as CoCoRaHS or GLOBE. If a network is not available in the area where one lives, the nearest local weather or met office will likely be interested in the measurement.
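The equivalence stated above (one millimeter of rainfall equals one liter per square meter) makes converting a gauge reading into collected volume a one-line calculation; the 100 mm gauge diameter below is the plastic variety mentioned in the text.

```python
import math

def collected_volume_liters(rain_mm: float, gauge_diameter_mm: float) -> float:
    """Liters of water caught by a circular gauge opening.

    1 mm of rain over 1 m^2 is exactly 1 L, so volume (L) = depth (mm) * area (m^2).
    """
    radius_m = gauge_diameter_mm / 2.0 / 1000.0
    area_m2 = math.pi * radius_m ** 2
    return rain_mm * area_m2

# A full 25 mm inner cylinder on a 100 mm plastic gauge holds about 0.2 L:
print(round(collected_volume_liters(25.0, 100.0), 2))  # 0.2
```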
Remote sensing
One of the main uses of weather radar is to assess the amount of precipitation fallen over large basins for hydrological purposes. For instance, river flood control, sewer management and dam construction are all areas where planners use rainfall accumulation data. Radar-derived rainfall estimates complement surface station data, which can be used for calibration. To produce radar accumulations, rain rates over a point are estimated by using the value of reflectivity data at individual grid points. A radar equation of the form

Z = A R^b

is then used, where Z represents the radar reflectivity, R represents the rainfall rate, and A and b are constants.
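Inverting Z = A R^b gives a rain-rate estimate from reflectivity. The constants A = 200 and b = 1.6 below are the classic Marshall–Palmer pair for stratiform rain; this is an assumed choice for illustration, as operational radars use various constant pairs.

```python
# Estimate rain rate from radar reflectivity by inverting Z = A * R**b.
# A = 200, b = 1.6 (Marshall-Palmer) are assumed here; other pairs exist.

def rain_rate_mm_h(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity (mm^6 / m^3)
    return (z / a) ** (1.0 / b)     # invert Z = a * R**b

for dbz in (20, 30, 40, 50):
    # 30 dBZ corresponds to roughly 2.7 mm/h with these constants
    print(dbz, "dBZ ->", round(rain_rate_mm_h(dbz), 1), "mm/h")
```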
Satellite-derived rainfall estimates use passive microwave instruments aboard polar orbiting as well as geostationary weather satellites to indirectly measure rainfall rates. If one wants an accumulated rainfall over a time period, one has to add up all the accumulations from each grid box within the images during that time.
Intensity
Rainfall intensity is classified according to the rate of precipitation, which depends on the considered time. The following categories are used to classify rainfall intensity:
Light rain — when the precipitation rate is < 2.5 mm (0.098 in) per hour
Moderate rain — when the precipitation rate is between 2.5 mm (0.098 in) and either 7.6 mm (0.30 in) or 10 mm (0.39 in) per hour
Heavy rain — when the precipitation rate is > 7.6 mm (0.30 in) per hour, or between 10 mm (0.39 in) and 50 mm (2.0 in) per hour
Violent rain — when the precipitation rate is > 50 mm (2.0 in) per hour

Terms used for a heavy or violent rain include gully washer, trash-mover and toad-strangler.
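The categories above map directly onto threshold checks; a minimal classifier, taking 7.6 mm/h as the moderate/heavy boundary (the lower of the two thresholds listed):

```python
# Classify a precipitation rate using the thresholds listed above.
# The 7.6 mm/h moderate/heavy boundary is one of the two listed options.

def classify_rain(rate_mm_h: float) -> str:
    if rate_mm_h < 2.5:
        return "light"
    if rate_mm_h < 7.6:
        return "moderate"
    if rate_mm_h < 50.0:
        return "heavy"
    return "violent"

print(classify_rain(1.0))   # light
print(classify_rain(5.0))   # moderate
print(classify_rain(20.0))  # heavy
print(classify_rain(60.0))  # violent
```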
The intensity can also be expressed by rainfall erosivity R-factor or in terms of the rainfall time-structure n-index.
Return period
The average time between occurrences of an event with a specified intensity and duration is called the return period. The intensity of a storm can be predicted for any return period and storm duration, from charts based on historic data for the location. The return period is often expressed as an n-year event. For instance, a 10-year storm describes a rare rainfall event occurring on average once every 10 years; the rainfall will be greater and the flooding worse than the worst storm expected in any single year. A 100-year storm describes an extremely rare rainfall event occurring on average once in a century; the rainfall will be extreme and the flooding worse than that of a 10-year event. The probability of an event in any year is the inverse of the return period (assuming the probability remains the same for each year). For instance, a 10-year storm has a 10 percent probability of occurring in any given year, and a 100-year storm occurs with a 1 percent probability in a year. As with all probability events, it is possible, though improbable, to have multiple 100-year storms in a single year.
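The return-period arithmetic above can be made concrete: the annual probability is the inverse of the return period, and the chance of seeing at least one such event over a longer horizon follows from the complement rule.

```python
# Return-period probabilities, assuming the annual probability is the
# same every year and years are independent (as the text assumes).

def annual_probability(return_period_years: float) -> float:
    return 1.0 / return_period_years

def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    """Probability of at least one event within the horizon."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

print(annual_probability(100))                # 0.01, i.e. 1% per year
print(round(prob_at_least_one(100, 100), 3))  # 0.634: a "100-year" storm is
                                              # far from certain in a century
```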
Forecasting
The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time period over a specified area. A QPF will be specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200 and 1800 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid to late 1990s, QPFs were used within hydrologic forecast models to simulate impact to rivers throughout the United States.

Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, or the lowest levels of the atmosphere, which decreases with height. QPF can be generated on a quantitative basis (forecasting amounts) or a qualitative basis (forecasting the probability of a specific amount). Radar imagery forecasting techniques show higher skill than model forecasts within 6 to 7 hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast.
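The verification step mentioned above can be sketched with a 2x2 contingency table comparing forecast and observed rain against a threshold. Probability of detection and false alarm ratio are standard verification measures, though the text does not name specific scores, so the choice below is an assumption.

```python
# Minimal rain/no-rain forecast verification against gauge observations.
# Scores shown (POD, FAR) are standard choices, assumed for illustration.

def verification_scores(forecast, observed, threshold=0.2):
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        fc, ob = f >= threshold, o >= threshold
        if fc and ob:
            hits += 1            # rain forecast, rain observed
        elif not fc and ob:
            misses += 1          # rain observed but not forecast
        elif fc and not ob:
            false_alarms += 1    # rain forecast but not observed
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    return pod, far

# Hypothetical forecast vs. gauge amounts (mm) at four sites:
pod, far = verification_scores([1.0, 0.0, 3.0, 0.5], [0.8, 0.0, 0.0, 0.4])
print(pod, round(far, 2))  # 1.0 0.33
```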
Impact
Agricultural
Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive, so rain (the most effective means of watering) is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating to crops. Drought can kill crops and increase erosion, while overly wet weather can cause harmful fungus growth. Plants need varying amounts of rainfall to survive. For example, certain cacti require small amounts of water, while tropical plants may need up to hundreds of inches of rain per year to survive.
In areas with wet and dry seasons, soil nutrients diminish and erosion increases during the wet season. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season. Rain may be harvested through the use of rainwater tanks and treated for potable use, or used untreated for non-potable purposes indoors or for irrigation. Excessive rain during short periods of time can cause flash floods.
Culture and religion
Cultural attitudes towards rain differ across the world. In temperate climates, people tend to be more stressed when the weather is unstable or cloudy, with its impact greater on men than women. Rain can also bring joy, as some consider it to be soothing or enjoy the aesthetic appeal of it. In dry places, such as India, or during periods of drought, rain lifts people's moods. In Botswana, the Setswana word for rain, pula, is used as the name of the national currency, in recognition of the economic importance of rain in that country, which has a desert climate. Several cultures have developed means of dealing with rain, including protection devices such as umbrellas and raincoats, and diversion devices such as gutters and storm drains that lead rain to sewers. Many people find the scent during and immediately after rain pleasant or distinctive. The source of this scent is petrichor, an oil produced by plants, then absorbed by rocks and soil, and later released into the air during rainfall.
Rain holds an important religious significance in many cultures. The ancient Sumerians believed that rain was the semen of the sky god An, which fell from the heavens to inseminate his consort, the earth goddess Ki, causing her to give birth to all the plants of the earth. The Akkadians believed that the clouds were the breasts of Anu's consort Antu and that rain was milk from her breasts. According to Jewish tradition, in the first century BC, the Jewish miracle-worker Honi ha-M'agel ended a three-year drought in Judaea by drawing a circle in the sand and praying for rain, refusing to leave the circle until his prayer was granted. In his Meditations, the Roman emperor Marcus Aurelius preserves a prayer for rain made by the Athenians to the Greek sky god Zeus. Various Native American tribes are known to have historically conducted rain dances in an effort to encourage rainfall. Rainmaking rituals are also important in many African cultures. In the present-day United States, various state governors have held Days of Prayer for rain, including the Days of Prayer for Rain in the State of Texas in 2011.
Global climatology
Approximately 505,000 km3 (121,000 cu mi) of water falls as precipitation each year across the globe with 398,000 km3 (95,000 cu mi) of it over the oceans. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 mm (39 in). Deserts are defined as areas with an average annual precipitation of less than 250 mm (10 in) per year, or as areas where more water is lost by evapotranspiration than falls as precipitation.
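The 990 mm figure follows from dividing the global precipitation volume by Earth's surface area (about 510 million km², an assumed round value); a quick sketch:

```python
# Back-of-the-envelope check of the globally averaged annual
# precipitation: 505,000 km3 of water spread over Earth's surface
# area (assumed here as ~510 million km2) gives a depth near 990 mm.
total_precip_km3 = 505_000          # annual global precipitation volume
earth_surface_km2 = 510_000_000     # approximate surface area of Earth

depth_km = total_precip_km3 / earth_surface_km2
depth_mm = depth_km * 1_000_000     # 1 km = 1,000,000 mm
print(round(depth_mm))              # 990
```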
Deserts
The northern half of Africa is dominated by the world's most extensive hot, dry region, the Sahara Desert. Some deserts also occupy much of southern Africa: the Namib and the Kalahari. Across Asia, a large annual rainfall minimum, composed primarily of deserts, stretches from the Gobi Desert in Mongolia west-southwest through western Pakistan (Balochistan) and Iran into the Arabian Desert in Saudi Arabia. Most of Australia is semi-arid or desert, making it the world's driest inhabited continent. In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desert-like climate just downwind across western Argentina. The drier areas of the United States are regions where the Sonoran Desert overspreads the Desert Southwest, the Great Basin and central Wyoming.
Polar deserts
Since rain falls only as liquid, it rarely falls when surface temperatures are below freezing, unless there is a layer of warm air aloft, in which case it becomes freezing rain. Because the entire atmosphere is below freezing most of the time, very cold climates see very little rainfall and are often known as polar deserts. A common biome in this area is the tundra, which has a short summer thaw and a long frozen winter. Ice caps see no rain at all, making Antarctica the world's driest continent.
Rainforests
Rainforests are areas of the world with very high rainfall. Both tropical and temperate rainforests exist. Tropical rainforests occupy a large band of the planet mostly along the equator. Most temperate rainforests are located on mountainous west coasts between 45 and 55 degrees latitude, but they are often found in other areas.
Around 40–75% of all biotic life is found in rainforests. Rainforests are also responsible for 28% of the world's oxygen turnover.
Monsoons
The equatorial region near the Intertropical Convergence Zone (ITCZ), or monsoon trough, is the wettest portion of the world's continents. Annually, the rain belt within the tropics marches northward by August, then moves back southward into the Southern Hemisphere by February and March. Within Asia, rainfall is favored across its southern portion from India east and northeast across the Philippines and southern China into Japan due to the monsoon advecting moisture primarily from the Indian Ocean into the region. The monsoon trough can reach as far north as the 40th parallel in East Asia during August before moving southward thereafter. Its poleward progression is accelerated by the onset of the summer monsoon which is characterized by the development of lower air pressure (a thermal low) over the warmest part of Asia. Similar, but weaker, monsoon circulations are present over North America and Australia.

During the summer, the Southwest monsoon combined with Gulf of California and Gulf of Mexico moisture moving around the subtropical ridge in the Atlantic Ocean bring the promise of afternoon and evening thunderstorms to the southern tier of the United States as well as the Great Plains. The eastern half of the contiguous United States east of the 98th meridian, the mountains of the Pacific Northwest, and the Sierra Nevada range are the wetter portions of the nation, with average rainfall exceeding 760 mm (30 in) per year. Tropical cyclones enhance precipitation across southern sections of the United States, as well as Puerto Rico, the United States Virgin Islands, the Northern Mariana Islands, Guam, and American Samoa.
Impact of the Westerlies
Westerly flow from the mild north Atlantic leads to wetness across western Europe, in particular Ireland and the United Kingdom, where the western coasts can receive between 1,000 mm (39 in) of rain per year at sea level and 2,500 mm (98 in) on the mountains. Bergen, Norway is one of the more famous European rain-cities with its yearly precipitation of 2,250 mm (89 in) on average. During the fall, winter, and spring, Pacific storm systems bring most of Hawaii and the western United States much of their precipitation. Over the top of the ridge, the jet stream brings a summer precipitation maximum to the Great Lakes. Large thunderstorm areas known as mesoscale convective complexes move through the Plains, Midwest, and Great Lakes during the warm season, contributing up to 10% of the annual precipitation to the region.

The El Niño-Southern Oscillation affects the precipitation distribution, by altering rainfall patterns across the western United States, Midwest, the Southeast, and throughout the tropics. There is also evidence that global warming is leading to increased precipitation to the eastern portions of North America, while droughts are becoming more frequent in the tropics and subtropics.
Wettest known locations
Cherrapunji, situated on the southern slopes of the Eastern Himalaya in Shillong, India is the confirmed wettest place on Earth, with an average annual rainfall of 11,430 mm (450 in). The highest recorded rainfall in a single year was 22,987 mm (905.0 in) in 1861. The 38-year average at nearby Mawsynram, Meghalaya, India is 11,873 mm (467.4 in). The wettest spot in Australia is Mount Bellenden Ker in the north-east of the country which records an average of 8,000 mm (310 in) per year, with over 12,200 mm (480.3 in) of rain recorded during 2000. The Big Bog on the island of Maui has the highest average annual rainfall in the Hawaiian Islands, at 10,300 mm (404 in). Mount Waiʻaleʻale on the island of Kauaʻi receives similarly torrential rains, slightly lower than the Big Bog's, at 9,500 mm (373 in) of rain per year over the last 32 years, with a record 17,340 mm (683 in) in 1982. Its summit is considered one of the rainiest spots on earth, with a reported 350 days of rain per year. Lloró, a town situated in Chocó, Colombia, is probably the place with the largest rainfall in the world, averaging 13,300 mm (523.6 in) per year. The Department of Chocó is extraordinarily humid. Tutunendaó, a small town situated in the same department, is one of the wettest estimated places on Earth, averaging 11,394 mm (448.6 in) per year; in 1974 the town received 26,303 mm (86 ft 3.6 in), the largest annual rainfall measured in Colombia. Unlike Cherrapunji, which receives most of its rainfall between April and September, Tutunendaó receives rain almost uniformly distributed throughout the year. Quibdó, the capital of Chocó, receives the most rain in the world among cities with over 100,000 inhabitants: 9,000 mm (354 in) per year. Storms in Chocó can drop 500 mm (20 in) of rainfall in a day. This amount is more than what falls in many cities in a year's time.
See also
Notes
The value given is the continent's highest, and possibly the world's, depending on measurement practices, procedures and period of record variations.
The official greatest average annual precipitation for South America is 900 cm (354 in) at Quibdó, Colombia. The 1,330 cm (523.6 in) average at Lloró [23 km (14 mi) SE and at a higher elevation than Quibdó] is an estimated amount.
Approximate elevation.
Recognized as "The Wettest place on Earth" by the Guinness Book of World Records.
This is the highest figure for which records are available. The summit of Mount Snowdon, about 500 yards (460 m) from Glaslyn, is estimated to have at least 200.0 inches (5,080 mm) per year.
References
External links
BBC article on the weekend rain effect
BBC article on rain-making
BBC article on the mathematics of running in the rain
What are clouds, and why does it rain?
sustainable development goals

The Sustainable Development Goals (SDGs) or Global Goals are a collection of seventeen interlinked objectives designed to serve as a "shared blueprint for peace and prosperity for people and the planet, now and into the future." The short titles of the 17 SDGs are: No poverty (SDG 1), Zero hunger (SDG 2), Good health and well-being (SDG 3), Quality education (SDG 4), Gender equality (SDG 5), Clean water and sanitation (SDG 6), Affordable and clean energy (SDG 7), Decent work and economic growth (SDG 8), Industry, innovation and infrastructure (SDG 9), Reduced inequalities (SDG 10), Sustainable cities and communities (SDG 11), Responsible consumption and production (SDG 12), Climate action (SDG 13), Life below water (SDG 14), Life on land (SDG 15), Peace, justice, and strong institutions (SDG 16), and Partnerships for the goals (SDG 17).
The SDGs emphasize the interconnected environmental, social and economic aspects of sustainable development by putting sustainability at their center. In 2015, the United Nations General Assembly (UNGA) created the SDGs as part of the Post-2015 Development Agenda. This agenda sought to design a new global development framework, replacing the Millennium Development Goals, which were completed that same year. These goals were formally articulated and adopted in a UNGA resolution known as the 2030 Agenda, often informally referred to as Agenda 2030. On 6 July 2017, the SDGs were made more actionable by a UNGA resolution that identifies specific targets for each goal and provides indicators to measure progress. Most targets are to be achieved by 2030, although some have no end date.

There are cross-cutting issues and synergies between the different goals; for example, for SDG 13 on climate action, the IPCC sees robust synergies with SDGs 3 (health), 7 (clean energy), 11 (cities and communities), 12 (responsible consumption and production) and 14 (oceans). On the other hand, critics and observers have also identified trade-offs between the goals, such as between ending hunger and promoting environmental sustainability. Furthermore, concerns have arisen over the high number of goals (compared to the eight Millennium Development Goals), leading to compounded trade-offs, a weak emphasis on environmental sustainability, and difficulties tracking qualitative indicators.
The SDGs are monitored by the UN (United Nations) High-Level Political Forum on Sustainable Development (HLPF), an annual forum held under the auspices of the United Nations Economic and Social Council. However, the HLPF comes with its own set of problems due to a lack of political leadership and divergent national interests. To facilitate monitoring of progress on SDG implementation, the online SDG Tracker was launched in June 2018 to present all available data across all indicators. The COVID-19 pandemic had serious negative impacts on all 17 SDGs in 2020. A scientific assessment of the political impacts of the SDGs found in 2022 that the SDGs have only had limited transformative political impact thus far. At the very least, they have affected the way actors understand and communicate about sustainable development.
Adoption
On 25 September 2015, the 193 countries of the UN General Assembly adopted the 2030 Development Agenda titled "Transforming our world: the 2030 Agenda for Sustainable Development." This agenda has 92 paragraphs. Paragraph 59 outlines the 17 Sustainable Development Goals and the associated 169 targets and 232 indicators.
The UN-led process involved its 193 Member States and global civil society. The resolution is a broad intergovernmental agreement that acts as the Post-2015 Development Agenda. The SDGs build on the principles agreed upon in Resolution A/RES/66/288, entitled "The Future We Want." This was a non-binding document released as a result of Rio+20 Conference held in 2012.
Implementation
Implementation of the SDGs started worldwide in 2016. This process can also be called Localizing the SDGs. In 2019 António Guterres (secretary-general of the United Nations) issued a global call for a Decade of Action to deliver the Sustainable Development Goals by 2030. This decade will last from 2020 to 2030. The plan is that the secretary-general of the UN will convene an annual platform for driving the Decade of Action.

There are two main types of actors for implementation of the SDGs: state and non-state actors. State actors include national governments and sub-national authorities, whereas non-state actors are corporations and civil society. Civil society participation and empowerment is important, but there are also diverse interests in this group. Building new partnerships is useful. However, the SDGs are not legally binding and are purposefully designed to provide much leeway for actors. Therefore, they can interpret the goals differently and often according to their interests.
Content of the 17 goals
Structure of goals, targets and indicators
The lists of targets and indicators for each of the 17 SDGs were published in a UN resolution in July 2017. Each goal typically has 8–12 targets, and each target has between one and four indicators used to measure progress toward reaching the targets, with an average of 1.5 indicators per target. The targets are either outcome targets (circumstances to be attained) or means of implementation targets. The latter targets were introduced late in the process of negotiating the SDGs to address the concern of some Member States about how the SDGs were to be achieved. Goal 17 is wholly about how the SDGs will be achieved.

The numbering system of targets is as follows: outcome targets use numbers, whereas means of implementation targets use lower-case letters. For example, SDG 6 has a total of 8 targets. The first six are outcome targets and are labeled Targets 6.1 to 6.6. The final two targets are means of implementation targets and are labeled as Targets 6.a and 6.b.
The United Nations Statistics Division (UNSD) website provides a current official indicator list which includes all updates until the 51st session of the Statistical Commission in March 2020. The indicators for the targets have varying levels of methodological development and availability of data at the global level. Initially, some indicators (called Tier 3 indicators) had no internationally established methodology or standards. Later, the global indicator framework was adjusted so that Tier 3 indicators were either abandoned, replaced or refined. As of 17 July 2020, there were 231 unique indicators.

Data or information must address all vulnerable groups such as children, older persons, persons with disabilities, refugees, indigenous peoples, migrants, and internally displaced persons.
Reviews of indicators
The indicator framework was comprehensively reviewed at the 51st session of the United Nations Statistical Commission in 2020. It will be reviewed again in 2025. At the 51st session of the Statistical Commission (held in New York City from 3–6 March 2020) a total of 36 changes to the global indicator framework were proposed for the commission's consideration. Some indicators were replaced, revised or deleted. Between 15 October 2018 and 17 April 2020, other changes were made to the indicators. Yet their measurement continues to be fraught with difficulties.
Listing of 17 goals with their targets and indicators
Goal 1: No poverty
SDG 1 is to: "End poverty in all its forms everywhere." Achieving SDG 1 would end extreme poverty globally by 2030. One of its indicators is the proportion of population living below the poverty line. The data gets analyzed by sex, age, employment status, and geographical location (urban/rural).
Goal 2: Zero hunger (No hunger)
SDG 2 is to: "End hunger, achieve food security and improved nutrition, and promote sustainable agriculture." Indicators for this goal are for example the prevalence of undernourishment, prevalence of severe food insecurity, and prevalence of stunting among children under five years of age.
Goal 3: Good health and well-being
SDG 3 is to: "Ensure healthy lives and promote well-being for all at all ages." Important indicators here are life expectancy as well as child and maternal mortality. Further indicators are for example deaths from road traffic injuries, prevalence of current tobacco use, suicide mortality rate.
Goal 4: Quality education
SDG 4 is to: "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all." The indicators for this goal are, for example, attendance rates at primary schools, completion rates of primary school education, participation in tertiary education, and so forth. In each case, parity indices are looked at to ensure that disadvantaged students do not miss out (data is collected on "female/male, rural/urban, bottom/top wealth quintile and others such as disability status, indigenous peoples"). There is also an indicator around the facilities that the school buildings have (access to electricity, the internet, computers, drinking water, toilets etc.).
Goal 5: Gender equality
SDG 5 is to: "Achieve gender equality and empower all women and girls." Indicators include, for example, having suitable legal frameworks and the representation by women in national parliament or in local deliberative bodies. Numbers on forced marriage and female genital mutilation/cutting (FGM/C) are also included in another indicator.
Goal 6: Clean water and sanitation
SDG 6 is to: "Ensure availability and sustainable management of water and sanitation for all." The Joint Monitoring Programme (JMP) of WHO and UNICEF is responsible for monitoring progress to achieve the first two targets of this goal. Important indicators for this goal are the percentages of the population that uses safely managed drinking water, and has access to safely managed sanitation. The JMP reported in 2017 that 4.5 billion people do not have safely managed sanitation. Another indicator looks at the proportion of domestic and industrial wastewater that is safely treated.
Goal 7: Affordable and clean energy
SDG 7 is to: "Ensure access to affordable, reliable, sustainable and modern energy for all." One of the indicators for this goal is the percentage of population with access to electricity (progress in expanding access to electricity has been made in several countries, notably India, Bangladesh, and Kenya). Other indicators look at the renewable energy share and energy efficiency.
Goal 8: Decent work and economic growth
SDG 8 is to: "Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all." Important indicators for this goal include economic growth in least developed countries and the rate of real GDP per capita. Further examples are rates of youth unemployment and occupational injuries or the number of women engaged in the labor force compared to men.
Goal 9: Industry, Innovation and Infrastructure
SDG 9 is to: "Build resilient infrastructure, promote inclusive and sustainable industrialization, and foster innovation." Indicators in this goal include for example, the proportion of people who are employed in manufacturing activities, are living in areas covered by a mobile network, or who have access to the internet. An indicator that is connected to climate change is "CO2 emissions per unit of value added."
Goal 10: Reduced inequality
SDG 10 is to: "Reduce income inequality within and among countries." Important indicators for this SDG are: income disparities, aspects of gender and disability, as well as policies for migration and mobility of people.
Goal 11: Sustainable cities and communities
SDG 11 is to: "Make cities and human settlements inclusive, safe, resilient, and sustainable." Important indicators for this goal are the number of people living in urban slums, the proportion of the urban population who has convenient access to public transport, and the extent of built-up area per person.
Goal 12: Responsible consumption and production
SDG 12 is to: "Ensure sustainable consumption and production patterns." One of the indicators is the number of national policy instruments to promote sustainable consumption and production patterns. Another one is global fossil fuel subsidies. An increase in domestic recycling and a reduced reliance on the global plastic waste trade are other actions that might help meet the goal.
Goal 13: Climate action
SDG 13 is to: "Take urgent action to combat climate change and its impacts by regulating emissions and promoting developments in renewable energy." In 2021 to early 2023, the Intergovernmental Panel on Climate Change (IPCC) published its Sixth Assessment Report which assesses scientific, technical, and socio-economic information concerning climate change.
Goal 14: Life below water
SDG 14 is to: "Conserve and sustainably use the oceans, seas and marine resources for sustainable development." The current efforts to protect oceans, marine environments and small-scale fishers are not meeting the need to protect the resources. Increased ocean temperatures and oxygen loss act concurrently with ocean acidification to constitute the deadly trio of climate change pressures on the marine environment.
Goal 15: Life on land
SDG 15 is to: "Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss." The proportion of remaining forest area, desertification and species extinction risk are example indicators of this goal.
Goal 16: Peace, justice and strong institutions
SDG 16 is to: "Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels." Rates of birth registration and prevalence of bribery are two examples of indicators included in this goal.
Goal 17: Partnership for the goals
SDG 17 is to: "Strengthen the means of implementation and revitalize the global partnership for sustainable development." Increasing international cooperation is seen as vital to achieving each of the 16 previous goals. Developing multi-stakeholder partnerships to facilitate knowledge exchange, expertise, technology, and financial resources is recognized as critical to overall success of the SDGs. The goal includes improving North–South and South–South cooperation. Public–private partnerships which involve civil societies are specifically mentioned.
Cross-cutting issues and synergies
Three aspects need to come together to achieve sustainable development: the economic, socio-political, and environmental dimensions are all vital and interdependent. Multidisciplinary and trans-disciplinary research across all three sectors is required to achieve progress. This proves difficult when major governments fail to support it.

Gender equality, education, culture and health are examples of cross-cutting issues. These are some examples of various interlinkages inherent in the SDGs.
Gender equality
The widespread consensus is that progress on all of the SDGs will be stalled if women's empowerment and gender equality are not prioritized and treated holistically. The SDGs look to policy makers as well as private sector executives and board members to work toward gender equality. Statements from diverse sources, such as the Organisation for Economic Co-operation and Development (OECD), UN Women and the World Pensions Forum, have noted that investments in women and girls have positive impacts on economies. National and global development investments in women and girls often exceed their initial scope.

Gender equality is mainstreamed throughout the SDG framework by ensuring that as much sex-disaggregated data as possible are collected.
Education and culture
Education for sustainable development (ESD) is explicitly recognized in the SDGs as part of Target 4.7 of the SDG on education. UNESCO promotes Global Citizenship Education (GCED) as a complementary approach. Education for sustainable development is important for all the other 16 SDGs.

Culture is explicitly referenced in SDG 11 Target 4 ("Strengthen efforts to protect and safeguard the world's cultural and natural heritage"). However, culture is seen as a cross-cutting theme because it impacts several SDGs. For example, culture plays a role in SDG targets where they relate to environment and resilience (within SDGs 11, 12 and 16), prosperity and livelihoods (within SDG 8), and inclusion and participation (within SDGs 11 and 16).
Health
SDGs 1 to 6 directly address health disparities, primarily in developing countries. These six goals address key issues in Global Public Health, Poverty, Hunger and Food security, Health, Education, Gender equality and women's empowerment, as well as water and sanitation. Public health officials can use these goals to set their own agenda and plan for smaller scale initiatives for their organizations.
The links between the various sustainable development goals and public health are numerous and well established:
SDG 1: Living below the poverty line is attributed to poorer health outcomes and can be even worse for persons living in developing countries where extreme poverty is more common. A child born into poverty is twice as likely to die before the age of five compared to a child from a wealthier family.
SDG 2: The detrimental effects of hunger and malnutrition that can arise from systemic challenges with food security are enormous. The World Health Organization estimates that 12.9 percent of the population in developing countries is undernourished.
SDG 4 and 5: Educational equity has yet to be reached in the world. Public health efforts are impeded by this, as a lack of education can lead to poorer health outcomes. This is shown by children of mothers who have no education having a lower survival rate compared to children born to mothers with primary or greater levels of education.
Synergies
Synergies amongst the SDGs are "the good antagonists of trade-offs." With regard to SDG 13 on climate action, the IPCC sees robust synergies particularly for SDGs 3 (health), 7 (clean energy), 11 (cities and communities), 12 (responsible consumption and production) and 14 (oceans). To meet SDG 13 and other SDGs, sustained long-term investment in green innovation is required: to decarbonize the physical capital stock (energy, industry, and transportation infrastructure) and ensure its resilience to a changing future climate; to preserve and enhance natural capital (forests, oceans, and wetlands); and to train people to work in a climate-neutral economy.
Challenges
Difficulties with tracking qualitative indicators
Regarding the targets of the SDGs, there is generally weak evidence linking the means of implementation to outcomes. The targets about means of implementation (those denoted with a letter, for example, Target 6.a) are imperfectly conceptualized and inconsistently formulated, and tracking their largely qualitative indicators will be difficult.
Trade-offs not explicitly addressed
The trade-offs among the 17 SDGs might prevent their realization. For example, these are three difficult trade-offs to consider: "How can ending hunger be reconciled with environmental sustainability? (SDG targets 2.3 and 15.2) How can economic growth be reconciled with environmental sustainability? (SDG targets 9.2 and 9.4) How can income inequality be reconciled with economic growth? (SDG targets 10.1 and 8.1)."

The SDGs do not specifically address the tensions between economic growth and environmental sustainability. Instead, they emphasize "longstanding but dubious claims about decoupling and resource efficiency as technological solutions to the environmental crisis." For example, continued global economic growth of 3 percent (SDG 8) may not be reconcilable with ecological sustainability goals, because the required rate of absolute global eco-economic decoupling is far higher than any country has achieved in the past.
Covid-19 pandemic
The COVID-19 pandemic slowed progress towards achieving the SDGs. At the UN High-level Political Forum on Sustainable Development in July 2023, speakers remarked that the pandemic, and multiple worldwide crises such as climate change, threatened decades of progress on the SDGs.
Criticism
Too many goals and overall problems
Scholars have pointed out flaws in the design of the SDGs for the following aspects: "the number of goals, the structure of the goal framework (for example, the non-hierarchical structure), the coherence between the goals, the specificity or measurability of the targets, the language used in the text, and their reliance on neoliberal economic development-oriented sustainable development as their core orientation." The SDGs may simply maintain the status quo and fall short of delivering an ambitious development agenda. The current status quo has been described as "separating human wellbeing and environmental sustainability, failing to change governance and to pay attention to trade-offs, root causes of poverty and environmental degradation, and social justice issues."

A commentary in The Economist in 2015 argued that 169 targets for the SDGs is too many, describing them as sprawling, misconceived and a mess compared to the eight Millennium Development Goals (MDGs).
Weak on environmental sustainability
Scholars have criticized the SDGs on the grounds that they "fail to recognize that planetary, people and prosperity concerns are all part of one earth system, and that the protection of planetary integrity should not be a means to an end, but an end in itself.": 147 The SDGs "remain fixated on the idea that economic growth is foundational to achieve all pillars of sustainable development.": 147 They do not prioritize environmental protection.: 144 The SDGs include three environment-focused goals, namely Goals 13, 14 and 15 (climate, oceans and land), but there is no overarching environmental or planetary goal.: 144 The SDGs do not pursue planetary integrity as such.: 144 Environmental constraints and planetary boundaries are underrepresented within the SDGs. For instance, the way the current SDGs are structured leads to a negative correlation between environmental sustainability and the SDGs, with most indicators even within the sustainability-focused goals concentrating on social or economic outcomes. They could unintentionally promote environmental destruction in the name of sustainable development. Certain studies also argue that the focus of the SDGs on neoliberal sustainable development is detrimental to planetary integrity and justice. Both of these ambitions (planetary integrity and justice) would require limits to economic growth.: 145 Scientists have proposed several ways to address the weaknesses regarding environmental sustainability in the SDGs:
The monitoring of essential variables to better capture the essence of coupled environmental and social systems that underpin sustainable development, helping to guide coordination and systems transformation.
More attention to the context of the biophysical systems in different places (e.g., coastal river deltas, mountain areas)
Better understanding of feedbacks across scales in space (e.g., through globalization) and time (e.g., affecting future generations) that could ultimately determine the success or failure of the SDGs.
Ethical aspects
There are concerns about the ethical orientation of the SDGs: they remain "underpinned by strong (Western) modernist notions of development: sovereignty of humans over their environment (anthropocentricism), individualism, competition, freedom (rights rather than duties), self-interest, belief in the market leading to collective welfare, private property (protected by legal systems), rewards based on merit, materialism, quantification of value, and instrumentalization of labor.": 146 Some studies warn that the SDGs could be used to camouflage business-as-usual behind SDG-related sustainability rhetoric. A meta-analysis review study in 2022 found that: "There is even emerging evidence that the SDGs might have even adverse effects, by providing a "smokescreen of hectic political activity" that blurs a reality of stagnation, dead ends and business-as-usual.": 220
Monitoring mechanism
UN High-Level Political Forum on Sustainable Development (HLPF)
The High-level Political Forum on Sustainable Development (HLPF) replaced the United Nations Commission on Sustainable Development in 2012.: 206 It is intended to be a "regular meeting place for governments and non-state representatives to assess global progress towards sustainable development.": 206 The meetings take place under the auspices of the United Nations Economic and Social Council. In July 2020 the meeting took place online for the first time due to the COVID-19 pandemic. The theme was "Accelerated action and transformative pathways: realizing the decade of action and delivery for sustainable development" and a ministerial declaration was adopted. High-level progress reports for all the SDGs are published in the form of reports by the United Nations Secretary-General. The most recent one is from April 2020. However, the HLPF has a range of problems. It has not been able to promote system-wide coherence. The reasons for this include its broad and unclear mandate combined with a lack of resources and divergent national interests. As a result, this reporting system is mainly a platform for voluntary reporting and peer learning among governments.
Monitoring tools and websites
The online publication SDG-Tracker was launched in June 2018 and presents data across all available indicators. It relies on the Our World in Data database and is also based at the University of Oxford. The publication has global coverage and tracks whether the world is making progress towards the SDGs. It aims to make the data on the 17 goals available and understandable to a wide audience. The SDG-Tracker highlights that the world is currently (early 2019) very far away from achieving the goals.
The Global SDG Index and Dashboards Report is the first publication to track countries' performance on all 17 Sustainable Development Goals. The annual publication, co-produced by Bertelsmann Stiftung and SDSN, includes a ranking and dashboards that show key challenges for each country in terms of implementing the SDGs. The publication also shows an analysis of government efforts to implement the SDGs.
Reporting on progress
Overall status
Reports by the United Nations (for example the UN Global Sustainable Development Report in 2019) and by other organizations that track progress on the SDGs have repeatedly pointed out that the world is unlikely to achieve most of the targets by 2030: 41 (or earlier for those targets that have an earlier target year). In other words, the world is "not on track".: 41 Of particular concern, cutting across many of the SDGs, are rising inequalities, ongoing climate change and increasing biodiversity loss.: 41 In addition, there is a trade-off between the planetary boundaries of Earth and the aspirations for wealth and well-being. This has been described as follows: "the world's social and natural biophysical systems cannot support the aspirations for universal human well-being embedded in the SDGs.": 41 Due to various economic and social issues, many countries are seeing a major decline in the progress made. In Asia, for example, data show a loss of progress on Goals 2, 8, 10, 11, and 15. Recommended approaches to still achieve the SDGs are: "Set priorities, focus on harnessing the environmental dimension of the SDGs, understand how the SDGs work as an indivisible system, and look for synergies."
Assessing the political impact of the SDGs
A scientific assessment released in 2022 analysed the political impacts of the SDGs. It reviewed over 3,000 scientific articles, mainly from the social sciences. The study looked at possible discursive, normative and institutional effects. The presence of all three types of effects throughout a political system is defined as transformative impact, which is the eventual goal of the 2030 Agenda. Discursive effects relate to changes in global and national debates that make them more aligned with the SDGs. Normative effects would be adjustments in legislative and regulatory frameworks and policies in line with, and because of, the SDGs. Institutional effects would be the creation of new departments, committees, offices or programs linked to the achievement of the SDGs, or the realignment of existing institutions. The review found that the SDGs have had only limited transformative political impact thus far. They have had mainly discursive effects. For example, the broad uptake of the principle of leaving no one behind in pronouncements by policymakers and civil society activists is a discursive effect. The SDGs have also led to some isolated normative and institutional reforms. However, there is widespread doubt that the SDGs can steer societies towards more ecological integrity at the planetary scale. This is because countries generally prioritize the more socioeconomic SDGs (e.g. SDGs 8 to 12) over the environmentally oriented ones (e.g. SDGs 13 to 15), in alignment with their long-standing national development policies.
Impacts of COVID-19 pandemic
The COVID-19 pandemic in 2020 had impacts on all 17 goals. It has become "the worst human and economic crisis in a lifetime.": 2 The pandemic threatened progress made in particular for SDG 3 (health), SDG 4 (education), SDG 6 (water and sanitation for all), SDG 10 (reduced inequality) and SDG 17 (partnerships). The International Monetary Fund (IMF) has also taken initiatives to support the SDGs by offering assistance to developing countries. For example, the IMF works to reduce poverty in low-income developing countries by offering financial support during the COVID-19 pandemic.
Uneven priorities of goals
In 2019, five progress reports on the 17 SDGs were published. Three came from the United Nations Department of Economic and Social Affairs (UNDESA), one from the Bertelsmann Foundation and one from the European Union. A review of the five reports analyzed which of the 17 goals were addressed as priorities and which ones were left behind. Commenting on the findings, the Basel Institute of Commons and Economics said that Biodiversity, Peace and Social Inclusion had been "left behind", alluding to the official SDG motto "Leaving no one behind". It has been argued that governments and businesses actively prioritize the social and economic goals over the environmental goals (such as Goals 14 and 15) in both rhetoric and practice.
Costs
Cost estimates
The United Nations estimates that for Africa, considering the continent's population growth, yearly funding of $1.3 trillion would be needed to achieve the Sustainable Development Goals there. The International Monetary Fund also estimates that $50 billion may be needed just to cover the costs of climate adaptation. Estimates for providing clean water and sanitation for the whole population of all continents have been as high as US$200 billion. The World Bank says that estimates need to be made country by country and re-evaluated frequently over time. In 2014, UNCTAD estimated the annual costs of achieving the UN goals at US$2.5 trillion per year. Another estimate from 2018 (by the Basel Institute of Commons and Economics, which conducts the World Social Capital Monitor) found that reaching all of the SDGs would require between US$2.5 trillion and US$5.0 trillion per year.
Allocation of funds
In 2017 the UN launched the Inter-agency Task Force on Financing for Development (UN IATF on FfD), which invited public dialogue. The top five sources of financing for development were estimated in 2018 to be: real new sovereign debt of OECD countries, military expenditures, official increases in sovereign debt of OECD countries, remittances from expatriates to developing countries, and official development assistance (ODA). The Rockefeller Foundation asserted in 2017 that "The key to financing and achieving the SDGs lies in mobilizing a greater share of the $200+ trillion in annual private capital investment flows toward development efforts, and philanthropy has a critical role to play in catalyzing this shift." Large-scale funders participating in a Rockefeller Foundation-hosted design thinking workshop concluded that "while there is a moral imperative to achieve the SDGs, failure is inevitable if there aren't drastic changes to how we go about financing large scale change." A meta-analysis published in 2022 found scant evidence that governments have substantially reallocated funding to implement the SDGs, either for national implementation or for international cooperation. The SDGs do not seem to have changed public budgets and financial allocation mechanisms in any important way, except in some local governance contexts. National budgets cannot easily be reallocated.: 81
SDG-driven investment
Capital stewardship is expected to play a crucial part in the progressive advancement of the SDG agenda to "shift the economic system towards sustainable investment by using the SDG framework across all asset classes." The notion of SDG-driven investment gained further ground amongst institutional investors in 2019. In 2017, 2018 and early 2019, the World Pensions Council (WPC) held a series of ESG-focused (environmental, social and governance) discussions with pension board members (trustees) and senior investment executives from across G20 nations. Many pension investment executives and board members confirmed they were in the process of adopting or developing SDG-informed investment processes, with more ambitious investment governance requirements, notably when it comes to climate action, gender equality and social fairness. Some studies, however, warn of selective implementation of SDGs and political risks linked to private investments in the context of a continued shortage of public funding.
Communication and advocacy
The 2030 Agenda did not create a specific authority for communicating the SDGs; however, both international and local advocacy organizations have pursued significant non-state resources to communicate the SDGs. UN agencies which are part of the United Nations Development Group decided to support an independent campaign to communicate the new SDGs to a wider audience. This campaign, Project Everyone, had the support of corporate institutions and other international organizations. Using the text drafted by diplomats at the UN level, a team of communication specialists developed icons for every goal. They also shortened the title "The 17 Sustainable Development Goals" to "Global Goals", then ran workshops and conferences to communicate the Global Goals to a global audience. The Aarhus Convention is a United Nations convention that entered into force in 2001, explicitly to encourage and promote effective public engagement in environmental decision making. Information transparency related to social media and the engagement of youth are two issues related to the Sustainable Development Goals that the convention has addressed.
Advocates
In 2019 and then in 2021, United Nations Secretary-General António Guterres appointed 17 SDG advocates. The role of the public figures is to raise awareness, inspire greater ambition, and push for faster action on the SDGs. The co-chairs are: Mia Mottley, Prime Minister of Barbados and Justin Trudeau, Prime Minister of Canada.
Global events
Global Goals Week is an annual week-long event in September for action, awareness, and accountability for the Sustainable Development Goals. It is a shared commitment of over 100 partners to ensure quick action on the SDGs by sharing ideas and transformative solutions to global problems. It first took place in 2016 and is often held concurrently with Climate Week NYC. The Arctic Film Festival is an annual film festival organized by HF Productions and supported by the SDGs' Partnership Platform. Held for the first time in 2019, the festival is expected to take place every year in September in Longyearbyen, Svalbard, Norway.
History
The Post-2015 Development Agenda was a process from 2012 to 2015 led by the United Nations to define the future global development framework that would succeed the Millennium Development Goals. The SDGs were developed to succeed the Millennium Development Goals (MDGs) which ended in 2015.
In 1983, the United Nations created the World Commission on Environment and Development (later known as the Brundtland Commission), which defined sustainable development as "meeting the needs of the present without compromising the ability of future generations to meet their own needs." In 1992, the first United Nations Conference on Environment and Development (UNCED) or Earth Summit was held in Rio de Janeiro, where the first agenda for Environment and Development, also known as Agenda 21, was developed and adopted.
In 2012, the United Nations Conference on Sustainable Development (UNCSD), also known as Rio+20, was held as a 20-year follow up to UNCED. Colombia proposed the idea of the SDGs at a preparation event for Rio+20 held in Indonesia in July 2011. In September 2011, this idea was picked up by the United Nations Department of Public Information 64th NGO Conference in Bonn, Germany. The outcome document proposed 17 sustainable development goals and associated targets. In the run-up to Rio+20 there was much discussion about the idea of the SDGs. At the Rio+20 Conference, a resolution known as "The Future We Want" was reached by member states. Among the key themes agreed on were poverty eradication, energy, water and sanitation, health, and human settlement.
In January 2013, the 30-member UN General Assembly Open Working Group (OWG) on Sustainable Development Goals was established to identify specific goals for the SDGs. The OWG submitted its proposal of 17 SDGs and 169 targets to the 68th session of the General Assembly in September 2014. On 5 December 2014, the UN General Assembly accepted the Secretary-General's Synthesis Report, which stated that the agenda for the post-2015 SDG process would be based on the OWG proposals.
Country examples
Asia and Pacific
Australia
Africa
The United Nations Development Programme (UNDP) has collected information to show how awareness about the SDGs among government officers, civil society and others has been created in many African countries.
Nigeria
Europe and Middle East
Baltic nations, via the Council of the Baltic Sea States, have created the Baltic 2030 Action Plan.
Lebanon
Syria
Higher education in Syria has taken early steps toward sustainable development through Damascus University, with environmental measures relating to the Barada River, green space and health.
United Kingdom
The UK's approach to delivering the Global SDGs is outlined in Agenda 2030: Delivering the Global Goals, developed by the Department for International Development. In 2019, the Bond network analyzed the UK's global progress on the Sustainable Development Goals (SDGs). The Bond report highlights crucial gaps where attention and investment are most needed. The report was compiled by 49 organizations and 14 networks and working groups.
See also
Sustainability
SDG Publishers Compact
References
External links
UN Sustainable Development Knowledge Platform – The SDGs
"Global Goals" Campaign Campaign on the SDGs published by Project Everyone
Global SDG Indicators Database of the United Nations
SDG-Tracker.org – Visualized tracking of progress towards the SDGs
SDG Pathfinder – Explore content on SDGs from six international organizations (powered by the OECD)
Earth's energy budget
Earth's energy budget (or Earth's energy balance) accounts for the balance between the energy that Earth receives from the Sun and the energy the Earth loses back into outer space. Smaller energy sources, such as Earth's internal heat, are taken into consideration, but make a tiny contribution compared to solar energy. The energy budget also accounts for how energy moves through the climate system.: 2227 Because the Sun heats the equatorial tropics more than the polar regions, received solar irradiance is unevenly distributed. As the energy seeks equilibrium across the planet, it drives interactions in Earth's climate system, i.e., Earth's water, ice, atmosphere, rocky crust, and all living things.: 2224 The result is Earth's climate.
Earth's energy budget depends on many factors, such as atmospheric aerosols, greenhouse gases, the planet's surface albedo (reflectivity), clouds, vegetation, land use patterns, and more. When the incoming and outgoing energy fluxes are in balance, Earth is in radiative equilibrium and the climate system will be relatively stable. Global warming occurs when Earth receives more energy than it gives back to space, and global cooling takes place when the outgoing energy is greater. Multiple types of measurements and observations show a warming imbalance since at least 1970. The rate of heating from this human-caused imbalance is without precedent.: 54 The main origin of changes in the Earth's energy is from human-induced changes in the composition of the atmosphere. From 2005 to 2019 the Earth's energy imbalance (EEI) averaged about 460 TW, or globally 0.90 ± 0.15 W per m2. When the energy budget changes, there is a delay before average global surface temperature changes significantly. This is due to the thermal inertia of the oceans, land and cryosphere. Accurate quantification of these energy flows and storage amounts is a requirement within most climate models.
Definition
Earth’s energy budget includes the "major energy flows of relevance for the climate system". These are "the top-of-atmosphere energy budget; the surface energy budget; changes in the global energy inventory and internal flows of energy within the climate system".: 2227
Earth's energy flows
In spite of the enormous transfers of energy into and from the Earth, it maintains a relatively constant temperature because, as a whole, there is little net gain or loss: Earth emits via atmospheric and terrestrial radiation (shifted to longer electromagnetic wavelengths) to space about the same amount of energy as it receives via solar insolation (all forms of electromagnetic radiation).
The main origin of changes in the Earth's energy is from human-induced changes in the composition of the atmosphere, amounting to about 460 TW or globally 0.90 ± 0.15 W per m2.
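As a quick sanity check on these figures, the quoted global imbalance of about 460 TW can be converted into a per-area flux by dividing by Earth's surface area. The surface-area value (~5.1 × 10^14 m2) is a standard figure assumed here rather than stated in the text:

```python
# Hedged sketch: convert the quoted ~460 TW global energy imbalance into a
# per-square-meter flux. Earth's surface area (~5.1e14 m^2) is an assumed
# standard value, not given in the text.
EARTH_SURFACE_AREA_M2 = 5.1e14  # assumed standard value
imbalance_w = 460e12            # 460 TW, as quoted

flux_w_per_m2 = imbalance_w / EARTH_SURFACE_AREA_M2
print(round(flux_w_per_m2, 2))  # ≈ 0.9, matching the quoted 0.90 W/m^2
```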
Incoming solar energy (shortwave radiation)
The total amount of energy received per second at the top of Earth's atmosphere (TOA) is measured in watts and is given by the solar constant times the cross-sectional area of Earth that faces the incoming radiation. Because the surface area of a sphere is four times its cross-sectional area (i.e. the area of a circle), the globally and yearly averaged TOA flux is one quarter of the solar constant, approximately 340 watts per square meter (W/m2). Since absorption varies with location as well as with diurnal, seasonal and annual variations, the numbers quoted are multi-year averages obtained from multiple satellite measurements. Of the ~340 W/m2 of solar radiation received by the Earth, an average of ~77 W/m2 is reflected back to space by clouds and the atmosphere and ~23 W/m2 is reflected by the surface albedo, leaving ~240 W/m2 of solar energy input to the Earth's energy budget. This amount is called the absorbed solar radiation (ASR). It implies a value of about 0.3 for the mean net albedo of Earth, also called its Bond albedo (A):
{\displaystyle ASR=(1-A)\times 340~\mathrm {W} ~\mathrm {m} ^{-2}\simeq 240~\mathrm {W} ~\mathrm {m} ^{-2}.}
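Plugging the quoted Bond albedo of about 0.3 into this formula reproduces the ~240 W/m2 figure; a minimal numeric check:

```python
# Absorbed solar radiation from the mean TOA flux and Bond albedo quoted above.
TOA_FLUX = 340.0    # globally averaged top-of-atmosphere flux, W/m^2
BOND_ALBEDO = 0.3   # Earth's mean net albedo, as quoted

asr = (1.0 - BOND_ALBEDO) * TOA_FLUX
print(round(asr, 1))  # 238.0, i.e. roughly the ~240 W/m^2 quoted
```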
Outgoing longwave radiation
Thermal energy leaves the planet in the form of outgoing longwave radiation (OLR). Longwave radiation is electromagnetic thermal radiation emitted by Earth's surface and atmosphere. Longwave radiation is in the infrared band, but the terms are not synonymous, as infrared radiation can be either shortwave or longwave. Sunlight contains significant amounts of shortwave infrared radiation. A threshold wavelength of 4 microns is sometimes used to distinguish longwave from shortwave radiation.
Generally, absorbed solar energy is converted to different forms of heat energy. Some of the solar energy absorbed by the surface is converted to thermal radiation at wavelengths in the "atmospheric window"; this radiation is able to pass through the atmosphere unimpeded and directly escape to space, contributing to OLR. The remainder of absorbed solar energy is transported upwards through the atmosphere through a variety of heat transfer mechanisms, until the atmosphere emits that energy as thermal energy which is able to escape to space, again contributing to OLR. For example, heat is transported into the atmosphere via evapotranspiration and latent heat fluxes or conduction/convection processes, as well as via radiative heat transport. Ultimately, all outgoing energy is radiated into space in the form of longwave radiation.
The transport of longwave radiation from Earth's surface through its multi-layered atmosphere is governed by radiative transfer equations such as Schwarzschild's equation for radiative transfer (or more complex equations if scattering is present) and obeys Kirchhoff's law of thermal radiation.
A one-layer model produces an approximate description of OLR which yields temperatures at the surface (Ts=288 Kelvin) and at the middle of the troposphere (Ta=242 Kelvin) that are close to observed average values:
{\displaystyle OLR\simeq \epsilon \sigma T_{a}^{4}+(1-\epsilon )\sigma T_{s}^{4}.}
In this expression σ is the Stefan-Boltzmann constant and ε represents the emissivity of the atmosphere, which is less than 1 because the atmosphere does not emit within the wavelength range known as the atmospheric window.
Aerosols, clouds, water vapor, and trace greenhouse gases contribute to an effective value of about ε=0.78. The strong (fourth-power) temperature sensitivity maintains a near-balance of the outgoing energy flow to the incoming flow via small changes in the planet's absolute temperatures.
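Evaluating the one-layer model with the values quoted above (Ts = 288 K, Ta = 242 K, ε ≈ 0.78) yields an OLR close to the ~240 W/m2 of absorbed solar radiation. A minimal numeric check, with the Stefan-Boltzmann constant supplied here as an assumed standard value:

```python
# One-layer OLR model: OLR = ε σ Ta^4 + (1 - ε) σ Ts^4, using the
# temperatures and emissivity quoted in the text. The Stefan-Boltzmann
# constant is an assumed standard value, not given in the text.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4 (assumed)
T_SURFACE = 288.0  # K, quoted surface temperature
T_ATMOS = 242.0    # K, quoted mid-troposphere temperature
EMISSIVITY = 0.78  # effective atmospheric emissivity, quoted

olr = EMISSIVITY * SIGMA * T_ATMOS**4 + (1 - EMISSIVITY) * SIGMA * T_SURFACE**4
print(round(olr, 1))  # ≈ 237.5, close to the ~240 W/m^2 absorbed from the Sun
```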
Role of the greenhouse effect
As viewed from Earth's surrounding space, greenhouse gases influence the planet's atmospheric emissivity (ε). Changes in atmospheric composition can thus shift the overall radiation balance. For example, an increase in heat trapping by a growing concentration of greenhouse gases (i.e. an enhanced greenhouse effect) forces a decrease in OLR and a warming (restorative) energy imbalance. Ultimately when the amount of greenhouse gases increases or decreases, in-situ surface temperatures rise or fall until the ASR = OLR balance is again achieved.
Earth's internal heat sources and other small effects
The geothermal heat flow from the Earth's interior is estimated to be 47 terawatts (TW), split approximately equally between radiogenic heat and heat left over from the Earth's formation. This corresponds to an average flux of 0.087 W/m2 and represents only 0.027% of Earth's total energy budget at the surface, being dwarfed by the 173,000 TW of incoming solar radiation. Human production of energy is even lower, at an estimated 160,000 TW-hr for all of year 2019. This corresponds to an average continuous heat flow of about 18 TW. However, consumption is growing rapidly, and energy production with fossil fuels also increases atmospheric greenhouse gases, leading to a more than 20 times larger imbalance in the incoming/outgoing flows that originate from solar radiation. Photosynthesis also has a significant effect: an estimated 140 TW (or around 0.08%) of incident energy is captured by photosynthesis, giving energy to plants to produce biomass. A similar flow of thermal energy is released over the course of a year when plants are used as food or fuel.
Other minor sources of energy are usually ignored in the calculations, including accretion of interplanetary dust and solar wind, light from stars other than the Sun and the thermal radiation from space. Earlier, Joseph Fourier had claimed that deep space radiation was significant in a paper often cited as the first on the greenhouse effect.
Budget analysis
In simplest terms, Earth's energy budget is balanced when the incoming flow equals the outgoing flow. Since a portion of incoming energy is directly reflected, the balance can also be stated as absorbed incoming solar (shortwave) radiation equal to outgoing longwave radiation:
{\displaystyle ASR=OLR.}
Internal flow analysis
To describe some of the internal flows within the budget, let the insolation received at the top of the atmosphere be 100 units (=340 W/m2), as shown in the accompanying Sankey diagram. Around 35 units in this example are directly reflected back to space, constituting the albedo of Earth: 27 from the top of clouds, 2 from snow- and ice-covered areas, and 6 by other parts of the atmosphere. The 65 remaining units (ASR=220 W/m2) are absorbed: 14 within the atmosphere and 51 by the Earth's surface.
The 51 units reaching and absorbed by the surface are emitted back to space through various forms of terrestrial energy: 17 directly radiated to space and 34 absorbed by the atmosphere (19 through latent heat of vaporisation, 9 via convection and turbulence, and 6 as absorbed infrared by greenhouse gases). The 48 units absorbed by the atmosphere (34 units from terrestrial energy and 14 from insolation) are then finally radiated back to space. This simplified example neglects some details of mechanisms that recirculate, store, and thus lead to further buildup of heat near the surface.
Ultimately the 65 units (17 from the ground and 48 from the atmosphere) are emitted as OLR. They approximately balance the 65 units (ASR) absorbed from the sun in order to maintain a net-zero gain of energy by Earth.
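The unit bookkeeping in this example can be verified line by line; a small sketch of the balances described above (1 unit = 3.4 W/m2):

```python
# Check the 100-unit internal-flow bookkeeping described in the example above.
reflected = 27 + 2 + 6              # cloud tops, snow/ice, rest of atmosphere
absorbed_atmosphere = 14            # insolation absorbed within the atmosphere
absorbed_surface = 51               # insolation absorbed at the surface

surface_direct_to_space = 17        # radiated through the atmospheric window
surface_to_atmosphere = 19 + 9 + 6  # latent heat, convection, absorbed infrared

assert reflected == 35                                          # Earth's albedo
assert reflected + absorbed_atmosphere + absorbed_surface == 100
assert surface_direct_to_space + surface_to_atmosphere == absorbed_surface

atmosphere_emitted = absorbed_atmosphere + surface_to_atmosphere  # 48 units
olr_units = surface_direct_to_space + atmosphere_emitted
assert olr_units == 65  # balances the 65 absorbed units (ASR)
print(olr_units)  # 65
```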
Heat storage reservoirs
Land, ice, and oceans are active material constituents of Earth's climate system along with the atmosphere. They have far greater mass and heat capacity, and thus much more thermal inertia. When radiation is directly absorbed or the surface temperature changes, thermal energy will flow as sensible heat either into or out of the bulk mass of these components via conduction/convection heat transfer processes. The transformation of water between its solid/liquid/vapor states also acts as a source or sink of potential energy in the form of latent heat. These processes buffer the surface conditions against some of the rapid radiative changes in the atmosphere. As a result, the daytime versus nighttime difference in surface temperatures is relatively small. Likewise, Earth's climate system as a whole shows a slow response to shifts in the atmospheric radiation balance. The top few meters of Earth's oceans harbor more thermal energy than its entire atmosphere. Like atmospheric gases, fluidic ocean waters transport vast amounts of such energy over the planet's surface. Sensible heat also moves into and out of great depths under conditions that favor downwelling or upwelling. Over 90 percent of the extra energy that has accumulated on Earth from ongoing global warming since 1970 has been stored in the ocean. About one-third has propagated to depths below 700 meters. The overall rate of growth has also risen during recent decades, reaching close to 500 TW (1 W/m2) as of 2020. That led to about 14 zettajoules (ZJ) of heat gain for the year, exceeding the 570 exajoules (=160,000 TW-hr) of total primary energy consumed by humans by a factor of at least 20.
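The closing comparison can be checked with simple arithmetic: about 14 ZJ of annual ocean heat gain against 570 EJ of human primary energy use:

```python
# Ratio of annual ocean heat gain to human primary energy consumption,
# using the figures quoted above.
ocean_heat_gain_j = 14e21  # ~14 zettajoules for the year
human_energy_j = 570e18    # 570 exajoules (= 160,000 TW-hr)

ratio = ocean_heat_gain_j / human_energy_j
print(round(ratio, 1))  # ≈ 24.6, consistent with "a factor of at least 20"
```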
Heating/cooling rate analysis
Generally speaking, changes to Earth's energy flux balance can be thought of as being the result of external forcings (both natural and anthropogenic, radiative and non-radiative), system feedbacks, and internal system variability. Such changes are primarily expressed as observable shifts in temperature (T), clouds (C), water vapor (W), aerosols (A), trace greenhouse gases (G), land/ocean/ice surface reflectance (S), and as minor shifts in insolation (I), among other possible factors. Earth's heating/cooling rate can then be analyzed over selected timeframes (Δt) as the net change in energy (ΔE) associated with these attributes:
{\displaystyle {\begin{aligned}\Delta E/\Delta t&=(\ \Delta E_{T}+\Delta E_{C}+\Delta E_{W}+\Delta E_{A}+\Delta E_{G}+\Delta E_{S}+\Delta E_{I}+...\ )/\Delta t\\\\&=ASR-OLR.\end{aligned}}}
Here the term ΔET, corresponding to the Planck response, is negative-valued when temperature rises due to its strong direct influence on OLR. The recent increase in trace greenhouse gases produces an enhanced greenhouse effect, and thus a positive ΔEG forcing term. By contrast, a large volcanic eruption (e.g. Mount Pinatubo 1991, El Chichón 1982) can inject sulfur-containing compounds into the upper atmosphere. High concentrations of stratospheric sulfur aerosols may persist for up to a few years, yielding a negative forcing contribution to ΔEA. Various other types of anthropogenic aerosol emissions make both positive and negative contributions to ΔEA. Solar cycles produce ΔEI smaller in magnitude than those of recent ΔEG trends from human activity. Climate forcings are complex since they can produce direct and indirect feedbacks that intensify (positive feedback) or weaken (negative feedback) the original forcing. These often follow the temperature response. Water vapor trends as a positive feedback with respect to temperature changes, due to evaporation shifts and the Clausius-Clapeyron relation. An increase in water vapor results in positive ΔEW due to further enhancement of the greenhouse effect. A slower positive feedback is the ice-albedo feedback. For example, the loss of Arctic ice due to rising temperatures makes the region less reflective, leading to greater absorption of energy and even faster ice melt rates, thus a positive influence on ΔES. Collectively, feedbacks tend to amplify global warming or cooling.: 94 Clouds are responsible for about half of Earth's albedo and are powerful expressions of internal variability of the climate system. They may also act as feedbacks to forcings, and could be forcings themselves if, for example, they result from cloud seeding activity. Contributions to ΔEC vary regionally and depend upon cloud type.
Measurements from satellites are gathered in concert with simulations from models in an effort to improve understanding and reduce uncertainty.
Earth's energy imbalance (EEI)
The Earth's energy imbalance (EEI) is defined as "the persistent and positive (downward) net top of atmosphere energy flux associated with greenhouse gas forcing of the climate system". If Earth's incoming energy flux is larger or smaller than the outgoing energy flux, then the planet will gain (warm) or lose (cool) net heat energy in accordance with the law of energy conservation:
EEI ≡ ASR − OLR
When Earth's energy imbalance (EEI) shifts by a sufficiently large amount, the shift is measurable by orbiting satellite-based radiometric instruments. Imbalances that fail to reverse over time will also drive long-term temperature changes in the atmospheric, oceanic, land, and ice components of the climate system. In situ temperature changes and related effects thus also provide measures of EEI. From mid-2005 to mid-2019, satellite and ocean temperature observations have each independently shown an approximate doubling of the (global) warming imbalance in Earth's energy budget.

The biggest changes in EEI arise from changes in the composition of the atmosphere through human activities, thereby interfering with the natural flow of energy through the climate system. The main changes are from increases in carbon dioxide and other greenhouse gases, which produce heating (EEI increasing), and from pollution. The latter refers to atmospheric aerosols of various kinds, some of which absorb energy while others reflect energy and produce cooling.
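The defining relation EEI ≡ ASR − OLR is simple enough to state directly in code. The flux values below are roughly representative global means chosen for illustration, not measured data.

```python
def eei(asr, olr):
    """Earth's energy imbalance (W/m^2): positive means net heat gain.
    Implements the definition EEI = ASR - OLR from the text."""
    return asr - olr

# Roughly representative global-mean fluxes (W/m^2); illustrative, not data.
asr = 240.5   # absorbed solar radiation
olr = 239.5   # outgoing longwave radiation

imbalance = eei(asr, olr)
print(f"EEI = {imbalance:+.1f} W/m^2")  # positive -> the planet is warming
```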
It is not (yet) possible to measure the absolute magnitude of EEI directly at top of atmosphere, although changes over time as observed by satellite-based instruments are thought to be accurate. The only practical way to estimate the absolute magnitude of EEI is through an inventory of the changes in energy in the climate system. The biggest of these energy reservoirs is the ocean.
Measurements at top of atmosphere (TOA)
Several satellites measure the energy absorbed and radiated by Earth, and thus by inference the energy imbalance. These measurements are taken at top of atmosphere (TOA) and provide data covering the globe. The NASA Earth Radiation Budget Experiment (ERBE) project involved three such satellites: the Earth Radiation Budget Satellite (ERBS), launched October 1984; NOAA-9, launched December 1984; and NOAA-10, launched September 1986.

NASA's Clouds and the Earth's Radiant Energy System (CERES) instruments have been part of its Earth Observing System (EOS) since March 2000. CERES is designed to measure both solar-reflected (short wavelength) and Earth-emitted (long wavelength) radiation. The CERES data showed increases in EEI from +0.42±0.48 W/m2 in 2005 to +1.12±0.48 W/m2 in 2019. Contributing factors included more water vapor, fewer clouds, increasing greenhouse gases, and declining ice, which were partially offset by rising temperatures. Subsequent investigation of the behavior using the GFDL CM4/AM4 climate model concluded there was a less than 1% chance that internal climate variability alone caused the trend.
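The CERES endpoint values quoted above imply a rough linear rate of change, which a few lines of arithmetic can check. The straight-line fit is an assumption for illustration, not the published trend analysis.

```python
# Linear trend implied by the CERES endpoint figures quoted in the text.
# A straight-line fit between two points is an illustrative assumption,
# not the published statistical analysis of the full CERES record.
eei_2005, eei_2019 = 0.42, 1.12    # W/m^2 (values from the text)
years = 2019 - 2005

trend = (eei_2019 - eei_2005) / years
print(f"trend = {trend:.3f} W/m^2 per year "
      f"(= {10 * trend:.2f} W/m^2 per decade)")
```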
Other researchers have used data from CERES, AIRS, CloudSat, and other EOS instruments to look for trends of radiative forcing embedded within the EEI data. Their analysis showed a forcing rise of +0.53±0.11 W/m2 from years 2003 to 2018. About 80% of the increase was associated with the rising concentration of greenhouse gases, which reduced the outgoing longwave radiation.

Further satellite measurements, including TRMM and CALIPSO data, have indicated additional precipitation, which is sustained by increased energy leaving the surface through evaporation (the latent heat flux), offsetting some of the increase in the longwave greenhouse flux to the surface.

Radiometric calibration uncertainties limit the capability of the current generation of satellite-based instruments, which are otherwise stable and precise. As a result, relative changes in EEI are quantifiable with an accuracy that is not also achievable for any single measurement of the absolute imbalance.
In situ measurements
Global surface temperature (GST): GST is calculated by averaging temperatures measured at the surface of the sea along with air temperatures measured over land. Reliable data extending to at least 1880 shows that GST has undergone a steady increase of about 0.18°C per decade since about year 1970.

Ocean heat content (OHC): Ocean waters are especially effective absorbents of solar energy and have a far greater total heat capacity than the atmosphere. Research vessels and stations have sampled sea temperatures at depth and around the globe since before 1960. Additionally, after the year 2000, an expanding network of over 3000 Argo robotic floats has measured the temperature anomaly, or equivalently the heat content change (ΔOHC). Since at least 1990, OHC has increased at a steady or accelerating rate. These changes provide the most robust indirect measure of EEI, since the oceans take up over 90% of the excess heat:
EEI ≳ ΔOHC/Δt
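Converting an ocean heat content change into a global-mean flux is a matter of dividing by the elapsed time and Earth's surface area, then scaling for the heat that went into non-ocean reservoirs. A sketch under those assumptions (the example ΔOHC value is illustrative, not an observation):

```python
EARTH_SURFACE_AREA = 5.10e14      # m^2, approximate
SECONDS_PER_YEAR = 3.156e7
OCEAN_HEAT_FRACTION = 0.90        # oceans take up over 90% of excess heat (text)

def eei_from_ohc(delta_ohc_joules, delta_t_years,
                 ocean_fraction=OCEAN_HEAT_FRACTION):
    """Estimate global-mean EEI (W/m^2) from an ocean heat content change,
    scaling the ocean flux up to account for the non-ocean reservoirs."""
    ocean_flux = delta_ohc_joules / (
        delta_t_years * SECONDS_PER_YEAR * EARTH_SURFACE_AREA)
    return ocean_flux / ocean_fraction

# Illustrative: a 10 ZJ (1e22 J) ocean heat gain over one year.
print(f"EEI = {eei_from_ohc(1.0e22, 1.0):.2f} W/m^2")
```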
Global ice loss: The extent of floating and grounded ice is measured by satellites, while the change in mass is then inferred from measured changes in sea level in concert with computational models that account for thermal expansion and other factors. Observations since 1994 show that ice has retreated from every part of Earth at an accelerating rate.
Importance as a climate change metric
Climate scientists Kevin Trenberth, James Hansen, and colleagues have identified the monitoring of Earth's energy imbalance as an important metric to help policymakers guide the pace of mitigation and adaptation measures. Because of climate system inertia, longer-term EEI trends can forecast further changes that are "in the pipeline".

Scientists consider EEI the most important metric related to climate change: it is the net result of all the processes and feedbacks in play in the climate system. Knowing how much extra energy affects weather systems and rainfall is vital to understanding the increase in weather extremes.

In 2012, NASA scientists reported that to stop global warming, atmospheric CO2 concentration would have to be reduced to 350 ppm or less, assuming all other climate forcings were fixed. As of 2020, atmospheric CO2 reached 415 ppm, and all long-lived greenhouse gases exceeded a 500 ppm CO2-equivalent concentration due to continued growth in human emissions.
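The forcing gap between the 415 ppm reached in 2020 and the 350 ppm target can be illustrated with the commonly used simplified expression ΔF = 5.35 ln(C/C₀) (Myhre et al. 1998). Both the formula and the preindustrial baseline are assumptions brought in for illustration; they are not stated in the text.

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Simplified CO2 radiative forcing (W/m^2), dF = 5.35 ln(C/C0).
    Uses the Myhre et al. (1998) approximation with an assumed
    preindustrial baseline of 278 ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Forcing difference between 415 ppm (2020, from the text) and the
# 350 ppm target quoted above:
extra = co2_forcing(415.0) - co2_forcing(350.0)
print(f"additional forcing from 350 -> 415 ppm = {extra:.2f} W/m^2")
```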
See also
Lorenz energy cycle
Planetary equilibrium temperature
Climate sensitivity
Tipping points in the climate system
Climate change portal
References
External links
NASA: The Atmosphere's Energy Budget
Clouds and Earth's Radiant Energy System (CERES)
NASA/GEWEX Surface Radiation Budget (SRB) Project
Transport

Transport (in British English) or transportation (in American English) is the intentional movement of humans, animals, and goods from one location to another. Modes of transport include air, land (rail and road), water, cable, pipelines, and space. The field can be divided into infrastructure, vehicles, and operations. Transport enables human trade, which is essential for the development of civilizations.
Transport infrastructure consists of both fixed installations, including roads, railways, airways, waterways, canals, and pipelines, and terminals such as airports, railway stations, bus stations, warehouses, trucking terminals, refueling depots (including fuel docks and fuel stations), and seaports. Terminals may be used both for the interchange of passengers and cargo and for maintenance.
Means of transport are any of the different kinds of transport facilities used to carry people or cargo. They may include vehicles, riding animals, and pack animals. Vehicles may include wagons, automobiles, bicycles, buses, trains, trucks, helicopters, watercraft, spacecraft, and aircraft.
Modes
A mode of transport is a solution that makes use of a certain type of vehicle, infrastructure, and operation. The transport of a person or of cargo may involve one mode or several of the modes, with the latter case being called inter-modal or multi-modal transport. Each mode has its own advantages and disadvantages, and will be chosen on the basis of cost, capability, and route.
Governments deal with the way the vehicles are operated, and the procedures set for this purpose, including financing, legalities, and policies. In the transport industry, operations and ownership of infrastructure can be either public or private, depending on the country and mode.
Passenger transport may be public, where operators provide scheduled services, or private. Freight transport has become focused on containerization, although bulk transport is used for large volumes of durable items. Transport plays an important part in economic growth and globalization, but most types cause air pollution and use large amounts of land. While it is heavily subsidized by governments, good planning of transport is essential to make traffic flow and restrain urban sprawl.
Human-powered
Human-powered transport, a form of sustainable transport, is the transport of people or goods using human muscle-power, in the form of walking, running, and swimming. Modern technology has allowed machines to enhance human power. Human-powered transport remains popular for reasons of cost-saving, leisure, physical exercise, and environmentalism; it is sometimes the only type available, especially in underdeveloped or inaccessible regions.
Although humans are able to walk without infrastructure, the transport can be enhanced through the use of roads, especially when using the human power with vehicles, such as bicycles and inline skates. Human-powered vehicles have also been developed for difficult environments, such as snow and water, by watercraft rowing and skiing; even the air can be entered with human-powered aircraft.
Animal-powered
Animal-powered transport is the use of working animals for the movement of people and commodities. Humans may ride some of the animals directly, use them as pack animals for carrying goods, or harness them, alone or in teams, to pull sleds or wheeled vehicles.
Air
A fixed-wing aircraft, commonly called an airplane, is a heavier-than-air craft where movement of the air in relation to the wings is used to generate lift. The term is used to distinguish this from rotary-wing aircraft, where the movement of the lift surfaces relative to the air generates lift. A gyroplane is both fixed-wing and rotary wing. Fixed-wing aircraft range from small trainers and recreational aircraft to large airliners and military cargo aircraft.
Two things necessary for aircraft are air flow over the wings for lift and an area for landing. The majority of aircraft also need an airport with the infrastructure for maintenance, restocking, and refueling and for the loading and unloading of crew, cargo, and passengers. While the vast majority of aircraft land and take off on land, some are capable of take-off and landing on ice, snow, and calm water.
The aircraft is the second fastest method of transport, after the rocket. Commercial jets can reach up to 955 kilometres per hour (593 mph), and single-engine aircraft 555 kilometres per hour (345 mph). Aviation is able to quickly transport people and limited amounts of cargo over longer distances, but incurs high costs and energy use; for short distances or in inaccessible places, helicopters can be used. An April 2009 Guardian article notes that "the WHO estimates that up to 500,000 people are on planes at any time."
Land
Land transport covers all land-based transport systems that provide for the movement of people, goods, and services. Land transport plays a vital role in linking communities to each other. Land transport is a key factor in urban planning. It consists of two kinds, rail and road.
Rail
Rail transport is where a train runs along a set of two parallel steel rails, known as a railway or railroad. The rails are anchored perpendicular to ties (or sleepers) of timber, concrete, or steel, to maintain a consistent distance apart, or gauge. The rails and perpendicular beams are placed on a foundation made of concrete or compressed earth and gravel in a bed of ballast. Alternative methods include monorail and maglev.
A train consists of one or more connected vehicles that operate on the rails. Propulsion is commonly provided by a locomotive, that hauls a series of unpowered cars, that can carry passengers or freight. The locomotive can be powered by steam, by diesel, or by electricity supplied by trackside systems. Alternatively, some or all the cars can be powered, known as a multiple unit. Also, a train can be powered by horses, cables, gravity, pneumatics, and gas turbines. Railed vehicles move with much less friction than rubber tires on paved roads, making trains more energy efficient, though not as efficient as ships.
Intercity trains are long-haul services connecting cities; modern high-speed rail is capable of speeds up to 350 km/h (220 mph), but this requires specially built track. Regional and commuter trains feed cities from suburbs and surrounding areas, while intra-urban transport is performed by high-capacity tramways and rapid transits, often making up the backbone of a city's public transport. Freight trains traditionally used box cars, requiring manual loading and unloading of the cargo. Since the 1960s, container trains have become the dominant solution for general freight, while large quantities of bulk are transported by dedicated trains.
Road
A road is an identifiable route, way, or path between two or more places. Roads are typically smoothed, paved, or otherwise prepared to allow easy travel, though they need not be, and historically many roads were simply recognizable routes without any formal construction or maintenance. In urban areas, roads may pass through a city or village and be named as streets, serving a dual function as urban space easement and route.

The most common road vehicle is the automobile: a wheeled passenger vehicle that carries its own motor. Other users of roads include buses, trucks, motorcycles, bicycles, and pedestrians. As of 2010, there were 1.015 billion automobiles worldwide.
Road transport offers complete freedom to road users to transfer the vehicle from one lane to the other and from one road to another according to the need and convenience. This flexibility of changes in location, direction, speed, and timings of travel is not available to other modes of transport. It is possible to provide door-to-door service only by road transport.
Automobiles provide high flexibility with low capacity, but require high energy and area use, and are the main source of harmful noise and air pollution in cities; buses allow for more efficient travel at the cost of reduced flexibility. Road transport by truck is often the initial and final stage of freight transport.
Water
Water transport is movement by means of a watercraft—such as a barge, boat, ship, or sailboat—over a body of water, such as a sea, ocean, lake, canal, or river. The need for buoyancy is common to watercraft, making the hull a dominant aspect of its construction, maintenance, and appearance.
In the 19th century, the first steam ships were developed, using a steam engine to drive a paddle wheel or propeller to move the ship. The steam was produced in a boiler using wood or coal and fed through a steam external combustion engine. Now most ships have an internal combustion engine using a slightly refined type of petroleum called bunker fuel. Some ships, such as submarines, use nuclear power to produce the steam. Recreational or educational craft still use wind power, while some smaller craft use internal combustion engines to drive one or more propellers or, in the case of jet boats, an inboard water jet. In shallow draft areas, hovercraft are propelled by large pusher-prop fans. (See Marine propulsion.)
Although it is slow compared to other transport, modern sea transport is a highly efficient method of transporting large quantities of goods. Commercial vessels, nearly 35,000 in number, carried 7.4 billion tons of cargo in 2007. Transport by water is significantly less costly than air transport for transcontinental shipping; short sea shipping and ferries remain viable in coastal areas.
Other modes
Pipeline transport sends goods through a pipe; most commonly liquid and gases are sent, but pneumatic tubes can also send solid capsules using compressed air. For liquids/gases, any chemically stable liquid or gas can be sent through a pipeline. Short-distance systems exist for sewage, slurry, water, and beer, while long-distance networks are used for petroleum and natural gas.
Cable transport is a broad mode where vehicles are pulled by cables instead of an internal power source. It is most commonly used at steep gradient. Typical solutions include aerial tramways, elevators, and ski lifts; some of these are also categorized as conveyor transport.
Spaceflight is transport out of Earth's atmosphere into outer space by means of a spacecraft. While large amounts of research have gone into technology, it is rarely used except to put satellites into orbit and conduct scientific experiments. However, humans have landed on the Moon, and probes have been sent to all the planets of the Solar System.
Suborbital spaceflight is the fastest of the existing and planned transport systems from a place on Earth to a distant "other place" on Earth. Faster transport could be achieved through part of a low Earth orbit or by following that trajectory even faster, using the propulsion of the rocket to steer it.
Elements
Infrastructure
Infrastructure is the fixed installations that allow a vehicle to operate. It consists of a roadway, a terminal, and facilities for parking and maintenance. For rail, pipeline, road, and cable transport, the entire way the vehicle travels must be constructed. Air and watercraft are able to avoid this, since the airway and seaway do not need to be constructed. However, they require fixed infrastructure at terminals.
Terminals such as airports, ports, and stations, are locations where passengers and freight can be transferred from one vehicle or mode to another. For passenger transport, terminals are integrating different modes to allow riders, who are interchanging between modes, to take advantage of each mode's benefits. For instance, airport rail links connect airports to the city centres and suburbs. The terminals for automobiles are parking lots, while buses and coaches can operate from simple stops. For freight, terminals act as transshipment points, though some cargo is transported directly from the point of production to the point of use.
The financing of infrastructure can either be public or private. Transport is often a natural monopoly and a necessity for the public; roads, and in some countries railways and airports, are funded through taxation. New infrastructure projects can have high costs and are often financed through debt. Many infrastructure owners, therefore, impose usage fees, such as landing fees at airports or toll plazas on roads. Independent of this, authorities may impose taxes on the purchase or use of vehicles. Because of poor forecasting and overestimation of passenger numbers by planners, there is frequently a benefits shortfall for transport infrastructure projects.
Means of transport
Animals
Animals used in transportation include pack animals and riding animals.
Vehicles
A vehicle is a non-living device that is used to move people and goods. Unlike the infrastructure, the vehicle moves along with the cargo and riders. Unless being pulled/pushed by a cable or muscle-power, the vehicle must provide its own propulsion; this is most commonly done through a steam engine, combustion engine, electric motor, jet engine, or rocket, though other means of propulsion also exist. Vehicles also need a system of converting the energy into movement; this is most commonly done through wheels, propellers, and pressure.
Vehicles are most commonly staffed by a driver. However, some systems, such as people movers and some rapid transits, are fully automated. For passenger transport, the vehicle must have a compartment, seat, or platform for the passengers. Simple vehicles, such as automobiles, bicycles, or simple aircraft, may have one of the passengers as a driver. Recently, the progress related to the Fourth Industrial Revolution has brought a lot of new emerging technologies for transportation and automotive fields such as Connected Vehicles and Autonomous Driving. These innovations are said to form future mobility, but concerns remain on safety and cybersecurity, particularly concerning connected and autonomous mobility.
Operation
Private transport is only subject to the owner of the vehicle, who operates the vehicle themselves. For public transport and freight transport, operations are done through private enterprise or by governments. The infrastructure and vehicles may be owned and operated by the same company, or they may be operated by different entities. Traditionally, many countries have had a national airline and national railway. Since the 1980s, many of these have been privatized. International shipping remains a highly competitive industry with little regulation, but ports can be public-owned.
Policy
As the population of the world increases, cities grow in size and population—according to the United Nations, 55% of the world's population live in cities, and by 2050 this number is expected to rise to 68%. Public transport policy must evolve to meet the changing priorities of the urban world. The institution of policy enforces order in transport, which is by nature chaotic as people attempt to travel from one place to another as fast as possible. This policy helps to reduce accidents and save lives.
Functions
Relocation of travelers and cargo is the most common use of transport. However, other uses exist, such as the strategic and tactical relocation of armed forces during warfare, or the civilian mobility of construction or emergency equipment.
Passenger
Passenger transport, or travel, is divided into public and private transport. Public transport is scheduled services on fixed routes, while private transport is vehicles that provide ad hoc services at the rider's desire. The latter offers better flexibility but has lower capacity and a higher environmental impact. Travel may be as part of daily commuting or for business, leisure, or migration.
Short-haul transport is dominated by the automobile and mass transit. The latter consists of buses in rural and small cities, supplemented with commuter rail, trams, and rapid transit in larger cities. Long-haul transport involves the use of the automobile, trains, coaches, and aircraft, the last of which have become predominantly used for the longest, including intercontinental, travel. Intermodal passenger transport is where a journey is performed through the use of several modes of transport; since all human transport normally starts and ends with walking, all passenger transport can be considered intermodal. Public transport may also involve the intermediate change of vehicle, within or across modes, at a transport hub, such as a bus or railway station.
Taxis and buses can be found on both ends of the public transport spectrum. Buses are the cheapest mode of transport but are not necessarily flexible, and taxis are very flexible but more expensive. In the middle is demand-responsive transport, offering flexibility whilst remaining affordable.
International travel may be restricted for some individuals due to legislation and visa requirements.
Medical
An ambulance is a vehicle used to transport people from or between places of treatment, and in some instances will also provide out-of-hospital medical care to the patient. The word is often associated with road-going "emergency ambulances", which form part of emergency medical services, administering emergency care to those with acute medical problems.
Air medical services is a comprehensive term covering the use of air transport to move patients to and from healthcare facilities and accident scenes. Personnel provide comprehensive prehospital and emergency and critical care to all types of patients during aeromedical evacuation or rescue operations, aboard helicopters, propeller aircraft, or jet aircraft.
Freight
Freight transport, or shipping, is a key in the value chain in manufacturing. With increased specialization and globalization, production is being located further away from consumption, rapidly increasing the demand for transport. Transport creates place utility by moving the goods from the place of production to the place of consumption. While all modes of transport are used for cargo transport, there is high differentiation between the nature of the cargo transport, in which mode is chosen. Logistics refers to the entire process of transferring products from producer to consumer, including storage, transport, transshipment, warehousing, material-handling, and packaging, with associated exchange of information. Incoterm deals with the handling of payment and responsibility of risk during transport.
Containerization, with the standardization of ISO containers on all vehicles and at all ports, has revolutionized international and domestic trade, offering a huge reduction in transshipment costs. Traditionally, all cargo had to be manually loaded and unloaded into the hold of a ship or car; containerization allows for automated handling and transfer between modes, and the standardized sizes allow for gains in economy of scale in vehicle operation. This has been one of the key driving factors in international trade and globalization since the 1950s.

Bulk transport is common with cargo that can be handled roughly without deterioration; typical examples are ore, coal, cereals, and petroleum. Because of the uniformity of the product, mechanical handling can allow enormous quantities to be handled quickly and efficiently. The low value of the cargo combined with high volume also means that economies of scale become essential in transport, and gigantic ships and whole trains are commonly used to transport bulk. Liquid products with sufficient volume may also be transported by pipeline.
Air freight has become more common for products of high value; while less than one percent of world transport by volume is by airline, it amounts to forty percent of the value. Time has become especially important in regards to principles such as postponement and just-in-time within the value chain, resulting in a high willingness to pay for quick delivery of key components or items of high value-to-weight ratio. In addition to mail, common items sent by air include electronics and fashion clothing.
Industry
Impact
Economic
Transport is a key necessity for specialization—allowing production and consumption of products to occur at different locations. Throughout history, transport has been a spur to expansion; better transport allows more trade and a greater spread of people. Economic growth has always been dependent on increasing the capacity and rationality of transport. But the infrastructure and operation of transport have a great impact on the land, and transport is the largest consumer of energy, making transport sustainability a major issue.
Due to the way modern cities and communities are planned and operated, a physical distinction between home and work is usually created, forcing people to transport themselves to places of work, study, or leisure, as well as to temporarily relocate for other daily activities. Passenger transport is also the essence of tourism, a major part of recreational transport. Commerce requires the transport of people to conduct business, either to allow face-to-face communication for important decisions or to move specialists from their regular place of work to sites where they are needed.
In lean thinking, transporting materials or work in process from one location to another is seen as one of the seven wastes (Japanese term: muda) which do not add value to a product.
Planning
Transport planning allows for high use and less impact regarding new infrastructure. Using models of transport forecasting, planners are able to predict future transport patterns. On the operative level, logistics allows owners of cargo to plan transport as part of the supply chain. Transport as a field is also studied through transport economics, a component for the creation of regulation policy by authorities. Transport engineering, a sub-discipline of civil engineering, must take into account trip generation, trip distribution, mode choice, and route assignment, while the operative level is handled through traffic engineering.
Because of the negative impacts incurred, transport often becomes the subject of controversy related to choice of mode, as well as increased capacity. Automotive transport can be seen as a tragedy of the commons, where the flexibility and comfort for the individual deteriorate the natural and urban environment for all. Density of development depends on mode of transport, with public transport allowing for better spatial use. Good land use keeps common activities close to people's homes and places higher-density development closer to transport lines and hubs, to minimize the need for transport. There are economies of agglomeration. Beyond transport, some land uses are more efficient when clustered. Transport facilities consume land, and in cities pavement (devoted to streets and parking) can easily exceed 20 percent of the total land use. An efficient transport system can reduce land waste.
Too much infrastructure and too much smoothing for maximum vehicle throughput mean that in many cities there is too much traffic and many—if not all—of the negative impacts that come with it. It is only in recent years that traditional practices have started to be questioned in many places; as a result of new types of analysis which bring in a much broader range of skills than those traditionally relied on—spanning such areas as environmental impact analysis, public health, sociology, and economics—the viability of the old mobility solutions is increasingly being questioned.
Environment
Transport is a major use of energy and burns most of the world's petroleum. This creates air pollution, including nitrous oxides and particulates, and is a significant contributor to global warming through emission of carbon dioxide, for which transport is the fastest-growing emission sector. By sub-sector, road transport is the largest contributor to global warming. Environmental regulations in developed countries have reduced individual vehicles' emissions; however, this has been offset by increases in the numbers of vehicles and in the use of each vehicle. Some pathways to reduce the carbon emissions of road vehicles considerably have been studied. Energy use and emissions vary largely between modes, causing environmentalists to call for a transition from air and road to rail and human-powered transport, as well as increased transport electrification and energy efficiency.
Other environmental impacts of transport systems include traffic congestion and automobile-oriented urban sprawl, which can consume natural habitat and agricultural lands. By reducing transport emissions globally, it is predicted that there will be significant positive effects on Earth's air quality, acid rain, smog, and climate change.

While electric cars are being built to cut down CO2 emission at the point of use, an approach that is becoming popular among cities worldwide is to prioritize public transport, bicycles, and pedestrian movement. Redirecting vehicle movement to create 20-minute neighbourhoods promotes exercise while greatly reducing vehicle dependency and pollution. Some policies levy a congestion charge on cars travelling within congested areas during peak times.
Sustainable development
The United Nations first formally recognized the role of transport in sustainable development at the 1992 United Nations Earth Summit. At the 2012 United Nations World Conference, global leaders unanimously recognized that transport and mobility are central to achieving the sustainability targets. In recent years, data has been collected showing that the transport sector contributes a quarter of global greenhouse gas emissions, and therefore sustainable transport has been mainstreamed across several of the 2030 Sustainable Development Goals, especially those related to food security, health, energy, economic growth, infrastructure, and cities and human settlements. Meeting sustainable transport targets is said to be particularly important to achieving the Paris Agreement.

There are various Sustainable Development Goals (SDGs) that promote sustainable transport to meet the defined goals. These include SDG 3 on health (increased road safety), SDG 7 on energy, SDG 8 on decent work and economic growth, SDG 9 on resilient infrastructure, SDG 11 on sustainable cities (access to transport and expanded public transport), SDG 12 on sustainable consumption and production (ending fossil fuel subsidies), and SDG 14 on oceans, seas, and marine resources.
History
Natural
Humans' first ways to move included walking, running, and swimming. The domestication of animals introduced a new way to lay the burden of transport on more powerful creatures, allowing the hauling of heavier loads, or humans riding animals for greater speed and duration. Inventions such as the wheel and the sled (U.K. sledge) helped make animal transport more efficient through the introduction of vehicles.
The first forms of road transport involved animals, such as horses (domesticated in the 4th or the 3rd millennium BCE), oxen (from about 8000 BCE), or humans carrying goods over dirt tracks that often followed game trails.
Infrastructure
Many early civilizations, including those in Mesopotamia and the Indus Valley, constructed paved roads. In classical antiquity, the Persian and Roman empires built stone-paved roads to allow armies to travel quickly. Deep roadbeds of crushed stone underneath kept such roads dry. The medieval Caliphate later built tar-paved roads.
Water transport
Water transport, including rowed and sailed vessels, dates back to time immemorial and was the only efficient way to transport large quantities of goods or to travel over long distances prior to the Industrial Revolution. The first watercraft were canoes cut out from tree trunks. Early water transport was accomplished with ships that were either rowed or used the wind for propulsion, or a combination of the two. The importance of water has led to most cities that grew up as sites for trading being located on rivers or on the sea shore, often at the intersection of two bodies of water.
Mechanical
Until the Industrial Revolution, transport remained slow and costly, and production and consumption gravitated as close to each other as feasible. The Industrial Revolution in the 19th century saw several inventions fundamentally change transport. With telegraphy, communication became instant and independent of the transport of physical objects. The invention of the steam engine, closely followed by its application in rail transport, made land transport independent of human or animal muscle. Both speed and capacity increased, allowing specialization, since manufacturing could be located independently of natural resources. The 19th century also saw the development of the steam ship, which sped up global transport.
With the development of the combustion engine and the automobile around 1900, road transport became more competitive again, and mechanical private transport originated. The first "modern" highways were constructed during the 19th century with macadam. Later, tarmac and concrete became the dominant paving materials.
In 1903 the Wright brothers demonstrated the first successful controllable airplane, and after World War I (1914–1918) aircraft became a fast way to transport people and express goods over long distances. After World War II (1939–1945) the automobile and airlines took higher shares of transport, reducing rail and water to freight and short-haul passenger services. Scientific spaceflight began in the 1950s, with rapid growth until the 1970s, when interest dwindled. In the 1950s the introduction of containerization gave massive efficiency gains in freight transport, fostering globalization. International air travel became much more accessible in the 1960s with the commercialization of the jet engine. Along with the growth in automobiles and motorways, rail and water transport declined in relative importance. After the introduction of the Shinkansen in Japan in 1964, high-speed rail in Asia and Europe started attracting passengers on long-haul routes away from the airlines.

Early in U.S. history, private joint-stock corporations owned most aqueducts, bridges, canals, railroads, roads, and tunnels. Most such transport infrastructure came under government control in the late 19th and early 20th centuries, culminating in the nationalization of inter-city passenger rail service with the establishment of Amtrak. Recently, however, a movement to privatize roads and other infrastructure has gained some ground and adherents.
Further reading
McKibben, Bill, "Toward a Land of Buses and Bikes" (review of Ben Goldfarb, Crossings: How Road Ecology Is Shaping the Future of Our Planet, Norton, 2023, 370 pp.; and Henry Grabar, Paved Paradise: How Parking Explains the World, Penguin Press, 2023, 346 pp.), The New York Review of Books, vol. LXX, no. 15 (5 October 2023), pp. 30-32. "Someday in the not impossibly distant future, if we manage to prevent a global warming catastrophe, you could imagine a post-auto world where bikes and buses and trains are ever more important, as seems to be happening in Europe at the moment." (p. 32.)
External links
Transportation from UCB Libraries GovPubs
Transportation at Curlie
America On the Move An online transportation exhibition from the National Museum of American History, Smithsonian Institution
World Transportation Organization (a non-profit advisory organization)
sevoflurane | Sevoflurane, sold under the brand name Sevorane, among others, is a sweet-smelling, nonflammable, highly fluorinated methyl isopropyl ether used as an inhalational anaesthetic for induction and maintenance of general anesthesia. After desflurane, it is the volatile anesthetic with the fastest onset. While its offset may be faster than agents other than desflurane in a few circumstances, its offset is more often similar to that of the much older agent isoflurane. While sevoflurane is only half as soluble as isoflurane in blood, the tissue blood partition coefficients of isoflurane and sevoflurane are quite similar: for example, isoflurane 2.62 vs. sevoflurane 2.57 in muscle, and isoflurane 52 vs. sevoflurane 50 in fat. As a result, the longer the case, the more similar the emergence times for sevoflurane and isoflurane will be. It is on the World Health Organization's List of Essential Medicines.
Medical uses
It is one of the most commonly used volatile anesthetic agents, particularly for outpatient anesthesia, across all ages, as well as in veterinary medicine. Together with desflurane, sevoflurane is replacing isoflurane and halothane in modern anesthesia practice. It is often administered in a mixture of nitrous oxide and oxygen.
Physiological effects
Sevoflurane is a potent vasodilator; as such, it induces a dose-dependent reduction in blood pressure and cardiac output. It is a bronchodilator; however, in patients with pre-existing lung pathology it may precipitate coughing and laryngospasm. It reduces the ventilatory response to hypoxia and hypercapnia and impedes hypoxic pulmonary vasoconstriction. Sevoflurane's vasodilatory properties also cause it to increase intracranial pressure and cerebral blood flow. However, it reduces cerebral metabolic rate.
Adverse effects
Sevoflurane has an excellent safety record, but is under review for potential hepatotoxicity and may accelerate the progression of Alzheimer's disease. There have been rare reports involving adults with symptoms similar to halothane hepatotoxicity. Sevoflurane is the preferred agent for mask induction due to its lesser irritation to mucous membranes.
Sevoflurane is an inhaled anaesthetic that is often used for induction and maintenance of anaesthesia in children undergoing surgery. During awakening from the medication, it has been associated with a high incidence (>30%) of agitation and delirium in preschool children undergoing minor noninvasive surgery. It is not clear if this can be prevented.

Studies examining a current significant health concern, anesthetic-induced neurotoxicity (including with sevoflurane, and especially with children and infants), are "fraught with confounders, and many are underpowered statistically", and so are argued to need "further data... to either support or refute the potential connection". Concern regarding the safety of anaesthesia is especially acute with regard to children and infants, where preclinical evidence from relevant animal models suggests that common clinically important agents, including sevoflurane, may be neurotoxic to the developing brain and so cause neurobehavioural abnormalities in the long term; two large-scale clinical studies (PANDA and GAS) were ongoing as of 2010, in hope of supplying "significant [further] information" on the neurodevelopmental effects of general anaesthesia in infants and young children, including where sevoflurane is used.

In 2021, researchers at Massachusetts General Hospital published in Communications Biology research suggesting that sevoflurane may accelerate existing Alzheimer's disease or cause existing tau protein to spread: "These data demonstrate anesthesia-associated tau spreading and its consequences. [...] This tau spreading could be prevented by inhibitors of tau phosphorylation or extracellular vesicle generation." According to Neuroscience News, "Their previous work showed that sevoflurane can cause a change (specifically, phosphorylation, or the addition of phosphate) to tau that leads to cognitive impairment in mice. Other researchers have also found that sevoflurane and certain other anesthetics may affect cognitive function."

Additionally, there has been some investigation into a potential correlation between sevoflurane use and renal damage (nephrotoxicity). However, this should be subject to further investigation, as a recent study showed no correlation between sevoflurane use and renal damage compared with other control anesthetic agents.
Pharmacology
The exact mechanism of the action of general anaesthetics has not been delineated. Sevoflurane acts as a positive allosteric modulator of the GABAA receptor in electrophysiology studies of neurons and recombinant receptors. However, it also acts as an NMDA receptor antagonist, potentiates glycine receptor currents, and inhibits nAChR and 5-HT3 receptor currents.
History
Sevoflurane was discovered by Ross Terrell and independently by Bernard M Regan. A detailed report of its development and properties appeared in 1975 in a paper authored by Richard Wallin, Bernard Regan, Martha Napoli and Ivan Stern. It was introduced into clinical practice initially in Japan in 1990 by Maruishi Pharmaceutical Co., Ltd. Osaka, Japan. The rights for sevoflurane worldwide were held by AbbVie. It is now available as a generic drug.
Global-warming potential
Sevoflurane is a greenhouse gas. The twenty-year global-warming potential, GWP(20), for sevoflurane is 349.
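To make the GWP(20) figure concrete, a small conversion helper can express a released mass of sevoflurane as its 20-year CO2 equivalent. This is a minimal sketch; the function and constant names are illustrative, not a standard API, and only the GWP value 349 comes from the text above.

```python
GWP20_SEVOFLURANE = 349  # 20-year global-warming potential cited above

def co2_equivalent_kg(mass_kg: float, gwp: float = GWP20_SEVOFLURANE) -> float:
    """Mass of CO2 (kg) with the same 20-year warming effect as mass_kg of gas."""
    return mass_kg * gwp

# Releasing 1 kg of sevoflurane warms like ~349 kg of CO2 over 20 years.
```

For example, venting 0.5 kg of the agent is equivalent, over a 20-year horizon, to emitting about 174.5 kg of CO2.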
Spectral data for compound identification
The following experimental spectral data may aid in sevoflurane identification:
1H NMR: δ 5.39 (d, 2H), 4.40 (septet, 1H)
Degradation of sevoflurane
Sevoflurane degrades into what is most commonly referred to as compound A (fluoromethyl 2,2-difluoro-1-(trifluoromethyl)vinyl ether) when in contact with CO2 absorbents, and this degradation tends to increase with decreased fresh gas flow rates, increased temperatures, and increased sevoflurane concentration. Compound A is what some believe is correlated with renal damage.
chlorofluorocarbon | Chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) are fully or partly halogenated hydrocarbons that contain carbon (C), hydrogen (H), chlorine (Cl), and fluorine (F), produced as volatile derivatives of methane, ethane, and propane.
The most common example is dichlorodifluoromethane (R-12). R-12 is also commonly called Freon and was used as a refrigerant. Many CFCs have been widely used as refrigerants, propellants (in aerosol applications), gaseous fire suppression agents, and solvents. Because CFCs contribute to ozone depletion in the upper atmosphere, the manufacture of such compounds has been phased out under the Montreal Protocol, and they are being replaced with other products such as hydrofluorocarbons (HFCs) and hydrofluoroolefins (HFOs), including R-410A, R-134a, and R-1234yf.
Structure, properties and production
As in simpler alkanes, the carbon atoms in CFCs bond with tetrahedral symmetry. Because the fluorine and chlorine atoms differ greatly in size and effective charge from hydrogen and from each other, the methane-derived CFCs deviate from perfect tetrahedral symmetry.

The physical properties of CFCs and HCFCs are tunable by changes in the number and identity of the halogen atoms. In general, they are volatile, but less so than their parent alkanes. The decreased volatility is attributed to the molecular polarity induced by the halides, which induces intermolecular interactions. Thus, methane boils at −161 °C, whereas the fluoromethanes boil between −51.7 °C (CF2H2) and −128 °C (CF4). The CFCs have still higher boiling points because chloride is even more polarizable than fluoride. Because of their polarity, the CFCs are useful solvents, and their boiling points make them suitable as refrigerants. The CFCs are far less flammable than methane, in part because they contain fewer C-H bonds and in part because, in the case of the chlorides and bromides, the released halides quench the free radicals that sustain flames.
The densities of CFCs are higher than their corresponding alkanes. In general, the density of these compounds correlates with the number of chlorides.
CFCs and HCFCs are usually produced by halogen exchange starting from chlorinated methanes and ethanes. Illustrative is the synthesis of chlorodifluoromethane from chloroform:
HCCl3 + 2 HF → HCF2Cl + 2 HCl

Brominated derivatives are generated by free-radical reactions of hydrochlorofluorocarbons, replacing C-H bonds with C-Br bonds. The production of the anesthetic 2-bromo-2-chloro-1,1,1-trifluoroethane ("halothane") is illustrative:
CF3CH2Cl + Br2 → CF3CHBrCl + HBr
Applications
CFCs and HCFCs are used in various applications because of their low toxicity, low reactivity, and low flammability. Every permutation of fluorine, chlorine, and hydrogen based on methane and ethane has been examined, and most have been commercialized. Furthermore, many examples are known for higher numbers of carbon, as well as related compounds containing bromine. Uses include refrigerants, blowing agents, aerosol propellants in medicinal applications, and degreasing solvents.
Billions of kilograms of chlorodifluoromethane are produced annually as a precursor to tetrafluoroethylene, the monomer that is converted into Teflon.
Classes of compounds, nomenclature
Chlorofluorocarbons (CFCs): when derived from methane and ethane these compounds have the formulae CClmF4−m and C2ClmF6−m, where m is nonzero.
Hydro-chlorofluorocarbons (HCFCs): when derived from methane and ethane these compounds have the formula CClmFnH4−m−n and C2ClxFyH6−x−y, where m, n, x, and y are nonzero.
Bromofluorocarbons have formulae similar to the CFCs and HCFCs but also include bromine.
Hydrofluorocarbons (HFCs): when derived from methane, ethane, propane, and butane, these compounds have the respective formulae CFmH4−m, C2FmH6−m, C3FmH8−m, and C4FmH10−m, where m is nonzero.
Numbering system
A special numbering system is used for fluorinated alkanes, prefixed with Freon-, R-, CFC-, and HCFC-: the rightmost digit indicates the number of fluorine atoms, the next digit to the left is the number of hydrogen atoms plus 1, and the next digit to the left is the number of carbon atoms less one (zeroes are not stated); the remaining bonds are occupied by chlorine atoms.
Freon-12, for example, indicates a methane derivative (only two digits) containing two fluorine atoms (the second 2) and no hydrogen (1 − 1 = 0). It is therefore CCl2F2. Another way to derive the correct molecular formula of the CFC/R/Freon class compounds is to add 90 to the number. The resulting value gives the number of carbon atoms as the first numeral, the number of hydrogen atoms as the second numeral, and the number of fluorine atoms as the third numeral; the remaining carbon bonds are occupied by chlorine atoms. The value produced by this method is always a three-figure number. An easy example is CFC-12: 90 + 12 = 102, giving 1 carbon, 0 hydrogens, 2 fluorine atoms, and hence 2 chlorine atoms, resulting in CCl2F2. The main advantage of this method over the one described above is that it gives the number of carbon atoms of the molecule directly.

Freons containing bromine are signified by four numbers. Isomers, which are common for ethane and propane derivatives, are indicated by letters following the numbers.
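The add-90 rule can be sketched in a few lines of code. This is an illustrative helper (the function name is not a standard API); it handles the chlorine count with the saturation relation Cl = 2C + 2 − H − F for a saturated carbon skeleton, and it ignores isomer letters.

```python
def freon_formula(number: int) -> str:
    """Decode an R/CFC/HCFC number into a molecular formula via the add-90 rule.

    number + 90 yields digits C | H | F; chlorine fills the remaining bonds
    of the saturated skeleton (Cl = 2C + 2 - H - F).
    """
    code = number + 90
    c, h, f = code // 100, (code // 10) % 10, code % 10
    cl = 2 * c + 2 - h - f  # bonds left over for chlorine
    parts = []
    for symbol, count in (("C", c), ("H", h), ("Cl", cl), ("F", f)):
        if count == 1:
            parts.append(symbol)
        elif count > 1:
            parts.append(f"{symbol}{count}")
    return "".join(parts)

# freon_formula(12) reproduces the CCl2F2 worked example above.
```

Checking against the text's example: CFC-12 gives 102, i.e. 1 carbon, 0 hydrogens, 2 fluorines, and 2 chlorines, so the function returns "CCl2F2"; R-22 similarly decodes to CHClF2.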
Reactions
The reaction of the CFCs that is responsible for the depletion of ozone is the photo-induced scission of a C-Cl bond:

CCl3F → CCl2F• + Cl•

The chlorine atom, often written Cl•, behaves very differently from the chlorine molecule (Cl2). The radical Cl• is long-lived in the upper atmosphere, where it catalyzes the conversion of ozone into O2. Ozone absorbs UV-B radiation, so its depletion allows more of this high-energy radiation to reach the Earth's surface. Bromine atoms are even more efficient catalysts; hence brominated CFCs are also regulated.
Impact as greenhouse gases
CFCs were phased out via the Montreal Protocol due to their part in ozone depletion.
The atmospheric impacts of CFCs are not limited to their role as ozone-depleting chemicals. Their infrared absorption bands prevent heat at those wavelengths from escaping Earth's atmosphere. CFCs have their strongest absorption bands, from C-F and C-Cl bonds, in the spectral region of 7.8–15.3 µm, referred to as the "atmospheric window" due to the relative transparency of the atmosphere within this region.

The strength of CFC absorption bands and the unique susceptibility of the atmosphere at wavelengths where CFCs (indeed, all covalent fluorine compounds) absorb radiation create a "super" greenhouse effect from CFCs and other unreactive fluorine-containing gases such as perfluorocarbons, HFCs, HCFCs, bromofluorocarbons, SF6, and NF3. This "atmospheric window" absorption is intensified by the low concentration of each individual CFC. Because CO2 is close to saturation at high concentrations and has few infrared absorption bands, the radiation budget, and hence the greenhouse effect, has low sensitivity to changes in CO2 concentration; the increase in temperature is roughly logarithmic. Conversely, the low concentrations of CFCs allow their effects to increase linearly with mass, so that chlorofluorocarbons are greenhouse gases with a much higher potential to enhance the greenhouse effect than CO2.
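The logarithmic-versus-linear contrast can be sketched numerically using widely cited simplified radiative-forcing fits. The coefficients below (5.35 W/m² per natural-log doubling for CO2, roughly 0.25 W/m² per ppb for CFC-11) are standard literature values assumed for illustration, not figures from this article.

```python
import math

def forcing_co2(c_ppm: float, c0_ppm: float = 278.0) -> float:
    """CO2 radiative forcing (W/m^2) grows logarithmically with concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def forcing_cfc11(c_ppb: float) -> float:
    """CFC-11 radiative forcing (W/m^2) grows linearly, ~0.25 W/m^2 per ppb."""
    return 0.25 * c_ppb

# Doubling CO2 from its preindustrial level adds ~3.7 W/m^2, and each further
# doubling adds only the same amount again; for CFC-11, every added ppb
# contributes the same fixed increment, so forcing scales linearly with mass.
```

This is why, per molecule, a trace gas absorbing in the atmospheric window far outweighs an equal addition of CO2.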
Groups are actively disposing of legacy CFCs to reduce their impact on the atmosphere.

According to NASA in 2018, the hole in the ozone layer has begun to recover as a result of CFC bans. However, research released in 2019 reports an alarming increase in CFCs, pointing to unregulated use in China.
History
Prior to and during the 1920s, refrigerators used toxic gases as refrigerants, including ammonia, sulphur dioxide, and chloromethane. Later in the 1920s, after a series of fatal accidents involving the leaking of chloromethane from refrigerators, a major collaborative effort began between the American corporations Frigidaire, General Motors, and DuPont to develop a safer, non-toxic alternative. Thomas Midgley Jr. of General Motors is credited with synthesizing the first chlorofluorocarbons. The Frigidaire corporation was issued the first patent, number 1,886,339, for the formula for CFCs on December 31, 1928. In a 1930 demonstration for the American Chemical Society, Midgley flamboyantly demonstrated the gas's non-toxicity and non-flammability by inhaling a breath of it and using it to blow out a candle.

By 1930, General Motors and Du Pont had formed the Kinetic Chemical Company to produce Freon, and by 1935, over 8 million refrigerators using R-12 had been sold by Frigidaire and its competitors. In 1932, Carrier began using R-11 in the world's first self-contained home air conditioning unit, known as the "atmospheric cabinet". Because CFCs were largely non-toxic, they quickly became the coolant of choice in large air-conditioning systems, and public health codes in cities were revised to designate chlorofluorocarbons as the only gases that could be used as refrigerants in public buildings.

Growth in CFCs continued over the following decades, leading to peak annual sales of over 1 billion USD, with more than 1 million metric tonnes produced annually. It was not until 1974 that two University of California chemists, Professor F. Sherwood Rowland and Dr. Mario Molina, discovered that the use of chlorofluorocarbons was causing a significant depletion of atmospheric ozone concentrations. This initiated the environmental effort which eventually resulted in the enactment of the Montreal Protocol.
Commercial development and use in fire extinguishing
During World War II, various chloroalkanes were in standard use in military aircraft, although these early halons suffered from excessive toxicity. Nevertheless, after the war they slowly became more common in civil aviation as well. In the 1960s, fluoroalkanes and bromofluoroalkanes became available and were quickly recognized as being highly effective fire-fighting materials. Much early research with Halon 1301 was conducted under the auspices of the US Armed Forces, while Halon 1211 was, initially, mainly developed in the UK. By the late 1960s they were standard in many applications where water and dry-powder extinguishers posed a threat of damage to the protected property, including computer rooms, telecommunications switches, laboratories, museums and art collections. Beginning with warships, in the 1970s, bromofluoroalkanes also progressively came to be associated with rapid knockdown of severe fires in confined spaces with minimal risk to personnel.
By the early 1980s, bromofluoroalkanes were in common use on aircraft, ships, and large vehicles, as well as in computer facilities and galleries. However, concern was beginning to be expressed about the impact of chloroalkanes and bromoalkanes on the ozone layer. The Vienna Convention for the Protection of the Ozone Layer did not cover bromofluoroalkanes under the same restrictions; instead, the consumption of bromofluoroalkanes was frozen at 1986 levels. This was because emergency discharge of extinguishing systems was thought to be too small in volume to produce a significant impact, and too important to human safety for restriction.
Regulation
Since the late 1970s, the use of CFCs has been heavily regulated because of their destructive effects on the ozone layer. After the development of his electron capture detector, James Lovelock was the first to detect the widespread presence of CFCs in the air, finding a mole fraction of 60 ppt of CFC-11 over Ireland. In a self-funded research expedition ending in 1973, Lovelock went on to measure CFC-11 in both the Arctic and Antarctic, finding the presence of the gas in each of 50 air samples collected, and concluding that CFCs are not hazardous to the environment. The experiment did however provide the first useful data on the presence of CFCs in the atmosphere. The damage caused by CFCs was discovered by Sherry Rowland and Mario Molina who, after hearing a lecture on the subject of Lovelock's work, embarked on research resulting in the first publication suggesting the connection in 1974. It turns out that one of CFCs' most attractive features—their low reactivity—is key to their most destructive effects. CFCs' lack of reactivity gives them a lifespan that can exceed 100 years, giving them time to diffuse into the upper stratosphere. Once in the stratosphere, the sun's ultraviolet radiation is strong enough to cause the homolytic cleavage of the C-Cl bond. In 1976, under the Toxic Substances Control Act, the EPA banned commercial manufacturing and use of CFCs and aerosol propellants. This was later superseded in the 1990 amendments to the Clean Air Act to address stratospheric ozone depletion.
By 1987, in response to a dramatic seasonal depletion of the ozone layer over Antarctica, diplomats in Montreal forged a treaty, the Montreal Protocol, which called for drastic reductions in the production of CFCs. On 2 March 1989, 12 European Community nations agreed to ban the production of all CFCs by the end of the century. In 1990, diplomats met in London and voted to significantly strengthen the Montreal Protocol by calling for a complete elimination of CFCs by 2000. By 2010, CFCs should have been completely eliminated from developing countries as well.
Because the only CFCs available to countries adhering to the treaty come from recycling, their prices have increased considerably. A worldwide end to production should also terminate the smuggling of this material. However, there are ongoing CFC smuggling issues, as recognized by the United Nations Environment Programme (UNEP) in a 2006 report titled "Illegal Trade in Ozone Depleting Substances". UNEP estimated that between 16,000 and 38,000 tonnes of CFCs passed through the black market in the mid-1990s. The report estimated that between 7,000 and 14,000 tonnes of CFCs are smuggled annually into developing countries. Asian countries are those with the most smuggling; as of 2007, China, India, and South Korea were found to account for around 70% of global CFC production, with South Korea later banning CFC production in 2010. Possible reasons for continued CFC smuggling were also examined: the report noted that many banned CFC-using products have long lifespans and continue to operate. The cost of replacing such equipment is sometimes cheaper than outfitting it with a more ozone-friendly appliance. Additionally, CFC smuggling is not considered a significant issue, so the perceived penalties for smuggling are low. In 2018, public attention was drawn to the issue that, at an unknown location in East Asia, an estimated 13,000 metric tons of CFCs have been produced annually since about 2012 in violation of the protocol. While the eventual phaseout of CFCs is likely, efforts are being taken to stem these current non-compliance problems.
By the time of the Montreal Protocol, it was realised that deliberate and accidental discharges during system tests and maintenance accounted for substantially larger volumes than emergency discharges, and consequently halons were brought into the treaty, albeit with many exceptions.
Regulatory gap
While the production and consumption of CFCs are regulated under the Montreal Protocol, emissions from existing banks of CFCs are not regulated under the agreement. In 2002, there were an estimated 5,791 kilotons of CFCs in existing products such as refrigerators, air conditioners, aerosol cans and others. Approximately one-third of these CFCs are projected to be emitted over the next decade if action is not taken, posing a threat to both the ozone layer and the climate. A proportion of these CFCs can be safely captured and destroyed by means of high temperature, controlled incineration which destroys the CFC molecule.
Regulation and DuPont
In 1978 the United States banned the use of CFCs such as Freon in aerosol cans, the beginning of a long series of regulatory actions against their use. The critical DuPont manufacturing patent for Freon ("Process for Fluorinating Halohydrocarbons", U.S. Patent #3258500) was set to expire in 1979. In conjunction with other industrial peers DuPont formed a lobbying group, the "Alliance for Responsible CFC Policy", to combat regulations of ozone-depleting compounds. In 1986 DuPont, with new patents in hand, reversed its previous stance and publicly condemned CFCs. DuPont representatives appeared before the Montreal Protocol urging that CFCs be banned worldwide and stated that their new HCFCs would meet the worldwide demand for refrigerants.
Phasing-out of CFCs
Use of certain chloroalkanes as solvents for large scale application, such as dry cleaning, have been phased out, for example, by the IPPC directive on greenhouse gases in 1994 and by the volatile organic compounds (VOC) directive of the EU in 1997. Permitted chlorofluoroalkane uses are medicinal only.
Bromofluoroalkanes have been largely phased out and the possession of equipment for their use is prohibited in some countries like the Netherlands and Belgium, from 1 January 2004, based on the Montreal Protocol and guidelines of the European Union.
Production of new stocks ceased in most (probably all) countries in 1994. However many countries still require aircraft to be fitted with halon fire suppression systems because no safe and completely satisfactory alternative has been discovered for this application. There are also a few other, highly specialized uses. These programs recycle halon through "halon banks" coordinated by the Halon Recycling Corporation to ensure that discharge to the atmosphere occurs only in a genuine emergency and to conserve remaining stocks.
The interim replacements for CFCs are hydrochlorofluorocarbons (HCFCs), which deplete stratospheric ozone, but to a much lesser extent than CFCs. Ultimately, hydrofluorocarbons (HFCs) will replace HCFCs. Unlike CFCs and HCFCs, HFCs have an ozone depletion potential (ODP) of 0. DuPont began producing hydrofluorocarbons as alternatives to Freon in the 1980s. These included Suva refrigerants and Dymel propellants. Natural refrigerants are climate friendly solutions that are enjoying increasing support from large companies and governments interested in reducing global warming emissions from refrigeration and air conditioning.
Phasing-out of HFCs and HCFCs
Hydrofluorocarbons are included in the Kyoto Protocol and are regulated under the Kigali Amendment to the Montreal Protocol due to their very high global warming potential and the recognition of halocarbon contributions to climate change.

On September 21, 2007, approximately 200 countries agreed to accelerate the elimination of hydrochlorofluorocarbons entirely by 2020 at a United Nations-sponsored Montreal summit. Developing nations were given until 2030. Many nations, such as the United States and China, which had previously resisted such efforts, agreed to the accelerated phase-out schedule. India successfully phased out HCFCs by 2020.
Properly collecting, controlling, and destroying CFCs and HCFCs
While new production of these refrigerants has been banned, large volumes still exist in older systems and pose an immediate threat to the environment. Preventing the release of these harmful refrigerants has been ranked among the most effective actions available to mitigate catastrophic climate change.
Development of alternatives for CFCs
Work on alternatives for chlorofluorocarbons in refrigerants began in the late 1970s after the first warnings of damage to stratospheric ozone were published.
The hydrochlorofluorocarbons (HCFCs) are less stable in the lower atmosphere, enabling them to break down before reaching the ozone layer. Nevertheless, a significant fraction of HCFCs do break down in the stratosphere, and they have contributed to more chlorine buildup there than originally predicted. Later chlorine-free alternatives, the hydrofluorocarbons (HFCs), have even shorter lifetimes in the lower atmosphere. One of these compounds, HFC-134a, was used in place of CFC-12 in automobile air conditioners. Hydrocarbon refrigerants (a propane/isobutane blend) were also used extensively in mobile air conditioning systems in Australia, the US and many other countries, as they have excellent thermodynamic properties and perform particularly well in high ambient temperatures. 1,1-Dichloro-1-fluoroethane (HCFC-141b) has replaced HFC-134a due to its low ODP and GWP values, and according to the Montreal Protocol, HCFC-141b was to be phased out completely and replaced with zero-ODP substances such as cyclopentane, HFOs, and HFC-345a before January 2020. Among the natural refrigerants (along with ammonia and carbon dioxide), hydrocarbons have negligible environmental impacts; they are used worldwide in domestic and commercial refrigeration applications and are becoming available in new split-system air conditioners.
Various other solvents and methods have replaced the use of CFCs in laboratory analytics. In metered-dose inhalers (MDIs), a non-ozone-depleting substitute was developed as a propellant, known as "hydrofluoroalkane".
Development of Hydrofluoroolefins as alternatives to CFCs and HCFCs
The development of hydrofluoroolefins (HFOs) as replacements for hydrochlorofluorocarbons and hydrofluorocarbons began after the Kigali Amendment to the Montreal Protocol in 2016, which called for phasing out refrigerants with high global warming potential (GWP) and replacing them with refrigerants of lower GWP, closer to that of carbon dioxide. HFOs have an ozone depletion potential of 0.0, compared to 1.0 for the principal CFC-11, and a low GWP, which makes them environmentally safer alternatives to CFCs, HCFCs and HFCs. Hydrofluoroolefins serve as functional replacements in applications where high-GWP hydrofluorocarbons were once used. In April 2022, the EPA signed a pre-published final rule, Listing of HFO-1234yf under the Significant New Alternatives Policy (SNAP) Program for Motor Vehicle Air Conditioning in Nonroad Vehicles and Servicing Fittings for Small Refrigerant Cans. This ruling allows HFO-1234yf to take over in applications where ozone-depleting CFCs such as R-12, and high-GWP HFCs such as R-134a, were once used. The phase-out and replacement of CFCs and HFCs in the automotive industry will ultimately reduce the release of these gases to the atmosphere and in turn contribute to the mitigation of climate change.
Tracer of ocean circulation
Because the time history of CFC concentrations in the atmosphere is relatively well known, they have provided an important constraint on ocean circulation. CFCs dissolve in seawater at the ocean surface and are subsequently transported into the ocean interior. Because CFCs are inert, their concentration in the ocean interior reflects simply the convolution of their atmospheric time evolution and ocean circulation and mixing.
CFC and SF6 tracer-derived age of ocean water
Chlorofluorocarbons (CFCs) are anthropogenic compounds that have been released into the atmosphere since the 1930s in various applications such as air conditioning, refrigeration, blowing agents in foams, insulation and packing materials, propellants in aerosol cans, and solvents. The entry of CFCs into the ocean makes them extremely useful as transient tracers to estimate rates and pathways of ocean circulation and mixing processes. However, due to production restrictions on CFCs in the 1980s, atmospheric concentrations of CFC-11 and CFC-12 have stopped increasing, and the CFC-11 to CFC-12 ratio in the atmosphere has been steadily decreasing, making the dating of water masses more problematic. Meanwhile, production and release of sulfur hexafluoride (SF6) have rapidly increased in the atmosphere since the 1970s. Similar to CFCs, SF6 is an inert gas and is not affected by oceanic chemical or biological activities. Thus, using CFCs in concert with SF6 as a tracer resolves the dating issues caused by decreased CFC concentrations.
Using CFCs or SF6 as a tracer of ocean circulation allows for the derivation of rates for ocean processes due to the time-dependent source function. The elapsed time since a subsurface water mass was last in contact with the atmosphere is the tracer-derived age. Estimates of age can be derived based on the partial pressure of an individual compound and the ratio of the partial pressure of CFCs to each other (or SF6).
Partial pressure and ratio dating techniques
The age of a water parcel can be estimated by the CFC partial pressure (pCFC) age or SF6 partial pressure (pSF6) age. The pCFC age of a water sample is defined as:
pCFC = [CFC] / F(T, S)
where [CFC] is the measured CFC concentration (pmol kg−1) and F is the solubility of CFC gas in seawater as a function of temperature and salinity. The CFC partial pressure is expressed in units of 10−12 atmospheres, or parts-per-trillion (ppt). The solubilities of CFC-11 and CFC-12 were measured by Warner and Weiss; additionally, the solubility of CFC-113 was measured by Bu and Warner, and that of SF6 by Wanninkhof et al. and Bullister et al. These authors expressed the solubility (F) at a total pressure of 1 atm as:
ln F = a1 + a2(100/T) + a3 ln(T/100) + a4(T/100)^2 + S[b1 + b2(T/100) + b3(T/100)^2],
where F = solubility expressed in either mol l−1 atm−1 or mol kg−1 atm−1,
T = absolute temperature,
S = salinity in parts per thousand (ppt),
a1, a2, a3, a4, b1, b2, and b3 are constants determined from a least squares fit to the solubility measurements. This equation is derived from the integrated Van 't Hoff equation and the logarithmic Setchenow salinity dependence. It can be noted that the solubility of CFCs increases with decreasing temperature, at approximately 1% per degree Celsius. Once the partial pressure of the CFC (or SF6) is derived, it is compared to the atmospheric time histories for CFC-11, CFC-12, or SF6, and the pCFC is matched to the year with the same atmospheric value. The difference between the corresponding date and the collection date of the seawater sample is the average age of the water parcel. The age of a parcel of water can also be calculated using the ratio of two CFC partial pressures, or the ratio of the SF6 partial pressure to a CFC partial pressure.
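The dating procedure above can be sketched in code. This is a minimal illustration only: the coefficient values and the atmospheric history below are placeholders, not the published Warner and Weiss fits or real atmospheric records.

```python
import math

# Placeholder coefficients (NOT the published Warner and Weiss values).
A = (-230.0, 320.0, 110.0, -1.4)   # a1..a4
B = (-0.1, 0.05, -0.007)           # b1..b3

def solubility(T, S, a=A, b=B):
    """F(T, S) from ln F = a1 + a2(100/T) + a3 ln(T/100) + a4(T/100)^2
    + S[b1 + b2(T/100) + b3(T/100)^2]; T in kelvin, S in ppt."""
    t = T / 100.0
    ln_f = (a[0] + a[1] / t + a[2] * math.log(t) + a[3] * t * t
            + S * (b[0] + b[1] * t + b[2] * t * t))
    return math.exp(ln_f)

def pcfc(cfc, T, S):
    """Partial pressure pCFC = [CFC] / F(T, S)."""
    return cfc / solubility(T, S)

def tracer_age(p_sample, history, sample_year):
    """Match the sample's partial pressure to the year whose atmospheric
    value is closest; the age is the time elapsed since that year."""
    match = min(history, key=lambda year: abs(history[year] - p_sample))
    return sample_year - match

# Hypothetical atmospheric history (ppt by year), for illustration only.
history = {1970: 60.0, 1980: 170.0, 1990: 260.0, 2000: 261.0}
# A parcel sampled in 2000 whose pCFC best matches the 1990 value
# is dated at about 10 years old.
age = tracer_age(258.0, history, sample_year=2000)  # 1990 is closest -> 10
```

The ratio-dating variant mentioned above would compare the ratio of two tracer partial pressures against the history of that ratio in the same way, which sidesteps some biases from mixing and undersaturation at the time of water-mass formation.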
Safety
According to their material safety data sheets, CFCs and HCFCs are colorless, volatile, non-toxic liquids and gases with a faintly sweet ethereal odor. Overexposure at concentrations of 11% or more may cause dizziness, loss of concentration, central nervous system depression or cardiac arrhythmia. Vapors displace air and can cause asphyxiation in confined spaces. Dermal absorption of chlorofluorocarbons is possible but low. Pulmonary uptake of inhaled chlorofluorocarbons occurs quickly, with peak blood concentrations reached in as little as 15 seconds and steady concentrations after 20 minutes. Absorption of orally ingested chlorofluorocarbons is 35–48 times lower than by inhalation.
Although non-flammable, their combustion products include hydrofluoric acid and related species.
Normal occupational exposure is rated at 0.07% and does not pose any serious health risks.
References
External links
Gas conversion table
Nomenclature FAQ
Class I Ozone-Depleting Substances
Class II Ozone-Depleting Substances (HCFCs)
History of halon-use by the US Navy Archived 2000-08-19 at the Wayback Machine
Process using pyrolysis in an ultra high temperature plasma arc, for the elimination of CFCs Archived 2016-04-15 at the Wayback Machine
Freon in car A/C
Phasing out halons in extinguishers |
Salp
A salp (plural salps, also known colloquially as “sea grape”) or salpa (plural salpae or salpas) is a barrel-shaped, planktonic tunicate in the family Salpidae. It moves by contracting, thereby pumping water through its gelatinous body, one of the most efficient examples of jet propulsion in the animal kingdom. The salp strains the pumped water through its internal feeding filters, feeding on phytoplankton.
Distribution
Salps are common in equatorial, temperate, and cold seas, where they can be seen at the surface, singly or in long, stringy colonies. The most abundant concentrations of salps are in the Southern Ocean (near Antarctica), where they sometimes form enormous swarms, often in deep water, and are sometimes even more abundant than krill. Since 1910, while krill populations in the Southern Ocean have declined, salp populations appear to be increasing. Salps have been seen in increasing numbers along the coast of Washington.
Life cycle
Salps have a complex life cycle, with an obligatory alternation of generations. Both portions of the life cycle exist together in the seas—they look quite different, but both are mostly transparent, tubular, gelatinous animals that are typically between 1 and 10 cm (0.4 and 3.9 in) long. The solitary life history phase, also known as an oozooid, is a single, barrel-shaped animal that reproduces asexually by producing a chain of tens to hundreds of individuals, which are released from the parent at a small size.
The chain of salps is the 'aggregate' portion of the life cycle. The aggregate individuals are also known as blastozooids; they remain attached together while swimming and feeding, and each individual grows in size. Each blastozooid in the chain reproduces sexually (the blastozooids are sequential hermaphrodites, first maturing as females, and are fertilized by male gametes produced by older chains), with a growing embryo oozooid attached to the body wall of the parent. The growing oozooids are eventually released from the parent blastozooids, and then continue to feed and grow as the solitary asexual phase, closing the life cycle of salps. The alternation of generations allows for a fast generation time, with both solitary individuals and aggregate chains living and feeding together in the sea. When phytoplankton is abundant, this rapid reproduction leads to fairly short-lived blooms of salps, which eventually filter out most of the phytoplankton. The bloom ends when enough food is no longer available to sustain the enormous population of salps. Occasionally, mushroom corals and those of the genus Heteropsammia are known to feed on salps during blooms.
History
The incursion of a large number of salps (Salpa fusiformis) into the North Sea in 1920 led to a failure of the Scottish herring fishery.
Oceanographic importance
A reason for the success of salps is how they respond to phytoplankton blooms. When food is plentiful, salps can quickly bud off clones, which graze on the phytoplankton and can grow at a rate which is probably faster than that of any other multicellular animal, quickly stripping the phytoplankton from the sea. But if the phytoplankton is too dense, the salps can clog and sink to the bottom. During these blooms, beaches can become slimy with mats of salp bodies, and other planktonic species can experience fluctuations in their numbers due to competition with the salps.
Sinking fecal pellets and bodies of salps carry carbon to the seafloor, and salps are abundant enough to have an effect on the ocean's biological pump. Consequently, large changes in their abundance or distribution may alter the ocean's carbon cycle, and potentially play a role in climate change.
Nervous systems and relationships to other animals
Salps are closely related to the pelagic tunicate groups Doliolida and Pyrosoma, as well as to other bottom-living (benthic) tunicates.
Although salps appear similar to jellyfish because of their simple body form and planktonic behavior, they are chordates: animals with dorsal nerve cords, related to vertebrates (animals with backbones).
Small fish swim inside salps as protection from predators.
Classification
The World Register of Marine Species lists the following genera and species in the order Salpida:
Order Salpida
Family Salpidae
Subfamily Cyclosalpinae
Genus Cyclosalpa de Blainville, 1827
Cyclosalpa affinis (Chamisso, 1819)
Cyclosalpa bakeri Ritter, 1905
Cyclosalpa foxtoni Van Soest, 1974
Cyclosalpa ihlei van Soest, 1974
Cyclosalpa pinnata (Forskål, 1775)
Cyclosalpa polae Sigl, 1912
Cyclosalpa quadriluminis Berner, 1955
Cyclosalpa sewelli Metcalf, 1927
Cyclosalpa strongylenteron Berner, 1955
Genus Helicosalpa Todaro, 1902
Helicosalpa komaii (Ihle & Ihle-Landenberg, 1936)
Helicosalpa virgula (Vogt, 1854)
Helicosalpa younti Kashkina, 1973
Subfamily Salpinae
Genus Brooksia Metcalf, 1918
Brooksia berneri van Soest, 1975
Brooksia rostrata (Traustedt, 1893)
Genus Ihlea Metcalf, 1919
Ihlea magalhanica (Apstein, 1894)
Ihlea punctata (Forskål, 1775)
Ihlea racovitzai (van Beneden & Selys Longchamp, 1913)
Genus Metcalfina
Metcalfina hexagona (Quoy & Gaimard, 1824)
Genus Pegea Savigny, 1816
Pegea bicaudata (Quoy & Gaimard, 1826)
Pegea confederata (Forsskål, 1775)
Genus Ritteriella Metcalf, 1919
Ritteriella amboinensis (Apstein, 1904)
Ritteriella picteti (Apstein, 1904)
Ritteriella retracta (Ritter, 1906)
Genus Salpa Forskål, 1775
Salpa aspera Chamisso, 1819
Salpa fusiformis Cuvier, 1804
Salpa gerlachei Foxton, 1961
Salpa maxima Forskål, 1775
Salpa thompsoni (Foxton, 1961)
Salpa tuberculata Metcalf, 1918
Salpa younti van Soest, 1973
Genus Soestia (also accepted as Iasis)
Soestia cylindrica (Cuvier, 1804)
Soestia zonaria (Pallas, 1774)
Genus Thalia
Thalia cicar van Soest, 1973
Thalia democratica Forskål, 1775
Thalia longicauda Quoy & Gaimard, 1824
Thalia orientalis Tokioka, 1937
Thalia rhinoceros Van Soest, 1975
Thalia rhomboides Quoy & Gaimard, 1824
Thalia sibogae Van Soest, 1973
Genus Thetys Tilesius, 1802
Thetys vagina Tilesius, 1802
Genus Traustedtia
Traustedtia multitentaculata Quoy & Gaimard, 1834
Genus Weelia Yount, 1954
Weelia cylindrica (Cuvier, 1804)
References
External links
Plankton Chronicles Short documentary films & photos
Pelagic tunicates (including salps) overview
Scientific expedition to study salps near Antarctica - many details, with interviews, photos, videos, graphs
Sludge of slimy organisms coats beaches of New England Boston Globe October 9, 2006
The salps on earthlife.net
The role of salps in the study of origin of the vertebrate brain
Jellyfish-like Creatures May Play Major Role In Fate Of Carbon Dioxide In The Ocean, ScienceDaily.com, July 2, 2006
"Ocean 'Gummy Bears' Fight Global Warming", LiveScience.com, July 20, 2006
How salps might help counteract global warming BBC News, September 26, 2007
Jelly blobs may hold key to climate change ABC Radio, The World Today - Monday, 17 November 2008
Salp Fact Sheet |
Environmental threats to the Great Barrier Reef
The Great Barrier Reef, the world's largest reef system, stretches along the east coast of Australia from the northern tip of Cape York to the town of Bundaberg. It is composed of roughly 2,900 individual reefs and 940 islands and cays that stretch for 2,300 kilometres (1,616 mi) and cover an area of approximately 344,400 square kilometres (133,000 sq mi). The reef is located in the Coral Sea, off the coast of Queensland in northeast Australia. A large part of the reef is protected by the Great Barrier Reef Marine Park.
According to the 2014 report of the Australian Government's Great Barrier Reef Marine Park Authority (GBRMPA), climate change is the most significant environmental threat to the Great Barrier Reef, while the other major environmental pressures are listed as decreased water quality from land-based runoff, impacts from coastal development and some persistent impacts from fishing activities. The reef is also threatened by storms, coral bleaching and ocean acidification. The 2014 report also shows that, while numerous marine life species have recovered after previous declines, the strength of the dugong population is continuing to decline. Terry Hughes, Federation Fellow, ARC Centre of Excellence for Coral Reef Studies at James Cook University, wrote in a 14 August 2014 Conversation piece that harmful government policies and ongoing conflicts of interest over mining royalties are risks of an equivalent magnitude.

The GBRMPA considers climate change, poor water quality, coastal development, and some impacts from fishing to be the area's major threats, but reef scientists Jon Day, Bob Pressey, Jon Brodie and Hughes stated that the "cumulative effects of many combined impacts" are the real issue. In a Conversation article, Mathieu Mongin, a biogeochemical modeller at CSIRO, and colleagues mapped the parts of the Great Barrier Reef that are most exposed to ocean acidification. This map of pH on the Great Barrier Reef presents the exposure to ocean acidification on each of the 3,581 reefs, providing managers with the information they need to tailor management to individual reefs. The Great Barrier Reef is not a singular reef nor a physical barrier that prevents exchange between reefs; it is a mixture of thousands of productive reefs and shallow areas lying on a continental shelf with complex oceanic circulation.
In March 2022, UNESCO launched a monitoring mission to assess the impact of pollution, fishing, climate change and coral bleaching. The report concluded that the Great Barrier Reef should be included on the list of World Heritage in Danger, which would probably have had an impact on tourism. In May 2023, after years of warnings from UNESCO, Australian Environment Minister Tanya Plibersek promised in a letter to UNESCO Director Audrey Azoulay a "combined investment of A$4.4 billion" to protect the reef. In the letter, Australia committed to following UNESCO's recommendations, creating no-fishing zones in one third of the site by the end of 2024, completely banning gillnetting by 2027 and meeting targets for improving water quality by 2025. The Albanese government has pledged to set targets for reducing CO2 emissions, in order to align with the objective of limiting global temperature rise to 1.5 °C.
History
In 1967, efforts began to conserve the Great Barrier Reef when it was proposed to mine lime from Ellison Reef; surveys showed that the reef supported a diverse community of corals and fish. The Australian and Queensland Governments committed to act in partnership in 2007 to protect the reef, and water quality monitoring programmes were implemented. However, the World Wildlife Fund criticised the slow progress of the governments, raising a concern that as many as 700 reefs continued to be at risk from sediment runoff.

The World Heritage Committee has raised concerns about the Great Barrier Reef since 2010. The Australian government outlined further action after the World Heritage Committee (WHC) called for the completion of a strategic assessment of the Reef area in 2011. The committee also urged the government to use the assessment data to develop a long-term plan for protecting the "Outstanding Universal Value" of the reef, which is the basis for its World Heritage listing. Again, criticisms emerged from the expert community (due to vague quantitative targets, the absence of clear, specific strategies, and no mention of the implications of climate change), but the significant efforts of both state and federal governments addressed key recommendations from the WHC.

A 2012 UNESCO report, published by the WHC, then criticised the government's management of the Great Barrier Reef, warning that the area could be downgraded to a world heritage site "in danger" unless major changes were implemented. The report expressed "extreme concern" at the rapid rate of coastal development, highlighting the construction of liquefied natural gas plants at Gladstone and Curtis Island, and recommended that thorough assessments be made before any new developments that could affect the reef are approved.
UNESCO specifically recommended no new port development in Abbot Point, Gladstone, Hay Point, Mackay and Townsville.

Minister for Foreign Affairs Julie Bishop informed the Australian media that she would use climate change talks, held in Lima, Peru, in December 2014, to avoid the WHC, consisting of experts from 20 nations, applying the "in danger" listing in 2015. Bishop believed that "no justification" existed for the downgrading:
It would send a message around the world that even if you meet all of the criteria set out by the world heritage committee, there is still a risk that they will place an area on the in-danger list ... It [downgrading] would have significant implications for Australia but it would also set a very dangerous precedent for countries who don’t have the opportunity to take the action that Australia has.
To avoid the Great Barrier Reef being listed as "in danger", the Queensland Government introduced the Ports Bill 2014 on 25 November 2014. The Bill seeks to restrict further port development along the coast to Brisbane and four "Priority Port Development Areas", the latter including the four ports identified by the WHC in its 2012 report. The Bill also restricts dredging over a decade-long period, with the exception of priority ports. Additionally, a long-term sustainability plan and an expansion of water-quality activities were introduced by state and federal governments and their partner agencies.

However, in response to the Ports Bill, University of Queensland (UQ) academics said on 19 December 2014 that, although the issues are "not insurmountable", "the health of the reef is still declining and consequently more needs to be done." Australian Marine Conservation Society (AMCS) Great Barrier Reef campaign director Felicity Wishart was more damning, stating in a press release:
The new Ports Bill fails to rule out any currently proposed new dredging, the dumping of dredge spoil in the Reef's waters and is silent on maintenance dredging across the region. The millions of tonnes of dredging and dumping for mega port developments that are in the pipeline will be able to go ahead under the Bill. Despite the establishment of four Priority Port Development Areas along the Reef (Townsville, Abbot Point, Mackay/Hay Point and Gladstone), the Bill will still allow port expansion in Cairns. This fails to meet the recommendation by the World Heritage Committee that no new port developments be permitted outside of the existing port areas. The Bill contains no protections for the most northern section of the Reef or the Fitzroy Delta, and it does nothing to improve water quality in Reef waters, all matters which the World Heritage Committee wants action on.
UNESCO considers that the Reef 2050 Long-Term Sustainability Plan has been effective, noting that progress had been made to reduce agricultural runoff sediments.
Water quality
Water quality was first identified as a threat to the Great Barrier Reef in 1989.
Thirty "major rivers" and hundreds of small streams comprise the Great Barrier Reef catchment area, which covers 423,000 square kilometres (163,000 sq mi) of land. Queensland has several major urban centers on the coast including Cairns, Townsville, Mackay, Rockhampton and the industrial city of Gladstone. Dredging in the Port of Gladstone is raising concern after dead and diseased fish where found in the harbor. Cairns and Townsville are the largest of the coastal cities, with populations of approximately 150,000 each.There are many major water quality variables affecting coral reef health including water temperature, salinity, nutrients, suspended sediment concentrations, and pesticides. The species in the Great Barrier Reef area are adapted to tolerable variations in water quality however when critical thresholds are exceeded they may be adversely impacted. River discharges are the single biggest source of nutrients, providing significant pollution of the Reef during tropical flood events with over 90% of this pollution being sourced from farms. When the 2019 Townsville flood waters reached the Great Barrier Reef, the flood plumes covered a large area of corals, even reaching 60 km out to sea, however water quality analysis showed damage to the outer reef from the flooding was less than previously feared.As of 1995, water visibility had decreased to 10 metres.Due to the range of human uses made of the water catchment area adjacent to the Great Barrier Reef, some 700 of the 3,000 reefs are within a risk zone where water quality has declined due to the naturally acidic sediment and chemical runoff from farming. Coastal development and the loss of coastal wetlands—the latter acts as natural filter—are also major factors From mid 2012 to mid 2016, 596,000 hectares of forest in the catchment zone was cleared.
Industries in the water catchment area include cotton growing, comprising approximately 262 km2 (101 sq mi); 340 dairy farms with an average area of 2 km2 (0.77 sq mi) each; 158 km2 (61 sq mi) of cattle grazing; 288 km2 (111 sq mi) of horticulture, including banana growing; sugarcane farming; and cropping of approximately 8,000 km2 (3,100 sq mi) of wheat, 1,200 km2 (460 sq mi) of barley, and 6,000 to 7,000 km2 of sorghum and maize. Fertiliser use in the cotton, dairy, beef, horticulture and sugar industries is essential to ensure productivity and profitability. However, fertiliser and byproducts from sugar cane harvesting methods form a component of surface runoff into the Great Barrier Reef lagoon.

The principal agricultural activity in the wet tropics is sugar cane farming, while cattle grazing is the primary industry in the dry tropics regions. Both are considered significant threats to high water quality. Copper, a common industrial pollutant in the waters of the Great Barrier Reef, has been shown to interfere with the development of coral polyps.

Flood plumes are flooding events associated with higher levels of nitrogen and phosphorus. In February 2007, due to a monsoonal climate system, plumes of sediment runoff were observed reaching the outermost regions of the reef. Runoff is especially concerning in the region south of Cairns, as it receives over 3,000 millimetres (120 in) of rain per year and the reefs are less than 30 kilometres (19 mi) from the coastline. Farm runoff is polluted as a result of overgrazing and excessive fertilizer and pesticide use. Mud pollution has increased by 800% and inorganic nitrogen pollution by 3,000% since the introduction of European farming practices on the Australian landscape.
This pollution has been linked to a range of very significant risks to the reef system, including intensified outbreaks of the coral-eating crown-of-thorns starfish, which contributed to a loss of 66% of live coral cover on sampled reefs in 2000. The mechanism by which excess nutrients affect the reefs is thought to be increased light and oxygen competition from algae; however, unless herbivory is unusually low, this will not create a phase shift from the Great Barrier Reef being primarily made up of coral to being primarily made up of algae.
It has been suggested that poor water quality due to excess nutrients encourages the spread of infectious diseases among corals. In general, the Great Barrier Reef is considered to have a low incidence of coral diseases. Skeletal eroding band, a disease of bony corals caused by the protozoan Halofolliculina corallasia, affects 31 species of corals from six families on the reef. The long-term monitoring program has found an increase in incidences of coral disease in the period 1999–2002, although it disputes the claim that coral diseases on the Great Barrier Reef are caused by anthropogenic pollution.

Elevated nutrient concentrations result in a range of impacts on coral communities and under extreme conditions can result in a collapse. They also affect coral by promoting phytoplankton growth, which increases the number of filter-feeding organisms that compete for space. Excessive inputs of sediment from land can lead to reef destruction through burial, disruption of recruitment success, or deleterious community shifts. Sediments affect coral by smothering them when particles settle out, reducing light availability and potentially reducing photosynthesis and growth. Coral reefs exist in seawater salinities from 25 to 42 parts per thousand; salinity impacts on corals are increased by other flood-related stresses.

While runoff from farms has historically been considered the main water quality stressor to the Great Barrier Reef, pollutants are also leaching out of underground aquifers into the reef system.
Pollution from mining
A freedom of information request by the Northern Queensland Conservation Council in 2014 showed that Queensland Nickel discharged nitrate-laden water into the Great Barrier Reef in 2009 and 2011—releasing 516 tonnes (508 long tons; 569 short tons) of toxic waste water on the latter occasion. In June 2012, Queensland Nickel stated it intended to release waste water, continuously for three months, "at least 100 times the allowed maximum level as well as heavy metals and other contaminants". A GBRMPA briefing stated the company had "threatened a compensation claim of $6.4bn should the GBRMPA intend to exert authority over the company's operations". In response to the publicisation of the dumping incidents, the GBRMPA stated:
We have strongly encouraged the company to investigate options that do not entail releasing the material to the environment and to develop a management plan to eliminate this potential hazard; however, GBRMPA does not have legislative control over how the Yabulu tailings dam is managed.
Dumping
Following a tour of the Great Barrier Reef area by WHC members, a 2012 UNESCO report, which criticised management of the Great Barrier Reef, specifically recommended no new port development outside the established areas of Abbot Point, Gladstone, Hay Point/Mackay and Townsville. However, in December 2013, Greg Hunt, the Australian environment minister, approved a plan for dredging to create three shipping terminals as part of the construction of a coal port. According to the corresponding approval documents, the process will create around 3 million cubic metres of dredged seabed that will be dumped within the Great Barrier Reef marine park area.
On 31 January 2014, the GBRMPA issued a dumping permit that allows three million cubic metres of sea bed from Abbot Point, north of Bowen, to be transported and unloaded in the waters of the Great Barrier Reef Marine Park. Potential significant harms have been identified in relation to dredge spoil and the process of churning up the sea floor in the area and exposing it to air. Firstly, new research shows the finer particles of dredge spoil can cloud the water and block sunlight, thereby starving sea grass and coral up to distances of 80 km away from the point of origin due to the actions of wind and currents; furthermore, dredge spoil can literally smother reef or sea grass to death, while storms can repeatedly resuspend these particles so that the harm caused is ongoing. Secondly, disturbed sea floor can release toxic substances into the surrounding environment.

The dredge spoil from the Abbot Point port project is to be dumped 24 kilometres (15 mi) away, near Bowen in north Queensland, and the approval from the Authority will result in the production of an extra 70 million tonnes of coal annually, worth between A$1.4 billion and $2.8 billion. Authority chairman Dr Russell Reichelt stated after the confirmation of the approval:
This approval is in line with the agency's view that port development along the Great Barrier Reef coastline should be limited to existing ports. As a deepwater port that has been in operation for nearly 30 years, Abbot Point is better placed than other ports along the Great Barrier Reef coastline to undertake expansion as the capital and maintenance dredging required will be significantly less than what would be required in other areas. It's important to note the seafloor of the approved disposal area consists of sand, silt and clay and does not contain coral reefs or seagrass beds.
The approval was provided with a corresponding set of 47 new environmental conditions that include the following:
A long-term water quality monitoring plan extending five years after the disposal activity is completed.
A heritage management plan to protect the Catalina second world war aircraft wreck in Abbot Bay.
The establishment of an independent dredging and disposal technical advice panel and a management response group, to include community representatives.

Numerous responses, including online petitions, were published in opposition to the proposal: Greenpeace launched the "Save the Reef" campaign in opposition to the dumping, which remained active with over 170,000 signatures on 3 March 2014; in addition to an online petition that registered more than 250,000 signatures on 3 March 2014, political activist group GetUp! are also funding a legal case in conjunction with the non-profit Environmental Defenders Office of Queensland (EDO), which represents the North Queensland Conservation Council; and "Fight for the Reef", a partnership between World Wide Fund for Nature (WWF)-Australia and the Australian Marine Conservation Society (AMCS), maintains a campaign that collects online donations to fund a "legal fighting team", and displayed nearly 60,000 supporters on its website on 11 May 2014.

The legal fighting team of the WWF-Australia and the AMCS received further support in April 2014 following the release of the "Sounds for the Reef" musical fundraising project. Produced by Straightup, the digital album features artists such as John Butler, The Herd, Sietta, Missy Higgins, The Cat Empire, Fat Freddys Drop, The Bamboos (featuring Kylie Auldist) and Resin Dogs. Released on 7 April, the album's 21 songs were sold on the Bandcamp website.

Further support for the WWF-Australia and AMCS partnership occurred in late April 2014, when the Ben & Jerry's ice cream company signed onto the "Fight for the Reef" campaign. In early April 2014, the company withdrew the popular "Phish Food" flavour in Australia due to the aquatic association and the potential for awareness-raising.
The product withdrawal decision followed tours around select parts of the nation that involved Ben & Jerry's representatives distributing free ice cream to highlight the reef damage issue.

In response, Minister for the Environment and Heritage Protection of Queensland Andrew Powell said that he would be contacting parent corporation Unilever, explaining, "The only people taking a scoop out of the reef is Ben and Jerry's and Unilever. If you understand the facts, you'd want to be boycotting Ben and Jerry's". The Australian public was also informed by Australian Ben & Jerry's brand manager Kalli Swaik, who stated to the Brisbane Times newspaper: "Ben & Jerry's believes that dredging and dumping in world heritage waters surrounding the marine park area will be detrimental to the reef ecology. It threatens the health of one of Australia's most iconic treasures."

A Queensland state senator, Matthew Canavan, confirmed that he raised the issue in writing with the Australian Competition & Consumer Commission (ACCC) and said to The Courier-Mail:
Ben & Jerry's can campaign on whatever issue they like but as a company they have an obligation to tell Australians the whole truth and nothing but the truth. Australia has strict laws to protect consumers against misleading and deceptive behavior. These mistruths could cost jobs and development in regional Queensland. It's irresponsible behavior from a company that should know better.
In 2015, the mining industry generated proposals for five additional port developments outside the existing ports. In response, the Great Barrier Reef Marine Park Authority proposed a ban on disposal of all capital works dredge spoil in the GBR Marine Park, including the yet-to-be-commenced Abbot Point development. This regulation change to the Great Barrier Reef Marine Park Act 1975 was enacted in 2015.
After Cyclone Debbie in 2017, the Adani-operated port released water with eight times the permitted sediment into the Abbot Point lagoon.
Climate change
According to the GBRMPA in 2014, the most significant threat to the status of the Great Barrier Reef is climate change, due to the consequential rise of sea temperatures, gradual ocean acidification and an increase in the number of "intense weather events". Hughes writes of "the demonstrable failure of the state and Commonwealth" to address the issue of climate change in his August 2014 article. Furthermore, a temperature rise of between two and three degrees Celsius would result in 97% of the Great Barrier Reef being bleached every year.

Reef scientist Terry Done has predicted that a one-degree rise in global temperature would result in 82% of the reef bleached, two degrees would result in 97%, while three degrees would result in "total devastation". A predictive model based on the 1998 and 2002 bleaching events has concurred that a temperature rise of three degrees would result in total coral mortality.

However, a few scientists hold that coral bleaching may in some cases be less of a problem than the mainstream believes. Professor Peter Ridd, from James Cook University in Townsville, was quoted in The Australian (a conservative newspaper) as saying: "They are saying bleaching is the end of the world, but when you look into it, that is a highly dubious proposition". Research by scientist Ray Berkelmans "... has documented astonishing levels of recovery on the Keppel outcrops devastated by bleaching in 2006." A related article in The Australian goes on to explain that "Those that expel their zooxanthellae have a narrow opening to recolonise with new, temperature-resistant algae before succumbing. In the Keppels in 2006, Berkelmans and his team noticed that the dominant strain of zooxanthellae changed from light and heat-sensitive type C2, to more robust types D and C1."

Nevertheless, most coral reef researchers anticipate severely negative effects from climate change already occurring, and potentially disastrous effects as climate change worsens.
The future of the Reef may well depend on how much the planet's climate changes, and thus on how high atmospheric greenhouse gas concentration levels are allowed to rise. On 2 September 2009, a report by the Australian Great Barrier Reef Marine Park Authority revealed that if carbon dioxide levels reached 450 parts per million, corals and reef habitats would be highly vulnerable. If carbon dioxide levels are managed at or below 380 parts per million, they will be only moderately vulnerable and the reefs will remain coral-dominated.

Global warming may have triggered the collapse of reef ecosystems throughout the tropics. Increased global temperatures are thought by some to bring more violent tropical storms, but reef systems are naturally resilient and recover from storm battering. Most people agree that an upward trend in temperature will cause much more coral bleaching; others suggest that while reefs may die in certain areas, other areas will become habitable for corals, and new reefs will form. However, the rate at which mass bleaching events occur is estimated to be much faster than reefs can recover from, or adjust to.

Kleypas et al., in their 2006 report, suggest that the trend towards ocean acidification indicates that as the sea's pH decreases, corals will become less able to secrete calcium carbonate. In 2009, a study showed that Porites corals, the most robust on the Great Barrier Reef, have slowed their growth by 14.2% since 1990. It suggested that the cause was heat stress and a lower availability of dissolved calcium to the corals.

Climate change has implications for other forms of life on the Great Barrier Reef as well. Some fishes' preferred temperature ranges lead them to seek new areas to live, thus causing chick mortality in seabirds that prey on the fish. Also, in sea turtles, higher temperatures mean that the sex ratio of their populations will change, as the sex of sea turtles is determined by temperature.
The habitat of sea turtles will also shrink. A 2018 study which compared the sex ratio of green sea turtles in the northern and southern populations found that the northern population was almost all female. On 22 April 2018, scientists expressed alarm that the impact of climate change could cause massive damage to the ecosystem.

The Great Barrier Reef "glue" is at risk from ocean acidification: a study in 2020 argues that in the present-day context of rapid global climate change, changes in dissolved carbon dioxide, pH and temperature could lead to reduced microbial crust formation, thereby weakening reef frameworks in the future. The study involved extensive sampling of the Great Barrier Reef fossil record and showed that the calcified scaffolds that help stabilise and bind its structure become thinner and weaker as pH levels fall.
Crown-of-thorns starfish
The crown-of-thorns starfish is a coral reef predator which preys on coral polyps by climbing onto them, extruding its stomach over them, and releasing digestive enzymes to absorb the liquefied tissue. An individual adult of this species can eat up to six square metres of living reef in a single year. Geological evidence suggests that the crown-of-thorns starfish has been part of the Great Barrier Reef's ecology for "at least several thousand years", but there is no geological evidence for crown-of-thorns outbreaks; the first known outbreak occurred during the 1960s. Large outbreaks of these starfish can devastate reefs. In 2000, an outbreak contributed to a loss of 66% of live coral cover on sampled reefs in a study by the CRC Reefs Research Centre. Although large outbreaks of these starfish are believed to occur in natural cycles, human activity in and around the Great Barrier Reef can worsen their effects. Reduction of water quality associated with agriculture can cause crown-of-thorns starfish larvae to thrive: fertiliser runoff from farming increases the amount of phytoplankton available for the larvae to consume. A study by the Australian Institute of Marine Science showed that a doubling of the chlorophyll in the water leads to a tenfold increase in the survival rate of crown-of-thorns starfish larvae. Overfishing of its natural predators, such as the giant triton, is also considered to contribute to an increase in the number of crown-of-thorns starfish. The CRC Reef Research Centre defines an outbreak as more than 30 adult starfish in an area of one hectare. There have been three large outbreaks of crown-of-thorns starfish on the reef since observation began: between 1962 and 1976, between 1978 and 1991, and between 1993 and 2005; a fourth began in 2009. Investigation is being undertaken into mimicking a chemical scent released by the starfish's natural predator, the giant triton snail.
Overfishing
The unsustainable overfishing of keystone species, such as the Giant Triton and sharks, can cause disruption to food chains vital to life on the reef. Fishing also impacts the reef through increased pollution from boats, bycatch of unwanted species (such as dolphins and turtles) and reef habitat destruction from trawling, anchors and nets. Overfishing of herbivore populations can cause algal growths on reefs. The batfish Platax pinnatus has been observed to significantly reduce algal growths in studies simulating overfishing. Sharks are fished for their meat, and when they are part of bycatch, it is common to kill the shark and throw it overboard, as there is a belief that they interfere with fishing. As of 1 July 2004, approximately one-third of the Great Barrier Reef Marine Park is protected from species removal of any kind, including fishing, without written permission. However, illegal poaching is not unknown in these no-take zones. A 2015 study into coral trout on the Great Barrier Reef found that the no-take zones had more coral trout and more coral trout larvae after tropical cyclone events, which helped replenish those areas sooner. The GBRMPA has a hotline to report suspected poachers.
Shark culling
The government of Queensland has a "shark control" program (shark culling) that deliberately kills sharks in Queensland, including in the Great Barrier Reef. Environmentalists and scientists say that this program harms the marine ecosystem; they also say it is "outdated, cruel and ineffective". The Queensland "shark control" program uses shark nets and drum lines with baited hooks to kill sharks in the Great Barrier Reef – as of 2018, there are 173 lethal drum lines in the Great Barrier Reef. In Queensland, sharks found alive on the baited hooks are shot. Queensland's "shark control" program killed about 50,000 sharks from 1962 to 2018. In addition, Queensland's "shark control" program has killed many other animals (such as dolphins and turtles) – the program killed 84,000 marine animals from 1962 to 2015, including in the Great Barrier Reef.In 2018, Humane Society International filed a lawsuit against the government of Queensland to stop shark culling in the Great Barrier Reef. In 2019, a court (tribunal) said the lethal practices had to stop, but the government of Queensland resumed shark-killing in the Great Barrier Reef when they appealed the decision. The litigation is ongoing.
Shipping
Shipping accidents continue to be perceived as a threat, as several commercial shipping routes pass through the Great Barrier Reef. The GBRMPA estimates that about 6,000 vessels greater than 50 metres (164 ft) in length use the Great Barrier Reef as a route. From 1985 to 2001, 11 collisions and 20 groundings occurred along the Great Barrier Reef shipping route, with human error identified as the leading cause of shipping accidents. Reef pilots have stated that they consider the reef route safer than outside the reef in the event of mechanical failure, since a ship can sit safely while being repaired. The inner route is used by 75% of all ships that travel over the Great Barrier Reef. As of 2007, over 1,600 known shipwrecks have occurred in the Great Barrier Reef region.

Waste and foreign species discharged from ships in ballast water (when purging procedures are not followed) are a biological hazard to the Great Barrier Reef. Tributyltin (TBT) compounds found in some antifouling paint on ship hulls leach into seawater and are toxic to marine organisms and humans; as of 2002, efforts are underway to restrict their use.

In April 2010, the bulk coal carrier Shen Neng 1 ran aground on the Great Barrier Reef, causing the largest grounding scar to date. The spill damaged a 400,000-square-metre section of the Great Barrier Reef, and the use of oil dispersant resulted in oil spreading to reef islands 25 km away. In 2012 and 2013, there were 9,619 ship voyages through the Great Barrier Reef region, and this is forecast to increase 250% over the next 20 years.
Oil
It was suspected that the Great Barrier Reef is the cap to an oil trap after a 1923 paper suggested that it had the right rock formation to support "oilfields of great magnitude". After the Commonwealth Petroleum Search Subsidies Act of 1957, exploration activities increased in Queensland, including a well drilled at Wreck Island in the southern Great Barrier Reef in 1959. In the 1960s, drilling for oil and gas was investigated throughout the Great Barrier Reef, by seismic and magnetic methods in the Torres Strait, along "the eastern seaboard of Cape York to Princess Charlotte Bay" and along the coast from Cooktown to Fraser Island. In the late 1960s, more exploratory wells were drilled near Wreck Island in the Capricorn Channel, and near Darnley Island in the Torres Strait, but "all results were dry".

In 1970, responding to concern about oil spills such as that of the Torrey Canyon, two Royal Commissions were ordered "into exploratory and production drilling for petroleum in the area of the Great Barrier Reef". After the Royal Commissions, the federal and state governments ceased allowing petroleum drilling on the Great Barrier Reef. A study in 1990 concluded that the reef is too young to contain oil reserves. Oil drilling remains prohibited on the Great Barrier Reef, yet oil spills due to shipping routes are still a threat to the reef system, with a total of 282 oil spills between 1987 and 2002.
Tropical cyclones
Tropical cyclones are a cause of ecological disturbance to the Great Barrier Reef. The types of damage caused by tropical cyclones to the Great Barrier Reef are varied, including fragmentation, sediment plumes, and decreasing salinity following heavy rains (as after Cyclone Joy). The patterns of reef damage are similarly "patchy". From 1910 to 1999, 170 cyclones' paths came near or through the Great Barrier Reef, and most cyclones pass through the Great Barrier Reef within a day. In general, compact corals such as Porites fare better than branching corals under cyclone conditions. The major damage caused by Tropical Cyclone Larry was to underlying reef structures, and breakage and displacement of corals, which is overall consistent with previous tropical cyclone events.
Severe tropical cyclones hit the Queensland coast every 200 to 300 years; however, during the period 1969–1999 most cyclones in the region were very weak, category one or two on the Australian Bureau of Meteorology scale. On 2 February 2011, Severe Tropical Cyclone Yasi, with wind speeds of up to 290 kilometres per hour, struck northern Queensland and caused severe damage to a stretch of hundreds of kilometres within the Great Barrier Reef; the corals could take a decade to recover fully. Tropical cyclones also destroy tourism infrastructure, which causes pollution on the island resorts.
See also
Environmental issues with coral reefs
Environmental issues in Australia
Resilience of coral reefs
References
Water vapor

Water vapor, water vapour or aqueous vapor is the gaseous phase of water. It is one state of water within the hydrosphere. Water vapor can be produced from the evaporation or boiling of liquid water or from the sublimation of ice. Water vapor is transparent, like most constituents of the atmosphere. Under typical atmospheric conditions, water vapor is continuously generated by evaporation and removed by condensation. It is less dense than most of the other constituents of air and triggers convection currents that can lead to clouds and fog.
Being a component of Earth's hydrosphere and hydrologic cycle, it is particularly abundant in Earth's atmosphere, where it acts as a greenhouse gas and warming feedback, contributing more to the total greenhouse effect than non-condensable gases such as carbon dioxide and methane. The use of water vapor, as steam, has been important for cooking, and as a major component in energy production and transport systems since the industrial revolution.
Water vapor is a relatively common atmospheric constituent, present even in the solar atmosphere as well as every planet in the Solar System and many astronomical objects including natural satellites, comets and even large asteroids. Likewise the detection of extrasolar water vapor would indicate a similar distribution in other planetary systems. Water vapor can also be indirect evidence supporting the presence of extraterrestrial liquid water in the case of some planetary mass objects.
Properties
Evaporation
Whenever a water molecule leaves a surface and diffuses into a surrounding gas, it is said to have evaporated. Each individual water molecule which transitions between a more associated (liquid) and a less associated (vapor/gas) state does so through the absorption or release of kinetic energy. The aggregate measurement of this kinetic energy transfer is defined as thermal energy and occurs only when there is a differential in the temperature of the water molecules. Liquid water that becomes water vapor takes a parcel of heat with it, in a process called evaporative cooling. The amount of water vapor in the air determines how frequently molecules will return to the surface. When a net evaporation occurs, the body of water will undergo a net cooling directly related to the loss of water.
In the US, the National Weather Service measures the actual rate of evaporation from a standardized "pan" open water surface outdoors, at various locations nationwide. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over 120 inches per year. Formulas can be used for calculating the rate of evaporation from a water surface such as a swimming pool. In some countries, the evaporation rate far exceeds the precipitation rate.
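Evaporation formulas of the kind mentioned above generally take a mass-transfer form: the rate is proportional to the difference between the saturation vapor pressure at the water temperature and the actual vapor pressure of the air, scaled by a wind-dependent coefficient. The sketch below is illustrative only; the coefficients `a` and `b` are hypothetical placeholders, not values from any published pan- or pool-evaporation standard, and the saturation pressure uses a common Magnus-form fit.

```python
from math import exp

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus-form approximation of saturation vapor pressure (hPa)."""
    return 6.112 * exp(17.62 * t_celsius / (243.12 + t_celsius))

def evaporation_rate(water_t, air_t, rel_humidity, wind_speed, a=0.1, b=0.05):
    """Illustrative mass-transfer estimate of evaporation (arbitrary units).

    a and b are hypothetical empirical coefficients; real formulas
    calibrate them against measured evaporation data.
    """
    e_water = saturation_vapor_pressure_hpa(water_t)               # at the surface
    e_air = rel_humidity * saturation_vapor_pressure_hpa(air_t)    # in the air above
    return (a + b * wind_speed) * max(e_water - e_air, 0.0)
```

When the air is saturated at the water temperature the pressure difference vanishes and the estimated net evaporation drops to zero, which is the "100% relative humidity" limit described below.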
Evaporative cooling is restricted by atmospheric conditions. Humidity is the amount of water vapor in the air. The vapor content of air is measured with devices known as hygrometers. The measurements are usually expressed as specific humidity or percent relative humidity. The temperatures of the atmosphere and the water surface determine the equilibrium vapor pressure; 100% relative humidity occurs when the partial pressure of water vapor is equal to the equilibrium vapor pressure. This condition is often referred to as complete saturation. Humidity ranges from 0 grams per cubic metre in dry air to 30 grams per cubic metre (0.03 ounce per cubic foot) when the vapor is saturated at 30 °C.
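The 30 grams per cubic metre figure can be checked with the ideal gas law applied to the equilibrium vapor pressure; the Magnus-form fit used here is a common empirical approximation, adopted as an assumption rather than the formula used by the article's sources.

```python
from math import exp

M_WATER = 0.018015   # kg/mol, molar mass of water
R = 8.314            # J/(mol K), universal gas constant

def saturation_vapor_pressure_pa(t_c):
    """Magnus-form approximation of equilibrium vapor pressure (Pa)."""
    return 611.2 * exp(17.62 * t_c / (243.12 + t_c))

def saturation_vapor_density_g_per_m3(t_c):
    """Mass of water vapor per cubic metre of saturated air, via the ideal gas law."""
    e_s = saturation_vapor_pressure_pa(t_c)
    t_k = t_c + 273.15
    return e_s * M_WATER / (R * t_k) * 1000.0  # convert kg/m^3 to g/m^3

print(round(saturation_vapor_density_g_per_m3(30.0), 1))  # ~30 g/m^3, matching the text
```

At 0 °C the same calculation gives roughly 5 g/m³, which is why cold air holds so little moisture.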
Sublimation
Sublimation is the process by which water molecules directly leave the surface of ice without first becoming liquid water. Sublimation accounts for the slow mid-winter disappearance of ice and snow at temperatures too low to cause melting. Antarctica shows this effect to a unique degree because it is by far the continent with the lowest rate of precipitation on Earth. As a result, there are large areas where millennial layers of snow have sublimed, leaving behind whatever non-volatile materials they had contained. This is extremely valuable to certain scientific disciplines, a dramatic example being the collection of meteorites that are left exposed in unparalleled numbers and excellent states of preservation.
Sublimation is important in the preparation of certain classes of biological specimens for scanning electron microscopy. Typically the specimens are prepared by cryofixation and freeze-fracture, after which the broken surface is freeze-etched, being eroded by exposure to vacuum till it shows the required level of detail. This technique can display protein molecules, organelle structures and lipid bilayers with very low degrees of distortion.
Condensation
Water vapor will only condense onto another surface when that surface is cooler than the dew point temperature, or when the water vapor equilibrium in air has been exceeded. When water vapor condenses onto a surface, a net warming occurs on that surface. The water molecule brings heat energy with it. In turn, the temperature of the atmosphere drops slightly. In the atmosphere, condensation produces clouds, fog and precipitation (usually only when facilitated by cloud condensation nuclei). The dew point of an air parcel is the temperature to which it must cool before water vapor in the air begins to condense. Condensation in the atmosphere forms cloud droplets.
Also, a net condensation of water vapor occurs on surfaces when the temperature of the surface is at or below the dew point temperature of the atmosphere. Deposition is a phase transition separate from condensation which leads to the direct formation of ice from water vapor. Frost and snow are examples of deposition.
There are several mechanisms of cooling by which condensation occurs:
1) Direct loss of heat by conduction or radiation.
2) Cooling from the drop in air pressure which occurs with uplift of air, also known as adiabatic cooling.
Air can be lifted by mountains, which deflect the air upward, by convection, and by cold and warm fronts.
3) Advective cooling - cooling due to horizontal movement of air.
Importance and uses
Provides water for plants and animals: Water vapour gets converted to rain and snow that serve as a natural source of water for plants and animals.
Controls evaporation: Excess water vapor in the air decreases the rate of evaporation.
Determines climatic conditions: Excess water vapor in the air produces rain, fog, snow etc. Hence, it determines climatic conditions.
Chemical reactions
A number of chemical reactions have water as a product. If the reactions take place at temperatures higher than the dew point of the surrounding air the water will be formed as vapor and increase the local humidity, if below the dew point local condensation will occur. Typical reactions that result in water formation are the burning of hydrogen or hydrocarbons in air or other oxygen containing gas mixtures, or as a result of reactions with oxidizers.
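Because these reactions produce water in fixed stoichiometric proportion, the amount of vapor released can be computed directly. A minimal sketch for methane combustion (CH4 + 2 O2 → CO2 + 2 H2O):

```python
M_CH4 = 16.04   # g/mol, molar mass of methane
M_H2O = 18.02   # g/mol, molar mass of water

def water_from_methane(grams_ch4):
    """Grams of water produced by completely burning a given mass of methane."""
    moles_ch4 = grams_ch4 / M_CH4
    return 2 * moles_ch4 * M_H2O  # two moles of water per mole of methane

print(round(water_from_methane(16.04), 2))  # 36.04 g of water per 16.04 g of methane
```

Burning a fuel thus releases more than twice its own mass in water vapor, which is one reason unvented gas heaters raise indoor humidity.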
In a similar fashion other chemical or physical reactions can take place in the presence of water vapor resulting in new chemicals forming such as rust on iron or steel, polymerization occurring (certain polyurethane foams and cyanoacrylate glues cure with exposure to atmospheric humidity) or forms changing such as where anhydrous chemicals may absorb enough vapor to form a crystalline structure or alter an existing one, sometimes resulting in characteristic color changes that can be used for measurement.
Measurement
Measuring the quantity of water vapor in a medium can be done directly or remotely with varying degrees of accuracy. Remote methods such as electromagnetic absorption are possible from satellites above planetary atmospheres. Direct methods may use electronic transducers, moistened thermometers or hygroscopic materials measuring changes in physical properties or dimensions.
Impact on air density
Water vapor is lighter, or less dense, than dry air; at equivalent temperatures it is buoyant with respect to dry air. The density of dry air at standard temperature and pressure (273.15 K, 101.325 kPa) is 1.27 g/L, while water vapor at standard temperature has a vapor pressure of 0.6 kPa and a much lower density of 0.0048 g/L.
Calculations
Water vapor and dry air density calculations at 0 °C:
The molar mass of water is 18.02 g/mol, as calculated from the sum of the atomic masses of its constituent atoms.
The average molar mass of air (approx. 78% nitrogen, N2; 21% oxygen, O2; 1% other gases) is 28.57 g/mol at standard temperature and pressure (STP).
Obeying Avogadro's Law and the ideal gas law, moist air will have a lower density than dry air. At maximum saturation (i.e. relative humidity = 100% at 0 °C), the average molar mass goes down to 28.51 g/mol.
STP conditions imply a temperature of 0 °C, at which the ability of water to become vapor is very restricted. Its concentration in air is very low at 0 °C. The red line on the chart to the right is the maximum concentration of water vapor expected for a given temperature. The water vapor concentration increases significantly as the temperature rises, approaching 100% (steam, pure water vapor) at 100 °C. However the difference in densities between air and water vapor would still exist (0.598 vs. 1.27 g/L).
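The 28.51 g/mol figure can be reproduced with a short mole-fraction calculation, using the 28.57 g/mol dry-air value quoted above and a Magnus-form fit (an assumption here) for the saturation pressure:

```python
from math import exp

M_DRY = 28.57    # g/mol, average molar mass of dry air (value used in the text)
M_H2O = 18.02    # g/mol, molar mass of water
P = 101325.0     # Pa, standard pressure

def magnus_svp_pa(t_c):
    """Magnus-form approximation of saturation vapor pressure (Pa)."""
    return 611.2 * exp(17.62 * t_c / (243.12 + t_c))

def moist_air_molar_mass(t_c, rel_humidity):
    """Average molar mass of moist air at pressure P, weighted by mole fractions."""
    x_vapor = rel_humidity * magnus_svp_pa(t_c) / P  # mole fraction of water vapor
    return (1.0 - x_vapor) * M_DRY + x_vapor * M_H2O

print(round(moist_air_molar_mass(0.0, 1.0), 2))  # ~28.51 g/mol, as stated above
```

Each light water molecule displaces a heavier N2 or O2 molecule, so the mixture's molar mass, and hence its density at fixed pressure and temperature, falls as humidity rises.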
At equal temperatures
At the same temperature, a column of dry air will be denser or heavier than a column of air containing any water vapor, the molar mass of diatomic nitrogen and diatomic oxygen both being greater than the molar mass of water. Thus, any volume of dry air will sink if placed in a larger volume of moist air. Also, a volume of moist air will rise or be buoyant if placed in a larger region of dry air. As the temperature rises the proportion of water vapor in the air increases, and its buoyancy will increase. The increase in buoyancy can have a significant atmospheric impact, giving rise to powerful, moisture rich, upward air currents when the air temperature and sea temperature reaches 25 °C or above. This phenomenon provides a significant driving force for cyclonic and anticyclonic weather systems (typhoons and hurricanes).
Respiration and breathing
Water vapor is a by-product of respiration in plants and animals. Its partial pressure contribution to air pressure increases as its concentration increases, lowering the partial pressure contribution of the other atmospheric gases (Dalton's Law), since the total air pressure must remain constant. The presence of water vapor in the air naturally dilutes or displaces the other air components as its concentration increases.
This can have an effect on respiration. In very warm air (35 °C) the proportion of water vapor is large enough to give rise to the stuffiness that can be experienced in humid jungle conditions or in poorly ventilated buildings.
Lifting gas
Water vapor has a lower density than air and is therefore buoyant in it, but at ambient temperatures its vapor pressure is lower than the surrounding air pressure. When water vapor is used as a lifting gas by a thermal airship, it is heated to form steam so that its vapor pressure is greater than the surrounding air pressure, in order to maintain the shape of a theoretical "steam balloon", which yields approximately 60% of the lift of helium and twice that of hot air.
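The quoted lift figures (roughly 60% of helium, about twice hot air) can be sanity-checked with the ideal gas law. The temperatures below (20 °C ambient, 100 °C for both steam and hot air) are illustrative assumptions, not values from the source.

```python
R = 8.314     # J/(mol K), universal gas constant
P = 101325.0  # Pa, ambient pressure

def density(molar_mass_kg, t_kelvin):
    """Ideal-gas density in kg/m^3 at ambient pressure."""
    return P * molar_mass_kg / (R * t_kelvin)

rho_air = density(0.02896, 293.15)      # ambient air at 20 C
rho_steam = density(0.01802, 373.15)    # pure steam at 100 C
rho_hot_air = density(0.02896, 373.15)  # hot air at 100 C
rho_helium = density(0.004003, 293.15)  # helium at ambient temperature

lift_steam = rho_air - rho_steam        # net lift per cubic metre, kg/m^3
lift_hot_air = rho_air - rho_hot_air
lift_helium = rho_air - rho_helium

print(round(lift_steam / lift_helium, 2))   # ~0.59, i.e. roughly 60% of helium
print(round(lift_steam / lift_hot_air, 2))  # ~2.4, i.e. roughly twice hot air
```

Steam gains lift from both its low molar mass and its high temperature, which is why it outperforms hot air at the same temperature.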
General discussion
The amount of water vapor in an atmosphere is constrained by the restrictions of partial pressures and temperature. Dew point temperature and relative humidity act as guidelines for the process of water vapor in the water cycle. Energy input, such as sunlight, can trigger more evaporation on an ocean surface or more sublimation on a chunk of ice on top of a mountain. The balance between condensation and evaporation gives the quantity called vapor partial pressure.
The maximum partial pressure (saturation pressure) of water vapor in air varies with temperature of the air and water vapor mixture. A variety of empirical formulas exist for this quantity; the most used reference formula is the Goff-Gratch equation for the SVP over liquid water below zero degrees Celsius:
{\displaystyle {\begin{aligned}\log _{10}\left(p\right)=&-7.90298\left({\frac {373.16}{T}}-1\right)+5.02808\log _{10}{\frac {373.16}{T}}\\&-1.3816\times 10^{-7}\left(10^{11.344\left(1-{\frac {T}{373.16}}\right)}-1\right)\\&+8.1328\times 10^{-3}\left(10^{-3.49149\left({\frac {373.16}{T}}-1\right)}-1\right)\\&+\log _{10}\left(1013.246\right)\end{aligned}}}
where T, temperature of the moist air, is given in units of kelvin, and p is given in units of millibars (hectopascals).
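The formula transcribes directly into code (a sketch: temperature in kelvin, result in hectopascals):

```python
from math import log10

def goff_gratch_svp_hpa(t_kelvin):
    """Goff-Gratch saturation vapor pressure over liquid water, in hPa."""
    ts = 373.16  # steam-point temperature, K
    log_p = (-7.90298 * (ts / t_kelvin - 1)
             + 5.02808 * log10(ts / t_kelvin)
             - 1.3816e-7 * (10 ** (11.344 * (1 - t_kelvin / ts)) - 1)
             + 8.1328e-3 * (10 ** (-3.49149 * (ts / t_kelvin - 1)) - 1)
             + log10(1013.246))
    return 10 ** log_p

print(round(goff_gratch_svp_hpa(373.16), 3))  # 1013.246 hPa at the steam point
print(round(goff_gratch_svp_hpa(273.16), 2))  # ~6.11 hPa at the triple point
```

At the steam point every bracketed term vanishes and the formula returns one standard atmosphere, a convenient self-check.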
The formula is valid from about −50 to 102 °C; however, there are a very limited number of measurements of the vapor pressure of water over supercooled liquid water. There are a number of other formulae which can be used.

Under certain conditions, such as when the boiling temperature of water is reached, a net evaporation will always occur during standard atmospheric conditions regardless of the percent of relative humidity. This immediate process will dispel massive amounts of water vapor into a cooler atmosphere.
Exhaled air is almost fully at equilibrium with water vapor at the body temperature. In the cold air the exhaled vapor quickly condenses, thus showing up as a fog or mist of water droplets and as condensation or frost on surfaces. Forcibly condensing these water droplets from exhaled breath is the basis of exhaled breath condensate, an evolving medical diagnostic test.
Controlling water vapor in air is a key concern in the heating, ventilating, and air-conditioning (HVAC) industry. Thermal comfort depends on the moist air conditions. Non-human comfort situations are called refrigeration, and also are affected by water vapor. For example, many food stores, like supermarkets, utilize open chiller cabinets, or food cases, which can significantly lower the water vapor pressure (lowering humidity). This practice delivers several benefits as well as problems.
In Earth's atmosphere
Gaseous water represents a small but environmentally significant constituent of the atmosphere. The percentage of water vapor in surface air varies from 0.01% at −42 °C (−44 °F) to 4.24% when the dew point is 30 °C (86 °F). Over 99% of atmospheric water is in the form of vapor, rather than liquid water or ice, and approximately 99.13% of that vapor is contained in the troposphere. The condensation of water vapor to the liquid or ice phase is responsible for clouds, rain, snow, and other precipitation, all of which count among the most significant elements of what we experience as weather. Less obviously, the latent heat of vaporization, which is released to the atmosphere whenever condensation occurs, is one of the most important terms in the atmospheric energy budget on both local and global scales. For example, latent heat release in atmospheric convection is directly responsible for powering destructive storms such as tropical cyclones and severe thunderstorms. Water vapor is an important greenhouse gas owing to the presence of the hydroxyl bond, which strongly absorbs in the infrared.
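The surface fractions quoted above follow from the ratio of saturation vapor pressure to total air pressure. As a rough illustration (using the Tetens approximation rather than the reference Goff-Gratch formula, and assuming a nominal 1000 hPa surface pressure, neither of which is specified in the text), the sketch below reproduces the ~4.2% figure for a 30 °C dew point:

```python
def tetens_svp(t_celsius):
    """Approximate saturation vapor pressure over water, in hPa (Tetens formula)."""
    return 6.1078 * 10 ** (7.5 * t_celsius / (t_celsius + 237.3))

surface_pressure = 1000.0  # hPa, a nominal value assumed for illustration
for dew_point in (-42, 0, 30):
    fraction = tetens_svp(dew_point) / surface_pressure
    print(f"dew point {dew_point:+3d} C -> about {100 * fraction:.2f}% water vapor by volume")
```

At a 30 °C dew point this gives about 4.2% by volume, consistent with the figure in the article; the Tetens formula is least accurate at the cold end of the range.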
Water vapor is the "working medium" of the atmospheric thermodynamic engine which transforms heat energy from solar irradiation into mechanical energy in the form of winds. Transforming thermal energy into mechanical energy requires an upper and a lower temperature level, as well as a working medium which shuttles back and forth between the two. The upper temperature level is given by the soil or water surface of the earth, which absorbs the incoming solar radiation and warms up, evaporating water. The moist and warm air at the ground is lighter than its surroundings and rises up to the upper limit of the troposphere. There the water molecules radiate their thermal energy into outer space, cooling down the surrounding air. The upper atmosphere constitutes the lower temperature level of the atmospheric thermodynamic engine. The water vapor in the now cold air condenses out and falls down to the ground in the form of rain or snow. The now heavier cold and dry air sinks to the ground as well; the atmospheric thermodynamic engine thus establishes a vertical convection, which transports heat from the ground into the upper atmosphere, where the water molecules can radiate it to outer space. Due to the earth's rotation and the resulting Coriolis forces, this vertical atmospheric convection is also converted into a horizontal convection, in the form of cyclones and anticyclones, which transport the water evaporated over the oceans into the interior of the continents, enabling vegetation to grow.

Water in Earth's atmosphere is not merely below its boiling point (100 °C); at altitude it also drops below its freezing point (0 °C), due to water's highly polar attraction. When combined with its quantity, water vapor then has a relevant dew point and frost point, unlike, e.g., carbon dioxide and methane. Water vapor thus has a scale height a fraction of that of the bulk atmosphere, as the water condenses and exits the atmosphere, primarily in the troposphere, its lowest layer.
Carbon dioxide (CO2) and methane, being well-mixed in the atmosphere, tend to rise above water vapor. The absorption and emission of both compounds contribute to Earth's emission to space, and thus the planetary greenhouse effect. This greenhouse forcing is directly observable, via distinct spectral features versus water vapor, and has been observed to rise with rising CO2 levels. Conversely, adding water vapor at high altitudes has a disproportionate impact, which is why jet traffic has a disproportionately high warming effect. Oxidation of methane is also a major source of water vapor in the stratosphere, and adds about 15% to methane's global warming effect.

In the absence of other greenhouse gases, Earth's water vapor would condense to the surface; this has likely happened, possibly more than once. Scientists thus distinguish between non-condensable (driving) and condensable (driven) greenhouse gases, i.e., the water vapor feedback described above.

Fog and clouds form through condensation around cloud condensation nuclei. In the absence of nuclei, condensation will only occur at much lower temperatures. Under persistent condensation or deposition, cloud droplets or snowflakes form, which precipitate when they reach a critical mass.
Atmospheric concentration of water vapor is highly variable between locations and times, from 10 ppmv in the coldest air to 5% (50,000 ppmv) in humid tropical air, and can be measured with a combination of land observations, weather balloons and satellites. The water content of the atmosphere as a whole is constantly depleted by precipitation. At the same time it is constantly replenished by evaporation, most prominently from oceans, lakes, rivers, and moist earth. Other sources of atmospheric water include combustion, respiration, volcanic eruptions, the transpiration of plants, and various other biological and geological processes. At any given time there is about 1.29 × 10^16 litres (3.4 × 10^15 gal) of water in the atmosphere. The atmosphere holds 1 part in 2,500 of the fresh water, and 1 part in 100,000 of the total water on Earth. The mean global content of water vapor in the atmosphere is roughly sufficient to cover the surface of the planet with a layer of liquid water about 25 mm deep. The mean annual precipitation for the planet is about 1 metre, a comparison which implies a rapid turnover of water in the air – on average, the residence time of a water molecule in the troposphere is about 9 to 10 days.
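The closing comparison can be checked with simple arithmetic: dividing the standing stock (a ~25 mm equivalent liquid layer) by the annual precipitation flux (~1 m per year) gives the mean residence time quoted above.

```python
standing_stock_mm = 25.0   # mean precipitable water, mm of liquid equivalent
annual_precip_mm = 1000.0  # mean annual precipitation, mm per year

# residence time = storage / flux, converted from years to days
residence_time_days = standing_stock_mm / annual_precip_mm * 365.25
print(f"mean residence time of a water molecule: about {residence_time_days:.1f} days")
```

The result, about 9 days, matches the "9 to 10 days" figure in the text.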
Global mean water vapor is about 0.25% of the atmosphere by mass and also varies seasonally, in terms of contribution to atmospheric pressure, between 2.62 hPa in July and 2.33 hPa in December. IPCC AR6 expresses medium confidence in an increase of total water vapor at about 1–2% per decade; it is expected to increase by around 7% per °C of warming.

Episodes of surface geothermal activity, such as volcanic eruptions and geysers, release variable amounts of water vapor into the atmosphere. Such eruptions may be large in human terms, and major explosive eruptions may inject exceptionally large masses of water exceptionally high into the atmosphere, but as a percentage of total atmospheric water, the role of such processes is trivial. The relative concentrations of the various gases emitted by volcanoes vary considerably according to the site and according to the particular event at any one site. However, water vapor is consistently the commonest volcanic gas; as a rule, it comprises more than 60% of total emissions during a subaerial eruption.

Atmospheric water vapor content is expressed using various measures. These include vapor pressure, specific humidity, mixing ratio, dew point temperature, and relative humidity.
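The "around 7% per °C" figure is the Clausius–Clapeyron scaling of saturation vapor pressure. A back-of-the-envelope check (assuming a latent heat of ~2.5 × 10^6 J/kg, a water-vapor gas constant of 461.5 J/(kg·K), and a representative surface temperature of 288 K, none of which appear in the text above) lands in the same range:

```python
L = 2.5e6   # latent heat of vaporization, J/kg (assumed)
Rv = 461.5  # specific gas constant of water vapor, J/(kg*K) (assumed)
T = 288.0   # representative surface temperature, K (assumed)

# Clausius-Clapeyron: d(ln e_s)/dT = L / (Rv * T^2)
fractional_change_per_K = L / (Rv * T * T)
print(f"saturation vapor pressure rises about {100 * fractional_change_per_K:.1f}% per K")
```

This gives roughly 6.5% per kelvin at 288 K, consistent with the commonly quoted "around 7% per °C" (the exact value depends on the temperature chosen).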
Radar and satellite imaging
Because water molecules absorb microwaves and other radio wave frequencies, water in the atmosphere attenuates radar signals. In addition, atmospheric water will reflect and refract signals to an extent that depends on whether it is vapor, liquid or solid.
Generally, radar signals lose strength progressively the farther they travel through the troposphere. Different frequencies attenuate at different rates, such that some components of air are opaque to some frequencies and transparent to others. Radio waves used for broadcasting and other communication experience the same effect.
Water vapor reflects radar to a lesser extent than do water's other two phases. In the form of drops and ice crystals, water acts as a prism, which it does not do as an individual molecule; however, the existence of water vapor in the atmosphere causes the atmosphere to act as a giant prism.

A comparison of GOES-12 satellite images shows the distribution of atmospheric water vapor relative to the oceans, clouds and continents of the Earth. Vapor surrounds the planet but is unevenly distributed. The image loop on the right shows the monthly average of water vapor content, with units given in centimeters of precipitable water, the equivalent amount of liquid water that would be produced if all the water vapor in the column were to condense. The lowest amounts of water vapor (0 centimeters) appear in yellow, and the highest amounts (6 centimeters) appear in dark blue. Areas of missing data appear in shades of gray. The maps are based on data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on NASA's Aqua satellite. The most noticeable pattern in the time series is the influence of seasonal temperature changes and incoming sunlight on water vapor. In the tropics, a band of extremely humid air wobbles north and south of the equator as the seasons change. This band of humidity is part of the Intertropical Convergence Zone, where the easterly trade winds from each hemisphere converge and produce near-daily thunderstorms and clouds. Farther from the equator, water vapor concentrations are high in the hemisphere experiencing summer and low in the one experiencing winter. Another pattern that shows up in the time series is that water vapor amounts over land areas decrease more in winter months than adjacent ocean areas do. This is largely because air temperatures over land drop more in the winter than temperatures over the ocean.
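Precipitable water quoted in centimeters converts directly to column mass: 1 cm of liquid water spread over 1 m² weighs 10 kg. A minimal sketch of the conversion (an illustration, not part of the MODIS product itself):

```python
WATER_DENSITY = 1000.0  # kg/m^3, density of liquid water

def column_mass_kg_per_m2(precipitable_water_cm):
    """Mass of water in an atmospheric column, per square meter of surface."""
    depth_m = precipitable_water_cm / 100.0
    return depth_m * WATER_DENSITY

# The moistest columns in the maps (~6 cm) hold about 60 kg of water per m^2:
print(column_mass_kg_per_m2(6))
```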
Water vapor condenses more rapidly in colder air.

As water vapor absorbs light in the visible spectral range, its absorption can be used in spectroscopic applications (such as DOAS) to determine the amount of water vapor in the atmosphere. This is done operationally, e.g., by the Global Ozone Monitoring Experiment (GOME) spectrometers on ERS (GOME) and MetOp (GOME-2). The weaker water vapor absorption lines in the blue spectral range and further into the UV, up to its dissociation limit around 243 nm, are mostly based on quantum mechanical calculations and have only partly been confirmed by experiments.
Lightning generation
Water vapor plays a key role in lightning production in the atmosphere. In cloud physics, clouds are the main generators of static charge in Earth's atmosphere. The ability of clouds to hold massive amounts of electrical energy is directly related to the amount of water vapor present in the local system.
The amount of water vapor directly controls the permittivity of the air. During times of low humidity, static discharge is quick and easy. During times of higher humidity, fewer static discharges occur. Permittivity and capacitance work hand in hand to produce the megawatt outputs of lightning.

After a cloud, for instance, has started its way to becoming a lightning generator, atmospheric water vapor acts as an insulator that decreases the ability of the cloud to discharge its electrical energy. Over a certain amount of time, if the cloud continues to generate and store more static electricity, the barrier created by the atmospheric water vapor will ultimately break down under the stored electrical potential energy. This energy will be released to a local, oppositely charged region in the form of lightning. The strength of each discharge is directly related to the atmospheric permittivity, capacitance, and the source's charge-generating ability.
Extraterrestrial
Water vapor is common in the Solar System and, by extension, other planetary systems. Its signature has been detected in the atmosphere of the Sun, occurring in sunspots. The presence of water vapor has been detected in the atmospheres of the seven other planets of the Solar System, the Earth's Moon, and the moons of other planets, although typically in only trace amounts.
Geological formations such as cryogeysers are thought to exist on the surface of several icy moons, ejecting water vapor due to tidal heating, and may indicate the presence of substantial quantities of subsurface water. Plumes of water vapor have been detected on Jupiter's moon Europa and are similar to plumes of water vapor detected on Saturn's moon Enceladus. Traces of water vapor have also been detected in the stratosphere of Titan. Water vapor has been found to be a major constituent of the atmosphere of the dwarf planet Ceres, the largest object in the asteroid belt. The detection was made using the far-infrared capabilities of the Herschel Space Observatory. The finding was unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes." According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." Scientists studying Mars hypothesize that if water moves about the planet, it does so as vapor.

The brilliance of comet tails comes largely from water vapor. On approach to the Sun, the ice many comets carry sublimes to vapor. Knowing a comet's distance from the sun, astronomers may deduce the comet's water content from its brilliance.

Water vapor has also been confirmed outside the Solar System. Spectroscopic analysis of HD 209458 b, an extrasolar planet in the constellation Pegasus, provided the first evidence of atmospheric water vapor beyond the Solar System. The aging star CW Leonis was found to have a ring of vast quantities of water vapor circling it. A NASA satellite designed to study chemicals in interstellar gas clouds made the discovery with an onboard spectrometer. Most likely, "the water vapor was vaporized from the surfaces of orbiting comets." Other exoplanets with evidence of water vapor include HAT-P-11b and K2-18b.
See also
References
Bibliography
External links
National Science Digital Library – Water Vapor
Calculate the condensation of your exhaled breath
Water Vapor Myths: A Brief Tutorial
AGU Water Vapor in the Climate System – 1995
Free Windows Program, Water Vapor Pressure Units Conversion Calculator – PhyMetrix
Aerobic denitrification

Aerobic denitrification, or co-respiration, is the simultaneous use of both oxygen (O2) and nitrate (NO3−) as oxidizing agents, performed by various genera of microorganisms. This process differs from anaerobic denitrification not only in its insensitivity to the presence of oxygen, but also in that it has a higher potential to create the harmful byproduct nitrous oxide.

Nitrogen, acting as an oxidant, is reduced in a succession of four reactions performed by the enzymes nitrate, nitrite, nitric-oxide, and nitrous-oxide reductases. The pathway ultimately yields reduced molecular nitrogen (N2) as well as, when the reaction does not reach completion, the intermediate species nitrous oxide (N2O). A simple denitrification reaction proceeds as
NO3− → NO2− → NO → N2O → N2 (g)

The respiration reaction which utilizes oxygen as the oxidant is:
C6H12O6 (aq) + 6O2 (g) → 6CO2 (g) + 6H2O

Classically, it was thought that denitrification would not occur in the presence of oxygen, since there seems to be no energetic advantage to using nitrate as an oxidant when oxygen is available. Experiments have since proven that denitrifiers are often facultative anaerobes and that aerobic denitrification does indeed occur in a broad range of microbial organisms with varying levels of productivity, usually lower than results from purely aerobic respiration. The advantages of being able to perform denitrification in the presence of oxygen are uncertain, though it is possible that the ability to adapt to changes in oxygen levels plays a role. Aerobic denitrification may be found in environments where fluctuating oxygen concentrations and reduced carbon are available. This relatively harsh environment favors denitrifiers able to degrade toxic nitrate or nitrite under an aerobic atmosphere. Aerobic denitrifiers tend to work efficiently at 25–37 °C and pH 7–8, when the dissolved oxygen concentration is 3–5 mg/L and the C/N load ratio is 5–10.
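The respiration equation above can be sanity-checked by counting atoms on each side. The sketch below is an illustrative check (not part of the original text) that the aerobic respiration of glucose balances:

```python
from collections import Counter

def count_atoms(formula_units):
    """Sum atom counts over (coefficient, {element: count}) pairs."""
    total = Counter()
    for coeff, atoms in formula_units:
        for element, n in atoms.items():
            total[element] += coeff * n
    return total

# C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
reactants = [(1, {"C": 6, "H": 12, "O": 6}), (6, {"O": 2})]
products = [(6, {"C": 1, "O": 2}), (6, {"H": 2, "O": 1})]

# Both sides carry 6 C, 12 H, and 18 O, so the equation balances:
print(count_atoms(reactants) == count_atoms(products))  # True
```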
Environmental impact
Wastewater treatment
Water treatment often relies on the activity of anaerobically denitrifying bacteria to remove nitrate from water. However, in the absence of oxygen, nitrate is not always fully reduced to dinitrogen, so nitrate remains in the water or is converted to nitrous oxide. Lingering nitrate in drinking water poses a number of health risks, and both nitrate and nitrous oxide have major environmental impacts. Hazards include carcinogenic nitrate ions in drinking water and eutrophication caused by oxidized nitrogen seeding algal blooms. Conversely, aerobic denitrification can further reduce oxidized nitrogen in a less specialized environment. For instance, many aerobically denitrifying bacteria of the genus Pseudomonas (P. stutzeri, P. mendocina and P. putida) have been isolated from the Lingshui River in China and could be used in bioaugmentation to clear contaminated water. In addition to Pseudomonas, Acinetobacter sp. SYF26 was isolated from the Heihe reservoir in China. Genomic analysis revealed a napA gene encoding a periplasmic nitrate reductase, and nirK and nirS genes for nitrite reductase (both enzymes needed in aerobic nitrate denitrification).
Global warming
Nitrous oxide is a 200–300 times more potent greenhouse gas than carbon dioxide, accounting for 5% of the global greenhouse gas effect. During the reduction of nitrate in wastewater treatment, nitrous oxide is only released in the absence of appropriate oxygen regulation. One solution to combat the release of nitrous oxide from wastewater treatment is to use aerobically denitrifying organisms with the capacity to fully reduce nitrogen. For instance, P. denitrificans has been shown to efficiently reduce nitrate to N2 in cultured media and wastewater. Furthermore, the TR2 strain of P. stutzeri and Pseudomonas sp. strain K50 were also shown to have substantially low levels of nitrous oxide production in water treatment. Thus, enriching activated sludge for aerobic denitrifying bacteria can be effective in combating the global warming effects of nitrous oxide in wastewater treatment.
See also
Denitrification
Cellular respiration
Nitrogen cycle
Nitrogen fixation
References
Uunartoq Qeqertaq

Uunartoq Qeqertaq (Greenlandic; Warming Island in English) is an island off the east central coast of Greenland, 550 kilometres (342 miles) north of the Arctic Circle. It became recognised as an island only in September 2005, by US explorer Dennis Schmitt. It was still attached to the mainland of Liverpool Land by glacial ice in 2002, when the ice shelves began retreating rapidly in this area, so that by 2005 it was no longer attached to the mainland. Members of the scientific community believe this newly recognised island is a direct result of global warming.
Controversy
Patrick Michaels, a climatologist and prominent global warming denier, created a controversy over the history of Warming Island in a post on his website, World Climate Report, in which he argued that the island had been previously uncovered in the 1950s, toward the end of a brief warm period in Greenland.

Despite a general lack of suitably detailed maps, Michaels found a map published by Ernst Hofer, a photographer who did aerial surveys of the area in the early 1950s, which showed the Warming Island landmass unconnected to Greenland. Michaels concluded that Warming Island was therefore a separate island when observed by Hofer in the 1950s, and more broadly that Warming Island is an example of unjustified concern about the future outcomes of global warming.

Dennis Schmitt countered Michaels' theory in an article by New York Times reporter Andy Revkin, contending that Hofer's map is inaccurate. Citing discrepancies such as the absence on Hofer's map of nearby Reynolds Island, he suggested that the discrepant features are consistent with an aerial view of the area when covered with fog, which has often obscured low-lying areas like Reynolds Island and the ice bridge connecting Warming Island to the Greenlandic mainland. He further observed, "I see by the markings of the 1957 document that it is to be construed as indeed only schematic, that it is explicitly incomplete." Michaels explained that Hofer included the map in his book "so as to place his pictures and stories in context."

No photographic evidence is available that would resolve the issue.
The island was also part of a 2011 controversy when it was included in the Times Atlas of the World, along with a revised depiction of the Greenland ice sheet that showed a 15% reduction. After being alerted through the media, the U.S. National Snow and Ice Data Center reported that the atlas editors must have used a 2001 map showing only the thickest segment of the ice sheet.
See also
Effects of global warming
Physical impacts of climate change
References
External links
The Independent: An island made by global warming, 24 April 2007
USGS Landsat Project: Warming Island, comparison of satellite pictures between 1985 and 2005 |
2007 Brooklyn tornado

The 2007 Brooklyn tornado was the strongest tornado on record to strike New York City. It formed in the early morning hours of August 8, 2007, skipping along an approximately 9-mile (14 km) path from Staten Island across The Narrows to Brooklyn. The worst damage was in and around Sunset Park and Bay Ridge, in Brooklyn. The U.S. National Weather Service estimated its strength there as an EF2 on the Enhanced Fujita scale.

No serious injuries or fatalities were reported as a result of the tornado, but several people were treated at area hospitals for flying glass injuries. At least 40 buildings and 100 cars were damaged. New York State Senator Martin Golden's office estimated damage in the tens of millions of U.S. dollars.

The storm system produced severe street flooding and disrupted all modes of transportation throughout the city. Service was delayed or suspended on all 24 New York City Subway services during the morning rush hour, and nine services were still not running by the evening rush.
Timeline
The typical summer storm system that spawned the tornado gathered strength over Pennsylvania, caused heavy rain over New Jersey, and continued its eastward movement, reaching New York City at sunrise.

According to the National Weather Service, the first tornado touched down on Staten Island at approximately 6:22 am EDT (1022 UTC) in the vicinity of St. Austins Place in the Livingston – Randall Manor area, before moving east, with additional damage occurring in the Tompkinsville area, probably from a subsequent tornado that formed from a new area of circulation just north of the first. Most of the damage on Staten Island was to trees and was rated EF1 intensity, with estimated winds of 86–100 miles per hour (138–161 km/h).
The circulation intensified, and headed east across The Narrows tidal strait, just north of the Verrazano-Narrows Bridge, and the tornado re-developed and touched down again in Brooklyn, at Bay Ridge at 6:32 am EDT. It continued on an east-northeast path across 68th Street between Third and Fourth Avenues, damaging the roofs of 11 homes. The storm continued to move east-northeast into Leif Ericson Park Square, where severe damage to trees occurred, and where winds blew out a 15-foot-tall (4.6 m) stained glass window valued at $300,000 at the nearby Fourth Avenue Presbyterian Church. It then crossed the Brooklyn Queens Expressway. The tornado touched down farther northeast with scattered tree damage along Sixth Avenue. Based on the assessed damage this stage of the tornado was classified EF2 with wind speeds of 111 to 135 mph (179 to 217 km/h).
The tornado returned to the ground with another pocket of significant damage on 58th Street between Sixth and Seventh Avenues. Roofs were ripped off five homes, with tree damage indicating strong EF1 intensity. The tornado then headed east and touched down again in Kensington and the Flatbush neighborhood of Prospect Park South at approximately 6:40 am EDT. Approximately 30 trees were uprooted along Ocean Parkway.

The National Weather Service had issued a tornado warning for portions of Staten Island and Brooklyn at 6:28 am. Tornado warnings were also briefly issued for eastern Brooklyn, southern Queens, and Nassau County on Long Island, but no tornadoes were reported in those areas.
Media coverage
New York media coverage of the event focused on the novelty of the event and its disruption of subway service (this was the third time in 2007 when heavy rain had caused disruptions in subway service).
Tabloids
The New York Post and New York Daily News tabloids both ran the front-page headline “Twister!”
The main article in the Daily News was headlined “Brooklyn becomes Tornado Alley!” with a subheading of “First twister to rip through boro since 1889; S.I. driver dies”.
The main article inside the Post read: “Brooklyn Cyclone” (playing on the pun of the famous Coney Island Cyclone in Brooklyn and the baseball team of the same name). The teaser on the front page depicted Dorothy Gale from The Wonderful Wizard of Oz proclaiming “This ain't Kansas”. An inside side bar in the Post had eyewitness accounts headlined “Wet & Wild”.
The Long Island based Newsday front-page headline was "What’s Up With the Weather? – LI Drenched Again – Tornado in City – Subways Swamped."
Broadsheets
The New York Times front page main headline was “Subways to Rooftops, a Storm Brings Havoc to New York” The three front-page stories were headlined “Transportation Paralysis”, "Déjà vu Down Under", "Yes a Tornado in Brooklyn."
The New York Times quoted an eyewitness, who said, "It was a funnel shape...It looked kind of black and blue...it was way up high and came right down on the roof of (a department store)...Pieces of the roof were all over the place. It was a big bang."

The New York Sun read: "It's Frustrating, It's Insanity" with a subhead "Anger Erupts At Subway; Tornado Hits".
Global warming
The press coverage also examined the link between the storm and global warming, given the tornado's historic nature, and the other recent subway service interruptions caused by torrential rain on July 18 and the previous winter. Official statements alluded to this as well. "We may be dealing with meteorological conditions that are unprecedented," said Metropolitan Transportation Authority Executive Director Elliot G. Sander in the immediate aftermath, and New York Governor Eliot Spitzer said the day after, "This is supposed to be a rainfall event that is a once-in-a-decade occurrence – we've had three in the past seven months." Climate scientist James E. Hansen of the NASA Goddard Institute for Space Studies at Columbia University in New York City cautioned against linking any single event with global warming, but did say that the probability of severe weather events is related. "You cannot blame a single specific event, such as this week's storm, on climate change. However, it is fair to ask whether the human changes have altered the likelihood of such events. There the answer seems to be 'yes.'", he was quoted as saying.
See also
Other tornadoes
There were five previous tornadoes in New York City on record, but none as strong as this one. The New York City borough of Staten Island has had the most tornadoes on record of any of the five boroughs, with a total of three, all since 1990. Meteorologists believed this storm produced the first tornado in Brooklyn since 1889, before reliable records were kept. The five previous twisters on record are:
October 27, 2003 — An F0 tornado touches down briefly in Staten Island
October 28, 1995 — An F1 tornado touches down in Staten Island with light damage
August 10, 1990 — An F0 tornado on Staten Island injures three people
October 5, 1985 — An F1 tornado in Fresh Meadows Park, Queens injures six people.
September 2, 1974 — An F1 tornado moved from Westchester into the Bronx
Other links
List of North American tornadoes and tornado outbreaks
List of tornadoes striking downtown areas
Tornadoes of 2007
References
External links
Public Information Statement...Revised, Tornado Damage Survey, National Weather Service, Upton NY, issued 2:32 PM EDT Tuesday August 14, 2007
A Tree Blows Down in Brooklyn |
Global Warming: The Signs and The Science

Global Warming: The Signs and The Science is a 2005 documentary film on global warming made by ETV, the PBS affiliate in South Carolina, and hosted by Alanis Morissette. The documentary examines the science behind global warming, pulling together segments filmed in the United States, Asia and South America, and shows how people in these different locales are responding in different ways to the challenges of global warming.
See also
External links
Global Warming: The Signs and The Science PBS 2005-11-02 |
Air conditioning

Air conditioning, often abbreviated as A/C (US) or air con (UK), is the process of removing heat from an enclosed space to achieve a more comfortable interior environment (sometimes referred to as 'comfort cooling'), and in some cases also strictly controlling the humidity of internal air. Air conditioning can be achieved using a mechanical 'air conditioner' or alternatively a variety of other methods, including passive cooling and ventilative cooling. Air conditioning is a member of a family of systems and techniques that provide heating, ventilation, and air conditioning (HVAC). Heat pumps are similar in many ways to air conditioners, but use a reversing valve to allow them both to heat and to cool an enclosed space.
Air conditioners, which typically use vapor-compression refrigeration, range in size from small units used within vehicles or single rooms to massive units that can cool large buildings. Air source heat pumps, which can be used for heating as well as cooling, are becoming increasingly common in cooler climates.
According to the International Energy Agency (IEA), as of 2018, 1.6 billion air conditioning units were installed, accounting for an estimated 20% of electricity usage in buildings globally, with the number expected to grow to 5.6 billion by 2050. The United Nations has called for the technology to be made more sustainable to mitigate climate change, and for the use of alternatives like passive cooling, evaporative cooling, selective shading, windcatchers, and better thermal insulation. CFC and HCFC refrigerants such as R-12 and R-22, respectively, used within air conditioners have caused damage to the ozone layer, and HFC refrigerants such as R-410a and R-404a, which were designed to replace CFCs and HCFCs, are instead exacerbating climate change. Both issues arise from the venting of refrigerant to the atmosphere, such as during repairs. HFO refrigerants, used in some if not most new equipment, address both issues, with an ozone depletion potential (ODP) of zero and a much lower global warming potential (GWP) in the single or double digits, versus the three or four digits of HFCs.
History
Air conditioning dates back to prehistory. Ancient Egyptian buildings used a wide variety of passive air-conditioning techniques. These became widespread from the Iberian Peninsula through North Africa, the Middle East, and Northern India.

Passive techniques remained widespread until the 20th century, when they fell out of fashion, replaced by powered air conditioning. Using information from engineering studies of traditional buildings, passive techniques are being revived and modified for 21st-century architectural designs.
Air conditioners allow the building's indoor environment to remain relatively constant largely independent of changes in external weather conditions and internal heat loads. They also allow deep plan buildings to be created and have allowed people to live comfortably in hotter parts of the world.
Development
Preceding discoveries
In 1558, Giambattista della Porta described a method of chilling ice to temperatures far below its freezing point by mixing it with potassium nitrate (then called "nitre") in his popular science book Natural Magic. In 1620, Cornelis Drebbel demonstrated "Turning Summer into Winter" for James I of England, chilling part of the Great Hall of Westminster Abbey with an apparatus of troughs and vats. Drebbel's contemporary Francis Bacon, like della Porta a believer in science communication, may not have been present at the demonstration, but in a book published later the same year, he described it as an "experiment of artificial freezing" and said that "Nitre (or rather its spirit) is very cold, and hence nitre or salt when added to snow or ice intensifies the cold of the latter, the nitre by adding to its own cold, but the salt by supplying activity to the cold of the snow." In 1758, Benjamin Franklin and John Hadley, a chemistry professor at the University of Cambridge, conducted an experiment to explore the principle of evaporation as a means to rapidly cool an object. Franklin and Hadley confirmed that the evaporation of highly volatile liquids (such as alcohol and ether) could be used to drive down the temperature of an object past the freezing point of water. They conducted their experiment with the bulb of a mercury-in-glass thermometer as their object and with a bellows used to speed up the evaporation. They lowered the temperature of the thermometer bulb down to −14 °C (7 °F) while the ambient temperature was 18 °C (64 °F). Franklin noted that soon after they passed the freezing point of water, 0 °C (32 °F), a thin film of ice formed on the surface of the thermometer's bulb, and that the ice mass was about 6 mm (1⁄4 in) thick when they stopped the experiment upon reaching −14 °C (7 °F).
Franklin concluded: "From this experiment one may see the possibility of freezing a man to death on a warm summer's day." The 19th century included a number of developments in compression technology. In 1820, English scientist and inventor Michael Faraday discovered that compressing and liquefying ammonia could chill air when the liquefied ammonia was allowed to evaporate. In 1842, Florida physician John Gorrie used compressor technology to create ice, which he used to cool air for his patients in his hospital in Apalachicola, Florida. He hoped to eventually use his ice-making machine to regulate the temperature of buildings and envisioned centralized air conditioning that could cool entire cities. Gorrie was granted a patent in 1851, but following the death of his main backer he was not able to realise his invention. In 1851, James Harrison created the first mechanical ice-making machine in Geelong, Australia, and was granted a patent for an ether vapor-compression refrigeration system in 1855 that produced three tons of ice per day. In 1860, Harrison established a second ice company and later entered the debate over how to compete against the American advantage of ice-refrigerated beef sales to the United Kingdom.
First devices
Electricity made development of effective units possible. In 1901, American inventor Willis H. Carrier built what is considered the first modern electrical air conditioning unit. In 1902, he installed his first air-conditioning system, in the Sackett-Wilhelms Lithographing & Publishing Company in Brooklyn, New York; his invention controlled both the temperature and humidity, which helped maintain consistent paper dimensions and ink alignment at the printing plant. Later, together with six other employees, Carrier formed The Carrier Air Conditioning Company of America, a business that in 2020 employed 53,000 people and was valued at $18.6 billion. In 1906, Stuart W. Cramer of Charlotte, North Carolina was exploring ways to add moisture to the air in his textile mill. Cramer coined the term "air conditioning", using it in a patent claim he filed that year as analogous to "water conditioning", then a well-known process for making textiles easier to process. He combined moisture with ventilation to "condition" and change the air in the factories, controlling the humidity so necessary in textile plants. Willis Carrier adopted the term and incorporated it into the name of his company. Domestic air conditioning soon took off. In 1914, the first domestic air conditioning was installed in Minneapolis in the home of Charles Gilbert Gates. It is, however, possible that the huge device (c. 2.1 m × 1.8 m × 6.1 m; 7 ft × 6 ft × 20 ft) was never used, as the house remained uninhabited (Gates had already died in October 1913).
In 1931, H.H. Schultz and J.Q. Sherman developed what would become the most common type of individual room air conditioner: one designed to sit on a window ledge. The units went on sale in 1932 at US$10,000 to $50,000 (the equivalent of $200,000 to $1,100,000 in 2022). A year later the first air conditioning systems for cars were offered for sale. Chrysler Motors introduced the first practical semi-portable air conditioning unit in 1935, and Packard became the first automobile manufacturer to offer an air conditioning unit in its cars in 1939.
Further development
Innovations in the latter half of the 20th century allowed for much more ubiquitous air conditioner use. In 1945, Robert Sherman of Lynn, Massachusetts invented a portable, in-window air conditioner that cooled, heated, humidified, dehumidified, and filtered the air. As international development has increased wealth across countries, global use of air conditioners has increased. By 2018, an estimated 1.6 billion air conditioning units were installed worldwide, with the International Energy Agency expecting this number to grow to 5.6 billion units by 2050. Between 1995 and 2004, the proportion of urban households in China with air conditioners increased from 8% to 70%. As of 2015, nearly 100 million homes, or about 87% of US households, had air conditioning systems. In 2019, it was estimated that 90% of new single-family homes constructed in the US included air conditioning (ranging from 99% in the South to 62% in the West).
Types of air conditioner
* where the typical capacity is in kilowatts, as follows:
very small: <1.5 kW
small: 1.5–3.5 kW
medium: 4.2–7.1 kW
large: 7.2–14 kW
very large: >14 kW
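The capacity bands above can be encoded as a small lookup. This is only a sketch: where the table leaves gaps between bands (e.g. 3.5–4.2 kW), the boundary handling here is an assumption.

```python
def capacity_class(kw: float) -> str:
    """Classify an air conditioner by cooling capacity in kW,
    using the bands from the table above (gap/boundary handling
    is an assumption where the table is not explicit)."""
    if kw < 1.5:
        return "very small"
    if kw <= 3.5:
        return "small"
    if kw <= 7.1:
        return "medium"
    if kw <= 14:
        return "large"
    return "very large"

print(capacity_class(5.0))   # medium
print(capacity_class(16.0))  # very large
```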
Mini-split and multi-split systems
Ductless systems (often called mini-splits, though ducted mini-split systems now exist) typically supply conditioned and heated air to a single room or a few rooms of a building, without ducts and in a decentralized manner. Multi-zone or multi-split systems are a common application of ductless systems; they allow up to eight rooms (zones or locations) to be conditioned independently of each other, each with its own indoor unit, and simultaneously from a single outdoor unit.
The first mini-split systems were sold in 1954–1968 by Mitsubishi Electric and Toshiba in Japan, where small home sizes motivated their development. Multi-zone ductless systems were invented by Daikin in 1973, and variable refrigerant flow (VRF) systems (which can be thought of as larger multi-split systems) were also invented by Daikin in 1982. Both were first sold in Japan. Variable refrigerant flow systems, compared with central plant cooling from an air handler, eliminate the need for large cool-air ducts, air handlers, and chillers; instead, cool refrigerant is transported through much smaller pipes to the indoor units in the spaces to be conditioned. This allows for less space above dropped ceilings and a lower structural impact, while also allowing more individual and independent temperature control of spaces, and the outdoor and indoor units can be spread across the building. Variable refrigerant flow indoor units can also be turned off individually in unused spaces. The lower start-up power of VRF's DC inverter compressors and their inherent DC power requirements also allow VRF solar-powered heat pumps to be run using DC-providing solar panels.
Ducted central systems
Split-system central air conditioners consist of two heat exchangers, an outside unit (the condenser) from which heat is rejected to the environment and an internal heat exchanger (the fan coil unit (FCU), air handling unit, or evaporator) with the piped refrigerant being circulated between the two. The FCU is then connected to the spaces to be cooled by ventilation ducts.
Central plant cooling
Large central cooling plants may use an intermediate coolant, such as chilled water, pumped into air handlers or fan coil units near or in the spaces to be cooled, which then duct or deliver cold air into the spaces to be conditioned. Cold air is not ducted directly to these spaces from the plant because the low density and heat capacity of air would require impractically large ducts. The chilled water is cooled by chillers in the plant, which use a refrigeration cycle to cool water, often transferring its heat to the atmosphere (even in liquid-cooled chillers) through the use of cooling towers. Chillers may be air- or liquid-cooled.
Portable units
A portable system has an indoor unit on wheels connected to an outdoor unit via flexible pipes, similar to a permanently installed unit (such as a ductless split air conditioner).
Hose systems, which can be monoblock or air-to-air, are vented to the outside via air ducts. The monoblock type collects the water in a bucket or tray and stops when full. The air-to-air type re-evaporates the water and discharges it through the ducted hose and can run continuously. Such portable units draw indoor air and expel it outdoors through a single duct, which negatively impacts their overall cooling efficiency.
Many portable air conditioners include a heating function as well as dehumidification.
Window unit and packaged terminal
The packaged terminal air conditioner (PTAC), through-the-wall, and window air conditioners are similar. PTAC systems may be adapted to provide heating in cold weather, either directly by using an electric strip, gas, or other heaters, or by reversing the refrigerant flow to heat the interior and draw heat from the exterior air, converting the air conditioner into a heat pump. They may be installed in a wall opening with the help of a special sleeve and a custom grill that is flush with the wall. Window air conditioners can also be installed in a window, but without a custom grill.
Packaged air conditioner
Packaged air conditioners (also known as self-contained units) are central systems that integrate into a single housing all the components of a split central system, and deliver air, possibly through ducts, to the spaces to be cooled. Depending on their construction, they may be located outdoors or indoors, including on roofs (rooftop units); they may draw the air to be conditioned from inside or outside a building, and they may be water-, refrigerant-, or air-cooled. Often, outdoor units are air-cooled while indoor units are liquid-cooled using a cooling tower.
Operation
Operating principles
Cooling in traditional air conditioner systems is accomplished using the vapor-compression cycle, which uses the forced circulation and phase change of a refrigerant between gas and liquid to transfer heat. The vapor-compression cycle can occur within a unitary (packaged) piece of equipment, or within a chiller that is connected to terminal cooling equipment (such as a fan coil unit in an air handler) on its evaporator side and heat rejection equipment (such as a cooling tower) on its condenser side. An air source heat pump shares many components with an air conditioning system, but includes a reversing valve, which allows the unit to heat as well as cool a space.
Air conditioning equipment will reduce the absolute humidity of the air processed by the system if the surface of the evaporator coil is significantly cooler than the dew point of the surrounding air. An air conditioner designed for an occupied space will typically achieve 30% to 60% relative humidity in the occupied space. Most modern air-conditioning systems feature a dehumidification cycle, during which the compressor runs while the fan is slowed to reduce the evaporator temperature and therefore condense more water. A dehumidifier uses the same refrigeration cycle but incorporates both the evaporator and the condenser into the same air path; the air first passes over the evaporator coil, where it is cooled and dehumidified, before passing over the condenser coil, where it is warmed again before being released back into the room.
Free cooling can sometimes be selected when the external air happens to be cooler than the internal air, so that the compressor need not be used, resulting in high cooling efficiencies for these times. This may also be combined with seasonal thermal energy storage.
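The dehumidification condition described above (the evaporator coil must be colder than the incoming air's dew point) can be sketched numerically. The Magnus approximation used here is a standard dew-point formula, not something specified in this text.

```python
import math

def dew_point_c(temp_c: float, rel_humidity: float) -> float:
    """Approximate dew point in °C via the Magnus formula.
    rel_humidity is a fraction, e.g. 0.6 for 60%."""
    a, b = 17.62, 243.12  # Magnus coefficients for water, roughly -45..60 °C
    gamma = math.log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def coil_dehumidifies(coil_temp_c: float, air_temp_c: float,
                      rel_humidity: float) -> bool:
    """Condensation (latent heat removal) occurs only when the coil
    surface is colder than the entering air's dew point."""
    return coil_temp_c < dew_point_c(air_temp_c, rel_humidity)

# 26 °C room air at 60% RH has a dew point near 17.6 °C,
# so a 10 °C evaporator coil will dehumidify it:
print(coil_dehumidifies(10, 26, 0.60))  # True
```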
Heating
Some air conditioning systems have the option to reverse the refrigeration cycle and act as an air source heat pump, thereby heating instead of cooling the indoor environment. They are also commonly referred to as "reverse cycle air conditioners". The heat pump is significantly more energy-efficient than electric resistance heating, because it moves energy from air or groundwater to the heated space, as well as the heat from purchased electrical energy. When the heat pump is in heating mode, the indoor evaporator coil switches roles and becomes the condenser coil, producing heat. The outdoor condenser unit also switches roles to serve as the evaporator and discharges cold air (colder than the ambient outdoor air).
Most air source heat pumps become less efficient in outdoor temperatures below about 4 °C (40 °F); this is partly because ice forms on the outdoor unit's heat exchanger coil, which blocks air flow over the coil. To compensate for this, the heat pump system must temporarily switch back into the regular air conditioning mode to switch the outdoor evaporator coil back to being the condenser coil, so that it can heat up and defrost. Some heat pump systems therefore have a form of electric resistance heating in the indoor air path that is activated only in this mode, in order to compensate for the temporary indoor air cooling, which would otherwise be uncomfortable in the winter.
Newer models have improved cold-weather performance, with efficient heating capacity down to −14 °F (−26 °C). However, there is always a chance that the humidity condensing on the outdoor unit's heat exchanger could freeze, even in models with improved cold-weather performance, requiring a defrost cycle to be performed.
The icing problem becomes much more severe with lower outdoor temperatures, so heat pumps are sometimes installed in tandem with a more conventional form of heating, such as an electric heater, or a natural gas, heating oil, or wood-burning fireplace or central heating system, which is used instead of or in addition to the heat pump during harsher winter temperatures. In this case, the heat pump is used efficiently during milder temperatures, and the system is switched to the conventional heat source when the outdoor temperature is lower.
Performance
The coefficient of performance (COP) of an air conditioning system is the ratio of useful heating or cooling provided to the work required. Higher COPs equate to lower operating costs. The COP usually exceeds 1; however, the exact value is highly dependent on operating conditions, especially the absolute temperature and the relative temperature between sink and system, and is often graphed or averaged against expected conditions. Air conditioner equipment power in the U.S. is often described in terms of "tons of refrigeration", with each approximately equal to the cooling power of one short ton (2,000 pounds or 910 kg) of ice melting in a 24-hour period. The value is equal to 12,000 BTU (IT) per hour, or 3,517 watts. Residential central air systems are usually from 1 to 5 tons (3.5 to 18 kW) in capacity. The efficiency of air conditioners is often rated by the seasonal energy efficiency ratio (SEER), which is defined by the Air Conditioning, Heating and Refrigeration Institute in its 2008 standard AHRI 210/240, Performance Rating of Unitary Air-Conditioning and Air-Source Heat Pump Equipment. A similar standard is the European seasonal energy efficiency ratio (ESEER).
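The unit conversions above can be checked in a few lines. The 3-ton unit drawing 3 kW of electricity is an illustrative example, not a figure from the text.

```python
WATTS_PER_TON = 3517  # one ton of refrigeration = 12,000 BTU/h ≈ 3,517 W

def tons_to_watts(tons: float) -> float:
    """Convert 'tons of refrigeration' to watts of cooling power."""
    return tons * WATTS_PER_TON

def cop(cooling_watts: float, electric_watts: float) -> float:
    """Coefficient of performance: useful cooling per unit of input power."""
    return cooling_watts / electric_watts

# A hypothetical 3-ton residential unit drawing 3 kW of electricity:
cooling_w = tons_to_watts(3)           # 10,551 W of cooling
print(round(cop(cooling_w, 3000), 2))  # 3.52
```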
Different modulating technologies
There are several ways to modulate the cooling capacity in refrigeration or air conditioning and heating systems. The most common in air conditioning are: on-off cycling, hot gas bypass, use (or not) of liquid injection, manifold configurations of multiple compressors, mechanical modulation (also called digital), and inverter technology. Each has advantages and drawbacks.
On-off cycling
On-off cycling switches the fixed-speed compressor off under light load conditions, which can lead to short cycling and a reduced compressor lifetime. The efficiency of the unit is reduced by pressure cycling and transient losses. The turn-down capacity is either 100% or 0%.
This is the simplest and most common way to modulate an air conditioner's capacity.
Hot gas bypass
Hot gas bypass involves injecting a quantity of gas from the discharge side to the suction side of the compressor. The compressor keeps operating at the same speed, but because of the bypass, the refrigerant mass flow circulating through the system is reduced, and thus so is the cooling capacity. This naturally causes the compressor to run uselessly during the periods when the bypass is operating. The turn-down capacity varies between 0 and 100%.
Manifold configurations
Several compressors can be installed in the system to provide the peak cooling capacity. Each compressor can be switched on or off to stage the cooling capacity of the unit. The turn-down capacity is 0/33/66/100% for a trio configuration and 0/50/100% for a tandem.
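The staging steps follow directly from the compressor count. A minimal sketch, assuming equal-capacity compressors (real systems may mix sizes):

```python
def stage_capacities(n_compressors: int) -> list[int]:
    """Capacity steps (% of full load) available by switching k of n
    equal compressors on: 0/50/100 for a tandem, 0/33/66/100 for a trio."""
    return [100 * k // n_compressors for k in range(n_compressors + 1)]

print(stage_capacities(2))  # [0, 50, 100]
print(stage_capacities(3))  # [0, 33, 66, 100]
```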
Mechanically modulated compressor
This internal mechanical capacity modulation is based on a periodic compression process with a control valve: the two scroll sets move apart, stopping compression for a given time period. This method varies refrigerant flow by changing the average time of compression, but not the actual speed of the motor. Despite an excellent turn-down ratio (from 10 to 100% of the cooling capacity), mechanically modulated scrolls have high energy consumption because the motor runs continuously.
Variable-speed compressor
This system uses a variable-frequency drive (also called an inverter) to control the speed of the compressor; changing the compressor speed changes the refrigerant flow rate. The turn-down ratio depends on the system configuration and manufacturer: it modulates from 15 or 25% up to 100% of full capacity with a single inverter, or from 12 to 100% with a hybrid tandem. This method is the most efficient way to modulate an air conditioner's capacity, up to 58% more efficient than a fixed-speed system.
Impact
Health effects
In hot weather, air conditioning can prevent heat stroke, dehydration from excessive perspiration, and other problems related to hyperthermia. Heat waves are the most lethal type of weather phenomenon in the United States. Air conditioning (including filtration, humidification, cooling and disinfection) can be used to provide a clean, safe, hypoallergenic atmosphere in hospital operating rooms and other environments where proper atmosphere is critical to patient safety and well-being. It is sometimes recommended for home use by people with allergies, especially mold. Poorly maintained water cooling towers can promote the growth and spread of microorganisms such as Legionella pneumophila, the infectious agent responsible for Legionnaires' disease. As long as the cooling tower is kept clean (usually by means of a chlorine treatment), these health hazards can be avoided or reduced. The state of New York has codified requirements for registration, maintenance, and testing of cooling towers to protect against Legionella.
Environmental impacts
Refrigerants have caused and continue to cause serious environmental issues, including ozone depletion and climate change, as several countries have not yet ratified the Kigali Amendment to reduce the consumption and production of hydrofluorocarbons. Air conditioning currently accounts for 20% of energy consumption in buildings globally, and the expected growth in air conditioning use due to climate change and technology uptake will drive significant energy demand growth. Alternatives to continual air conditioning include passive cooling, passive solar cooling, natural ventilation, operating shades, planting trees, architectural shading, and window coatings to reduce solar gain. In 2018, the United Nations called for the technology to be made more sustainable to mitigate climate change.
Economic effects
Air conditioning caused various shifts in demography, notably that of the United States starting from the 1970s:
In the US, the birth rate was lower in spring than during other seasons until the 1970s, but this difference has since declined.
In the US, the summer mortality rate, which had been higher in regions subject to summer heat waves, also evened out.
The Sun Belt now contains 30% of the total US population, while it was inhabited by 24% of Americans at the beginning of the 20th century. First designed to benefit targeted industries such as the press and large factories, the invention quickly spread to public agencies and administrations, with studies claiming productivity increases close to 24% in places equipped with air conditioning.
Other techniques
Buildings designed with passive air conditioning are generally less expensive to construct and maintain than buildings with conventional HVAC systems, and they have lower energy demands. While tens of air changes per hour, and cooling of tens of degrees, can be achieved with passive methods, the site-specific microclimate must be taken into account, complicating building design. Many techniques can be used to increase comfort and reduce the temperature in buildings. These include evaporative cooling, selective shading, wind, thermal convection, and heat storage.
Passive ventilation
Passive cooling
Daytime radiative cooling
Passive daytime radiative cooling (PDRC) surfaces reflect incoming solar radiation and heat back into outer space through the infrared window for cooling during the daytime. Daytime radiative cooling became possible with the ability to suppress solar heating using photonic structures, which emerged through a study by Raman et al. (2014). PDRCs can come in a variety of forms, including paint coatings and films, that are designed to be high in solar reflectance and thermal emittance. PDRC applications on building roofs and envelopes have demonstrated significant decreases in energy consumption and costs. In suburban single-family residential areas, PDRC application on roofs can potentially lower energy costs by 26% to 46%. PDRCs are predicted to show a market size of ~$27 billion for indoor space cooling by 2025 and have undergone a surge in research and development since the 2010s.
Fans
Hand fans have existed since prehistory. Large human-powered fans built into buildings include the punkah.
The 2nd-century Chinese inventor Ding Huan of the Han dynasty invented a rotary fan for air conditioning, with seven wheels 3 m (10 ft) in diameter and manually powered by prisoners. In 747, Emperor Xuanzong (r. 712–762) of the Tang dynasty (618–907) had the Cool Hall (Liang Dian 涼殿) built in the imperial palace, which the Tang Yulin describes as having water-powered fan wheels for air conditioning as well as rising jet streams of water from fountains. During the subsequent Song dynasty (960–1279), written sources mentioned the air conditioning rotary fan as even more widely used.
Thermal buffering
In areas that are cold at night or in winter, heat storage is used. Heat may be stored in earth or masonry; air is drawn past the masonry to heat or cool it. In areas that are below freezing at night in winter, snow and ice can be collected and stored in ice houses for later use in cooling. This technique is over 3,700 years old in the Middle East. Harvesting outdoor ice during winter and transporting and storing it for use in summer was practiced by wealthy Europeans in the early 1600s, and became popular in Europe and the Americas towards the end of the 1600s. This practice was replaced by mechanical compression-cycle icemakers.
Evaporative cooling
In dry, hot climates, the evaporative cooling effect may be used by placing water at the air intake, such that the draft draws air over water and then into the house. For this reason, it is sometimes said that the fountain, in the architecture of hot, arid climates, is like the fireplace in the architecture of cold climates. Evaporative cooling also makes the air more humid, which can be beneficial in a dry desert climate. Evaporative coolers tend to feel as if they are not working during times of high humidity, when there is not enough dry air for them to cool effectively. Unlike other types of air conditioners, evaporative coolers rely on outside air being channeled through cooler pads, which cool the air before it reaches the inside of the house through its air duct system; this cooled outside air must be allowed to push the warmer air within the house out through an exhaust opening, such as an open door or window.
See also
Air filter
Air purifier
Cleanroom
Crankcase heater
Energy recovery ventilation
Indoor air quality
Particulates
References
External links
U.S. Patent 808,897 Carrier's original patent
U.S. Patent 1,172,429
U.S. Patent 2,363,294
Scientific American, "Artificial Cold", 28 August 1880, p. 138
Scientific American, "The Presidential Cold Air Machine", 6 August 1881, p. 84
atmosphere of earth | The atmosphere of Earth is the layer of gases, known collectively as air, retained by Earth's gravity that surrounds the planet and forms its planetary atmosphere. The atmosphere of Earth creates pressure, absorbs most meteoroids and ultraviolet solar radiation, warms the surface through heat retention (greenhouse effect), allowing life and liquid water to exist on the Earth's surface, and reduces temperature extremes between day and night (the diurnal temperature variation).
As of 2023, by mole fraction (i.e., by number of molecules), dry air contains 78.08% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and small amounts of other gases. Air also contains a variable amount of water vapor, on average around 1% at sea level, and 0.4% over the entire atmosphere. Air composition, temperature, and atmospheric pressure vary with altitude. Within the atmosphere, air suitable for use in photosynthesis by terrestrial plants and breathing of terrestrial animals is found only in Earth's troposphere. Earth's early atmosphere consisted of gases in the solar nebula, primarily hydrogen. The atmosphere changed significantly over time, affected by many factors such as volcanism, life, and weathering. Recently, human activity has also contributed to atmospheric changes, such as global warming, ozone depletion and acid deposition.
The atmosphere has a mass of about 5.15×1018 kg, three quarters of which is within about 11 km (6.8 mi; 36,000 ft) of the surface. The atmosphere becomes thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. The Kármán line, at 100 km (62 mi) or 1.57% of Earth's radius, is often used as the border between the atmosphere and outer space. Atmospheric effects become noticeable during atmospheric reentry of spacecraft at an altitude of around 120 km (75 mi). Several layers can be distinguished in the atmosphere, based on characteristics such as temperature and composition.
The study of Earth's atmosphere and its processes is called atmospheric science (aerology), and includes multiple subfields, such as climatology and atmospheric physics. Early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann. The study of historic atmosphere is called paleoclimatology.
Composition
The three major constituents of Earth's atmosphere are nitrogen, oxygen, and argon. Water vapor accounts for roughly 0.25% of the atmosphere by mass. The concentration of water vapor (a greenhouse gas) varies significantly from around 10 ppm by mole fraction in the coldest portions of the atmosphere to as much as 5% by mole fraction in hot, humid air masses, and concentrations of other atmospheric gases are typically quoted in terms of dry air (without water vapor). The remaining gases are often referred to as trace gases, among which are other greenhouse gases, principally carbon dioxide, methane, nitrous oxide, and ozone. Besides argon, already mentioned, other noble gases, neon, helium, krypton, and xenon are also present. Filtered air includes trace amounts of many other chemical compounds. Many substances of natural origin may be present in locally and seasonally variable small amounts as aerosols in an unfiltered air sample, including dust of mineral and organic composition, pollen and spores, sea spray, and volcanic ash. Various industrial pollutants also may be present as gases or aerosols, such as chlorine (elemental or in compounds), fluorine compounds and elemental mercury vapor. Sulfur compounds such as hydrogen sulfide and sulfur dioxide (SO2) may be derived from natural sources or from industrial air pollution.
The average molecular weight of dry air, which can be used to calculate densities or to convert between mole fraction and mass fraction, is about 28.946 or 28.96 g/mol. This is decreased when the air is humid.
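The quoted mean molar mass follows directly from the mole fractions given above; the molar masses below are standard values, and this is a check rather than an authoritative calculation.

```python
# Mole fraction and molar mass (g/mol) of the main dry-air constituents
components = {
    "N2":  (0.7808, 28.014),
    "O2":  (0.2095, 31.998),
    "Ar":  (0.0093, 39.948),
    "CO2": (0.0004, 44.010),
}

# Mole-fraction-weighted average molar mass of dry air
mean_molar_mass = sum(x * m for x, m in components.values())
print(f"{mean_molar_mass:.2f} g/mol")  # ≈ 28.97, close to the quoted ~28.96
```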
The relative concentration of gases remains constant until about 10,000 m (33,000 ft).
Stratification
In general, air pressure and density decrease with altitude in the atmosphere. However, the temperature has a more complicated profile with altitude, and may remain relatively constant or even increase with altitude in some regions (see the temperature section, below). Because the general pattern of the temperature/altitude profile, or lapse rate, is constant and measurable by means of instrumented balloon soundings, the temperature behavior provides a useful metric to distinguish atmospheric layers. In this way, Earth's atmosphere can be divided (called atmospheric stratification) into five main layers: troposphere, stratosphere, mesosphere, thermosphere, and exosphere. The altitudes of the five layers are as follows:
Exosphere: 700 to 10,000 km (440 to 6,200 miles)
Thermosphere: 80 to 700 km (50 to 440 miles)
Mesosphere: 50 to 80 km (31 to 50 miles)
Stratosphere: 12 to 50 km (7 to 31 miles)
Troposphere: 0 to 12 km (0 to 7 miles)
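The decrease of pressure with altitude noted above can be sketched with the isothermal barometric formula. This is a deliberate simplification (the real atmosphere is not isothermal, as the layer descriptions below make clear), but it reproduces the "three quarters of the mass below ~11 km" figure reasonably well.

```python
import math

def pressure_ratio(h_m: float, temp_k: float = 288.15) -> float:
    """Fraction of sea-level pressure remaining at altitude h_m,
    from the isothermal barometric formula p/p0 = exp(-M*g*h / (R*T)).
    Assumes a uniform temperature, so this is only a rough sketch."""
    M = 0.028966   # mean molar mass of dry air, kg/mol
    g = 9.80665    # standard gravity, m/s^2
    R = 8.314462   # molar gas constant, J/(mol*K)
    return math.exp(-M * g * h_m / (R * temp_k))

# About three quarters of the atmosphere's mass lies below ~11 km:
print(round(1 - pressure_ratio(11_000), 2))  # 0.73
```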
Exosphere
The exosphere is the outermost layer of Earth's atmosphere (though it is so tenuous that some scientists consider it to be part of interplanetary space rather than part of the atmosphere). It extends from the thermopause (also known as the "exobase") at the top of the thermosphere to a poorly defined boundary with the solar wind and interplanetary medium. The altitude of the exobase varies from about 500 kilometres (310 mi; 1,600,000 ft) to about 1,000 kilometres (620 mi) in times of higher incoming solar radiation. The upper limit varies depending on the definition. Various authorities consider it to end at about 10,000 kilometres (6,200 mi) or about 190,000 kilometres (120,000 mi)—about halfway to the moon, where the influence of Earth's gravity is about the same as radiation pressure from sunlight. The geocorona visible in the far ultraviolet (caused by neutral hydrogen) extends to at least 100,000 kilometres (62,000 mi). This layer is mainly composed of extremely low densities of hydrogen, helium and several heavier molecules including nitrogen, oxygen and carbon dioxide closer to the exobase. The atoms and molecules are so far apart that they can travel hundreds of kilometres without colliding with one another. Thus, the exosphere no longer behaves like a gas, and the particles constantly escape into space. These free-moving particles follow ballistic trajectories and may migrate in and out of the magnetosphere or the solar wind. Every second, the Earth loses about 3 kg of hydrogen, 50 g of helium, and much smaller amounts of other constituents. The exosphere is too far above Earth for meteorological phenomena to be possible. However, Earth's auroras—the aurora borealis (northern lights) and aurora australis (southern lights)—sometimes occur in the lower part of the exosphere, where they overlap into the thermosphere. The exosphere contains many of the artificial satellites that orbit Earth.
Thermosphere
The thermosphere is the second-highest layer of Earth's atmosphere. It extends from the mesopause (which separates it from the mesosphere) at an altitude of about 80 km (50 mi; 260,000 ft) up to the thermopause at an altitude range of 500–1000 km (310–620 mi; 1,600,000–3,300,000 ft). The height of the thermopause varies considerably due to changes in solar activity. Because the thermopause lies at the lower boundary of the exosphere, it is also referred to as the exobase. The lower part of the thermosphere, from 80 to 550 kilometres (50 to 342 mi) above Earth's surface, contains the ionosphere.
The temperature of the thermosphere gradually increases with height and can rise as high as 1500 °C (2700 °F), though the gas molecules are so far apart that its temperature in the usual sense is not very meaningful. The air is so rarefied that an individual molecule (of oxygen, for example) travels an average of 1 kilometre (0.62 mi; 3300 ft) between collisions with other molecules. Although the thermosphere has a high proportion of molecules with high energy, it would not feel hot to a human in direct contact, because its density is too low to conduct a significant amount of energy to or from the skin.
This layer is completely cloudless and free of water vapor. However, non-hydrometeorological phenomena such as the aurora borealis and aurora australis are occasionally seen in the thermosphere. The International Space Station orbits in this layer, between 350 and 420 km (220 and 260 mi), as do many of the other satellites orbiting Earth.
Mesosphere
The mesosphere is the third highest layer of Earth's atmosphere, occupying the region above the stratosphere and below the thermosphere. It extends from the stratopause at an altitude of about 50 km (31 mi; 160,000 ft) to the mesopause at 80–85 km (50–53 mi; 260,000–280,000 ft) above sea level.
Temperatures drop with increasing altitude to the mesopause that marks the top of this middle layer of the atmosphere. It is the coldest place on Earth and has an average temperature around −85 °C (−120 °F; 190 K).
Just below the mesopause, the air is so cold that even the very scarce water vapor at this altitude can condense into polar-mesospheric noctilucent clouds of ice particles. These are the highest clouds in the atmosphere and may be visible to the naked eye if sunlight reflects off them about an hour or two after sunset or similarly before sunrise. They are most readily visible when the Sun is around 4 to 16 degrees below the horizon. Lightning-induced discharges known as transient luminous events (TLEs) occasionally form in the mesosphere above tropospheric thunderclouds. The mesosphere is also the layer where most meteors burn up upon atmospheric entrance. It is too high above Earth to be accessible to jet-powered aircraft and balloons, and too low to permit orbital spacecraft. The mesosphere is mainly accessed by sounding rockets and rocket-powered aircraft.
Stratosphere
The stratosphere is the second-lowest layer of Earth's atmosphere. It lies above the troposphere and is separated from it by the tropopause. This layer extends from the top of the troposphere at roughly 12 km (7.5 mi; 39,000 ft) above Earth's surface to the stratopause at an altitude of about 50 to 55 km (31 to 34 mi; 164,000 to 180,000 ft).
The atmospheric pressure at the top of the stratosphere is roughly 1/1000 the pressure at sea level. It contains the ozone layer, which is the part of Earth's atmosphere that contains relatively high concentrations of that gas. The stratosphere defines a layer in which temperatures rise with increasing altitude. This rise in temperature is caused by the absorption of ultraviolet radiation (UV) from the Sun by the ozone layer, which restricts turbulence and mixing. Although the temperature may be −60 °C (−76 °F; 210 K) at the tropopause, the top of the stratosphere is much warmer, and may be near 0 °C.
The stratospheric temperature profile creates very stable atmospheric conditions, so the stratosphere lacks the weather-producing air turbulence that is so prevalent in the troposphere. Consequently, the stratosphere is almost completely free of clouds and other forms of weather. However, polar stratospheric or nacreous clouds are occasionally seen in the lower part of this layer of the atmosphere where the air is coldest. The stratosphere is the highest layer that can be accessed by jet-powered aircraft.
Troposphere
The troposphere is the lowest layer of Earth's atmosphere. It extends from Earth's surface to an average height of about 12 km (7.5 mi; 39,000 ft), although this altitude varies from about 9 km (5.6 mi; 30,000 ft) at the geographic poles to 17 km (11 mi; 56,000 ft) at the Equator, with some variation due to weather. The troposphere is bounded above by the tropopause, a boundary marked in most places by a temperature inversion (i.e. a layer of relatively warm air above a colder one), and in others by a zone that is isothermal with height.
Although variations do occur, the temperature usually declines with increasing altitude in the troposphere because the troposphere is mostly heated through energy transfer from the surface. Thus, the lowest part of the troposphere (i.e. Earth's surface) is typically the warmest section of the troposphere. This promotes vertical mixing (hence, the origin of its name in the Greek word τρόπος, tropos, meaning "turn"). The troposphere contains roughly 80% of the mass of Earth's atmosphere. The troposphere is denser than all its overlying layers because a larger atmospheric weight sits on top of the troposphere and causes it to be most severely compressed. Fifty percent of the total mass of the atmosphere is located in the lower 5.6 km (3.5 mi; 18,000 ft) of the troposphere.
Nearly all atmospheric water vapor or moisture is found in the troposphere, so it is the layer where most of Earth's weather takes place. Essentially all weather-associated cloud genus types are generated in this layer by active wind circulation, although very tall cumulonimbus thunderclouds can penetrate the tropopause from below and rise into the lower part of the stratosphere. Most conventional aviation activity takes place in the troposphere, and it is the only layer accessible by propeller-driven aircraft.
Other layers
Within the five principal layers above, which are largely determined by temperature, several secondary layers may be distinguished by other properties:
The ozone layer is contained within the stratosphere. In this layer ozone concentrations are about 2 to 8 parts per million, which is much higher than in the lower atmosphere but still very small compared to the main components of the atmosphere. It is mainly located in the lower portion of the stratosphere from about 15–35 km (9.3–21.7 mi; 49,000–115,000 ft), though the thickness varies seasonally and geographically. About 90% of the ozone in Earth's atmosphere is contained in the stratosphere.
The ionosphere is a region of the atmosphere that is ionized by solar radiation. It is responsible for auroras. During daytime hours, it stretches from 50 to 1,000 km (31 to 621 mi; 160,000 to 3,280,000 ft) and includes the mesosphere, thermosphere, and parts of the exosphere. However, ionization in the mesosphere largely ceases during the night, so auroras are normally seen only in the thermosphere and lower exosphere. The ionosphere forms the inner edge of the magnetosphere. It has practical importance because it influences, for example, radio propagation on Earth.
The homosphere and heterosphere are defined by whether the atmospheric gases are well mixed. The surface-based homosphere includes the troposphere, stratosphere, mesosphere, and the lowest part of the thermosphere, where the chemical composition of the atmosphere does not depend on molecular weight because the gases are mixed by turbulence. This relatively homogeneous layer ends at the turbopause, found at about 100 km (62 mi; 330,000 ft), which places it about 20 km (12 mi; 66,000 ft) above the mesopause; this altitude is also the very edge of space as accepted by the FAI. Above this altitude lies the heterosphere, which includes the exosphere and most of the thermosphere. Here, the chemical composition varies with altitude. This is because the distance that particles can move without colliding with one another is large compared with the size of motions that cause mixing. This allows the gases to stratify by molecular weight, with the heavier ones, such as oxygen and nitrogen, present only near the bottom of the heterosphere. The upper part of the heterosphere is composed almost completely of hydrogen, the lightest element.
The planetary boundary layer is the part of the troposphere that is closest to Earth's surface and is directly affected by it, mainly through turbulent diffusion. During the day the planetary boundary layer usually is well mixed, whereas at night it becomes stably stratified with weak or intermittent mixing. The depth of the planetary boundary layer ranges from as little as about 100 metres (330 ft) on clear, calm nights to 3,000 m (9,800 ft) or more during the afternoon in dry regions.
The average temperature of the atmosphere at Earth's surface is 14 °C (57 °F; 287 K) or 15 °C (59 °F; 288 K), depending on the reference.
Physical properties
Pressure and thickness
The average atmospheric pressure at sea level is defined by the International Standard Atmosphere as 101325 pascals (760.00 Torr; 14.6959 psi; 760.00 mmHg); this pressure is sometimes referred to as one standard atmosphere (atm). Total atmospheric mass is 5.1480×1018 kg (1.135×1019 lb), about 2.5% less than would be inferred from the average sea-level pressure and Earth's area of 51007.2 megahectares, this portion being displaced by Earth's mountainous terrain. Atmospheric pressure is the total weight of the air above unit area at the point where the pressure is measured. Thus air pressure varies with location and weather.
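The relationship between sea-level pressure and total atmospheric mass described above can be checked with a rough calculation (a sketch using the standard values quoted in this section):

```python
# Back-of-envelope check: sea-level pressure times Earth's surface area,
# divided by standard gravity, approximates the total atmospheric mass.
P0 = 101325.0        # Pa, standard sea-level pressure
area = 5.10072e14    # m^2, Earth's surface area (51,007.2 megahectares)
g = 9.80665          # m/s^2, standard gravity

inferred_mass = P0 * area / g   # kg, mass implied by the pressure alone
measured_mass = 5.1480e18       # kg, quoted total atmospheric mass

# The inferred value overshoots by roughly 2.5%: terrain rising above sea
# level displaces part of the air column the simple calculation assumes.
excess = inferred_mass / measured_mass - 1
print(f"inferred: {inferred_mass:.3e} kg, excess: {excess:.1%}")
```

The small discrepancy is exactly the "displaced by Earth's mountainous terrain" correction the text mentions.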
If the entire mass of the atmosphere had a uniform density equal to sea level density (about 1.2 kg per m3) from sea level upwards, it would terminate abruptly at an altitude of 8.50 km (27,900 ft).
Air pressure actually decreases exponentially with altitude: for altitudes up to around 70 km (43 mi; 230,000 ft), it drops by half every 5.6 km (18,000 ft), or by a factor of 1/e (about 0.368) every 7.64 km (25,100 ft), a distance called the scale height. However, the atmosphere is more accurately modeled with a customized equation for each layer that takes gradients of temperature, molecular composition, solar radiation and gravity into account. At heights over 100 km, an atmosphere may no longer be well mixed; each chemical species then has its own scale height.
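The exponential fall-off can be sketched directly from the quoted scale height (an isothermal single-scale-height model, so only an approximation of the real layered atmosphere):

```python
import math

# Isothermal approximation: p(z) = p0 * exp(-z / H), with scale height H.
H = 7.64e3       # m, scale height quoted above (pressure drops by 1/e per H)
p0 = 101325.0    # Pa, sea-level pressure

def pressure(z_m):
    """Approximate pressure (Pa) at altitude z_m under the isothermal model."""
    return p0 * math.exp(-z_m / H)

# Solving exp(-z/H) = 1/2 gives the halving distance z = H * ln 2, about
# 5.3 km -- close to, though not identical with, the 5.6 km figure quoted,
# because the real atmosphere is not isothermal.
half_height = H * math.log(2)
print(f"halving distance: {half_height/1000:.1f} km")
print(f"pressure at 5.6 km: {pressure(5.6e3):.0f} Pa")
```

For real work the layered equations of a standard-atmosphere model would replace this single exponential.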
In summary, the mass of Earth's atmosphere is distributed approximately as follows:
50% is below 5.6 km (18,000 ft).
90% is below 16 km (52,000 ft).
99.99997% is below 100 km (62 mi; 330,000 ft), the Kármán line. By international convention, this marks the beginning of space, where human travelers are considered astronauts.
By comparison, the summit of Mount Everest is at 8,848 m (29,029 ft); commercial airliners typically cruise between 10 and 13 km (33,000 and 43,000 ft), where the lower density and temperature of the air improve fuel economy; weather balloons reach 30.4 km (100,000 ft) and above; and the highest X-15 flight in 1963 reached 108.0 km (354,300 ft).
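The tabulated mass fractions can be roughly reproduced from the 5.6 km half-height alone (a crude isothermal sketch; the real fractions deviate at altitude because the scale height itself varies with temperature):

```python
def mass_fraction_below(z_km, half_height_km=5.6):
    """Crude isothermal estimate: pressure (and hence the overlying mass)
    halves every half_height_km, so the fraction below z is 1 - 2**(-z/h)."""
    return 1.0 - 2.0 ** (-z_km / half_height_km)

# Compare with the figures in the list above: 50% below 5.6 km by
# construction, ~86% below 16 km (quoted: 90%), and essentially all of the
# atmosphere below the 100 km Karman line.
for z in (5.6, 16, 100):
    print(f"{z:6.1f} km: {mass_fraction_below(z):.7f}")
```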
Even above the Kármán line, significant atmospheric effects such as auroras still occur. Meteors begin to glow in this region, though the larger ones may not burn up until they penetrate more deeply. The various layers of Earth's ionosphere, important to HF radio propagation, begin below 100 km and extend beyond 500 km. By comparison, the International Space Station and Space Shuttle typically orbit at 350–400 km, within the F-layer of the ionosphere, where they encounter enough atmospheric drag to require reboosts every few months; otherwise, orbital decay would occur, resulting in a return to Earth. Depending on solar activity, satellites can experience noticeable atmospheric drag at altitudes as high as 700–800 km.
Temperature
The division of the atmosphere into layers mostly by reference to temperature is discussed above. Temperature decreases with altitude starting at sea level, but variations in this trend begin above 11 km, where the temperature stabilizes over a large vertical distance through the rest of the troposphere. In the stratosphere, starting above about 20 km, the temperature increases with height, due to heating within the ozone layer caused by the capture of significant ultraviolet radiation from the Sun by the dioxygen and ozone gas in this region. Still another region of increasing temperature with altitude occurs at very high altitudes, in the aptly-named thermosphere above 90 km.
Speed of sound
Because in an ideal gas of constant composition the speed of sound depends only on temperature and not on pressure or density, the speed of sound in the atmosphere with altitude takes on the form of the complicated temperature profile (see illustration to the right), and does not mirror altitudinal changes in density or pressure.
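The temperature-only dependence can be made concrete with the ideal-gas formula c = sqrt(γ·R·T) for dry air (standard textbook constants; a sketch, not a full atmospheric model):

```python
import math

def speed_of_sound(T_kelvin):
    """Ideal-gas speed of sound in dry air: c = sqrt(gamma * R_air * T).
    Note that pressure and density do not appear -- only temperature."""
    gamma = 1.4      # heat-capacity ratio for (mostly diatomic) air
    R_air = 287.05   # J/(kg*K), specific gas constant of dry air
    return math.sqrt(gamma * R_air * T_kelvin)

# Sea level (15 degC) versus the cold tropopause (about -57 degC):
print(f"{speed_of_sound(288.15):.0f} m/s")  # ~340 m/s
print(f"{speed_of_sound(216.65):.0f} m/s")  # ~295 m/s
```

This is why the speed-of-sound profile traces the temperature profile rather than the monotonically falling pressure or density.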
Density and mass
The density of air at sea level is about 1.2 kg/m3 (1.2 g/L, 0.0012 g/cm3). Density is not measured directly but is calculated from measurements of temperature, pressure and humidity using the equation of state for air (a form of the ideal gas law). Atmospheric density decreases as the altitude increases. This variation can be approximately modeled using the barometric formula. More sophisticated models are used to predict the orbital decay of satellites.
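The ideal-gas calculation mentioned above looks like this for dry air (a minimal sketch; real meteorological calculations also apply a humidity correction, as the text notes):

```python
def air_density(pressure_pa, temp_kelvin):
    """Dry-air density from the ideal gas law: rho = p / (R_air * T)."""
    R_air = 287.05  # J/(kg*K), specific gas constant of dry air
    return pressure_pa / (R_air * temp_kelvin)

# Standard sea-level conditions recover roughly the 1.2 kg/m^3 quoted above.
rho0 = air_density(101325.0, 288.15)
print(f"{rho0:.3f} kg/m^3")  # ~1.225
```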
The average mass of the atmosphere is about 5 quadrillion (5×1015) tonnes or 1/1,200,000 the mass of Earth. According to the American National Center for Atmospheric Research, "The total mean mass of the atmosphere is 5.1480×1018 kg with an annual range due to water vapor of 1.2 or 1.5×1015 kg, depending on whether surface pressure or water vapor data are used; somewhat smaller than the previous estimate. The mean mass of water vapor is estimated as 1.27×1016 kg and the dry air mass as 5.1352 ±0.0003×1018 kg."
Tabulated properties
Table of physical and thermal properties of air at atmospheric pressure:
Optical properties
Solar radiation (or sunlight) is the energy Earth receives from the Sun. Earth also emits radiation back into space, but at longer wavelengths that humans cannot see. Part of the incoming and emitted radiation is absorbed or reflected by the atmosphere. In May 2017, glints of light, seen as twinkling from an orbiting satellite a million miles away, were found to be reflected light from ice crystals in the atmosphere.
Scattering
When light passes through Earth's atmosphere, photons interact with it through scattering. If the light does not interact with the atmosphere, it is called direct radiation and is what you see if you were to look directly at the Sun. Indirect radiation is light that has been scattered in the atmosphere. For example, on an overcast day when you cannot see your shadow, there is no direct radiation reaching you; it has all been scattered. As another example, due to a phenomenon called Rayleigh scattering, shorter (blue) wavelengths scatter more easily than longer (red) wavelengths. This is why the sky looks blue; you are seeing scattered blue light. This is also why sunsets are red: because the Sun is close to the horizon, its rays pass through more atmosphere than normal before reaching your eye. Much of the blue light has been scattered out, leaving the red light in a sunset.
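The strength of the blue/red asymmetry follows from the Rayleigh scattering law, whose intensity scales as 1/λ⁴ (the representative wavelengths below are illustrative choices, not values from the text):

```python
# Rayleigh scattering intensity scales as 1 / wavelength**4, so shorter
# (blue) wavelengths are scattered much more strongly than longer (red) ones.
blue_nm = 450.0  # representative blue wavelength
red_nm = 700.0   # representative red wavelength

ratio = (red_nm / blue_nm) ** 4
print(f"blue scatters ~{ratio:.1f}x more strongly than red")
```

A factor of nearly six explains both the blue daytime sky and the red light left over at sunset.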
Absorption
Different molecules absorb different wavelengths of radiation. For example, O2 and O3 absorb almost all radiation with wavelengths shorter than 300 nanometres. Water (H2O) absorbs at many wavelengths above 700 nm. When a molecule absorbs a photon, it increases the energy of the molecule. This heats the atmosphere, but the atmosphere also cools by emitting radiation, as discussed below.
The combined absorption spectra of the gases in the atmosphere leave "windows" of low opacity, allowing the transmission of only certain bands of light. The optical window runs from around 300 nm (ultraviolet-C) up into the range humans can see, the visible spectrum (commonly called light), at roughly 400–700 nm and continues to the infrared to around 1100 nm. There are also infrared and radio windows that transmit some infrared and radio waves at longer wavelengths. For example, the radio window runs from about one centimetre to about eleven-metre waves.
Emission
Emission is the opposite of absorption: it occurs when an object emits radiation. Objects tend to emit amounts and wavelengths of radiation depending on their "black body" emission curves; hotter objects therefore tend to emit more radiation, at shorter wavelengths, while colder objects emit less radiation, at longer wavelengths. For example, the Sun is approximately 6,000 K (5,730 °C; 10,340 °F); its radiation peaks near 500 nm and is visible to the human eye. Earth is approximately 290 K (17 °C; 62 °F), so its radiation peaks near 10,000 nm, much too long to be visible to humans.
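Both peak wavelengths follow from Wien's displacement law, λ_peak = b/T (a standard black-body result, sketched here with the usual value of Wien's constant):

```python
def wien_peak_nm(temp_kelvin):
    """Wien's displacement law: peak emission wavelength = b / T."""
    b = 2.898e-3  # m*K, Wien's displacement constant
    return b / temp_kelvin * 1e9  # convert metres to nanometres

print(f"Sun (6000 K):  {wien_peak_nm(6000):.0f} nm")  # ~483 nm, visible
print(f"Earth (290 K): {wien_peak_nm(290):.0f} nm")   # ~10,000 nm, infrared
```

The two results match the "near 500 nm" and "near 10,000 nm" figures quoted in the text.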
Because of its temperature, the atmosphere emits infrared radiation. For example, on clear nights Earth's surface cools down faster than on cloudy nights. This is because clouds (H2O) are strong absorbers and emitters of infrared radiation. This is also why it becomes colder at night at higher elevations.
The greenhouse effect is directly related to this absorption and emission effect. Some gases in the atmosphere absorb and emit infrared radiation, but do not interact with sunlight in the visible spectrum. Common examples of these are CO2 and H2O.
Refractive index
The refractive index of air is close to, but just greater than 1. Systematic variations in the refractive index can lead to the bending of light rays over long optical paths. One example is that, under some circumstances, observers on board ships can see other vessels just over the horizon because light is refracted in the same direction as the curvature of Earth's surface.
The refractive index of air depends on temperature, giving rise to refraction effects when the temperature gradient is large. An example of such effects is the mirage.
Circulation
Atmospheric circulation is the large-scale movement of air through the troposphere, and the means (with ocean circulation) by which heat is distributed around Earth. The large-scale structure of the atmospheric circulation varies from year to year, but the basic structure remains fairly constant because it is determined by Earth's rotation rate and the difference in solar radiation between the equator and poles.
Evolution of Earth's atmosphere
Earliest atmosphere
The first atmosphere consisted of gases in the solar nebula, primarily hydrogen. There were probably simple hydrides such as those now found in the gas giants (Jupiter and Saturn), notably water vapor, methane and ammonia.
Second atmosphere
Outgassing from volcanism, supplemented by gases produced during the late heavy bombardment of Earth by huge asteroids, produced the next atmosphere, consisting largely of nitrogen plus carbon dioxide and inert gases. A major part of carbon-dioxide emissions dissolved in water and reacted with metals such as calcium and magnesium during weathering of crustal rocks to form carbonates that were deposited as sediments. Water-related sediments have been found that date from as early as 3.8 billion years ago.
About 3.4 billion years ago, nitrogen formed the major part of the then stable "second atmosphere". The influence of life has to be taken into account rather soon in the history of the atmosphere because hints of early life-forms appear as early as 3.5 billion years ago. How Earth at that time maintained a climate warm enough for liquid water and life, if the early Sun put out 30% lower solar radiance than today, is a puzzle known as the "faint young Sun paradox".
The geological record, however, shows a continuous, relatively warm surface during the complete early temperature record of Earth – with the exception of one cold glacial phase about 2.4 billion years ago. In the late Archean Eon an oxygen-containing atmosphere began to develop, apparently produced by photosynthesizing cyanobacteria (see Great Oxygenation Event), which have been found as stromatolite fossils from 2.7 billion years ago. Early basic carbon isotopy (isotope ratio proportions) strongly suggests conditions similar to the current ones, and that the fundamental features of the carbon cycle became established as early as 4 billion years ago.
Ancient sediments in Gabon dating from between about 2.15 and 2.08 billion years ago provide a record of Earth's dynamic oxygenation evolution. These fluctuations in oxygenation were likely driven by the Lomagundi carbon isotope excursion.
Third atmosphere
The constant re-arrangement of continents by plate tectonics influences the long-term evolution of the atmosphere by transferring carbon dioxide to and from large continental carbonate stores. Free oxygen did not exist in the atmosphere until about 2.4 billion years ago during the Great Oxygenation Event and its appearance is indicated by the end of the banded iron formations.
Before this time, any oxygen produced by photosynthesis was consumed by the oxidation of reduced materials, notably iron. Free oxygen molecules did not start to accumulate in the atmosphere until the rate of production of oxygen began to exceed the availability of reducing materials that removed oxygen. This point signifies a shift from a reducing atmosphere to an oxidizing atmosphere. O2 showed major variations until reaching a steady state of more than 15% by the end of the Precambrian. The following time span from 539 million years ago to the present day is the Phanerozoic Eon, during the earliest period of which, the Cambrian, oxygen-requiring metazoan life forms began to appear.
The amount of oxygen in the atmosphere has fluctuated over the last 600 million years, reaching a peak of about 30% around 280 million years ago, significantly higher than today's 21%. Two main processes govern changes in the atmosphere: Plants using carbon dioxide from the atmosphere and releasing oxygen, and then plants using some oxygen at night by the process of photorespiration while the remaining oxygen is used to break down organic material. Breakdown of pyrite and volcanic eruptions release sulfur into the atmosphere, which reacts with oxygen and hence reduces its amount in the atmosphere. However, volcanic eruptions also release carbon dioxide, which plants can convert to oxygen. The cause of the variation of the amount of oxygen in the atmosphere is not known. Periods with much oxygen in the atmosphere are associated with the rapid development of animals.
Air pollution
Air pollution is the introduction into the atmosphere of chemicals, particulate matter or biological materials that cause harm or discomfort to organisms. Stratospheric ozone depletion is caused by air pollution, chiefly from chlorofluorocarbons and other ozone-depleting substances.
Since 1750, human activity has increased the concentrations of various greenhouse gases, most importantly carbon dioxide, methane and nitrous oxide. This increase has caused an observed rise in global temperatures. Global average surface temperatures were 1.1 °C higher in the 2011–2020 decade than they were in 1850.
Images from space
On October 19, 2015, NASA started a website containing daily images of the full sunlit side of Earth at https://epic.gsfc.nasa.gov/. The images are taken from the Deep Space Climate Observatory (DSCOVR) and show Earth as it rotates during a day.
See also
References
External links
Buchan, Alexander (1878). "Atmosphere" . Encyclopædia Britannica. Vol. III (9th ed.). pp. 28–36.
Interactive global map of current atmospheric and ocean surface conditions. |
hell and high water (book) | Hell and High Water: Global Warming – the Solution and the Politics – and What We Should Do is a book by author, scientist, and former U.S. Department of Energy official Joseph J. Romm, published December 26, 2006. The author is "one of the world's leading experts on clean energy, advanced vehicles, energy security, and greenhouse gas mitigation." The book warned of dire consequences to the U.S. and the world if wide-scale environmental changes are not enacted by the U.S. government. It reviewed the evidence that the initial global warming changes would lead to feedbacks and accelerated warming. According to Romm, the oceans, soils, Arctic permafrost, and rainforests could become sources of greenhouse gas emissions. The book claimed that, without serious government action, sea levels would rise high enough to submerge numerous coastal communities and inland areas on both U.S. coasts and around the world by the year 2100.
In 2008, Time magazine wrote that "On [Romm's] blog and in his most recent book, Hell and High Water, you can find some of the most cogent, memorable, and deployable arguments for immediate and overwhelming action to confront global warming." Romm was interviewed on Fox News on January 31, 2007, about the book and the IPCC Fourth Assessment Report climate report.
Summary of the book
Part I, comprising the first four chapters of the book, reviews the science of climate change, setting forth the evidence that humans are causing an unprecedented increase in carbon emissions that is, in turn causing global warming. The book describes the consequences of unchecked climate change, such as destruction of coastal cities due to rising sea levels and mega-hurricanes; increasing droughts and deadly water shortages; infestation of insects into new ranges; and increased famines, heat waves, forest fires and desertification. The book sets forth the research on "feedback loops" that would contribute to accelerating climate change, including:
melting ice at the poles that means less reflection of sunlight by white ice and more absorption of the sun's heat by ocean water and dark land;
an increasing amount of water vapor in the atmosphere (water vapor is a greenhouse gas);
melting permafrost in the Arctic, where more carbon is locked in Arctic permafrost than in all of the Earth's atmosphere and where methane, which is about 20 times more powerful than carbon dioxide (CO2) as a greenhouse gas, is being released by permafrost in the Arctic faster than scientists previously thought it would;
the death of algae and phytoplankton from heat and acidity in the oceans, reducing the CO2 being absorbed by them; and
the reduced ability of tropical forests to absorb CO2 as they are destroyed.
Romm proposes an eight-point program, based on existing technologies, to counter and then reverse the trend toward catastrophic global warming: performance-based efficiency programs; energy efficiency gains from industry and power generation through cogeneration of heat and power; building wind farms; capturing carbon dioxide from proposed coal plants; building nuclear plants; greatly improving the fuel economy of our vehicles using PHEVs; increasing production of high-yield energy crops; and stopping tropical deforestation while planting more trees. (pp. 22–23)
Part I then offers extrapolations, based on various models and analyses, of what will happen to the U.S. and the world by 2025, 2050 and 2100 if decisive action is not taken quickly. Treehugger.com called this "an explanation that is both comprehensive and comprehensible." The book claimed that, without serious government action within the following ten years, sea levels would eventually rise high enough to submerge numerous coastal communities and inland areas on both U.S. coasts and around the world, causing over 100 million "environmental refugees" to flee the coasts by the year 2100.
Part II, comprising the next six chapters, discusses the politics and media issues that the author says are delaying such decisive action (and how this has a negative influence on the behavior of other countries, particularly China) and also discusses the currently available technological solutions to global warming. The book asserts that there has been a disingenuous, concerted and effective campaign to convince Americans that the science is not proven, or that global warming is the result of natural cycles, and that there needs to be more research. The book claims that, to delay action, industry and government spokesmen suggest falsely that "technology breakthroughs" will eventually save us with hydrogen cars and other fixes. It asserts that the reason for this denial and delay is that "ideology trumps rationality. ... Most conservatives cannot abide the solution to global warming – strong government regulations and a government-led effort to accelerate clean-energy technologies into the market." (p. 107) Romm says that the media have acted as enablers of this program of denial in the misguided belief that the pursuit of "balance" is superior to the pursuit of truth, even in science journalism. The book describes how this has led to skewed public opinion and to Congress cutting funds for programs aimed at accelerating the deployment into the American market of cost-effective technologies already available.
The book spends many pages refuting the "hydrogen myth" (see also Romm's previous book, The Hype about Hydrogen) and "the geo-engineering fantasy". In Chapter 7, the book describes technology strategies that it claims would permit the U.S., over the next two decades, to cut its carbon dioxide emissions by two-thirds without increasing the energy costs of either consumers or businesses. These include launching "a massive performance-based efficiency program for homes, commercial buildings and new construction ... 
[and] a massive effort to boost the efficiency of heavy industry and expand the use of cogeneration ... [c]apture the CO2 from 800 new large coal plants and store it underground, [b]uild 1 million large wind turbines ... [and] [b]uild 700 new large nuclear power plants".
The book's conclusion calls on voters to demand immediate action. The conclusion is followed by over 50 pages of extensive endnotes and an index.
Tyler Hamilton, in his review of the book for The Toronto Star, summarizes the book's contents as follows: "Whereas the first third of Romm's book presents overwhelming and disturbing evidence that human-caused greenhouse gases are the primary ingredients behind global warming, the pages that follow offer alarming detail on how the U.S. public is being misled by a federal government (backed by conservative political forces) that is intent on inaction, and that's also on a mission to derail international efforts to curb emissions."
In his book Hell and High Water, Romm discusses the urgency to act and the sad fact that America is refusing to do so. ... Romm gives a name to those such as ExxonMobil who deny that global warming is occurring and are working to persuade others of this money-making myth: they are the Denyers and Delayers. They are better rhetoricians than scientists are. ... Global warming is happening now, and Romm... gives us 10 years to change the way we live before it’s too late to use existing technology to save the world. "...humanity already possesses the fundamental scientific, technical, and industrial know-how to solve the carbon and climate problem for the next half-century. The tragedy, then, as historians of the future will most certainly recount, is that we ruined their world not because we lacked the knowledge or the technology to save it but simply because we chose not to make the effort” (Romm, 25).
Challenges and solutions identified by the book
The book claims that U.S. politicians who deny the science and have failed to take genuine action on conservation and alternative energy initiatives are following a disastrous course by delaying serious changes that he says are imminently needed. Romm also criticizes the media for what he says is sloppy reporting and an unwillingness to probe behind political rhetoric, which he says are lulling Americans into accepting continuing delays on implementing emission-cutting technologies. The book argues that there is a limited window of opportunity to head off the most catastrophic effects of global warming, and it calls upon Americans to demand that our government "embrace an aggressive multi-decade, government-led effort to use existing and near-term clean energy technologies." (p. 230)
Romm writes that strategies to combat climate change with current technologies can significantly slow global warming and buy more time for the world to develop new technologies and take even stronger action. The book lays out a number of proposed solutions to avoiding a climate catastrophe, including:
launching massive energy efficiency programs for homes, office buildings, and heavy industry;
increasing the fuel efficiency of cars and light trucks to 60 miles per gallon while also equipping them with advanced plug-in hybrid technology;
building 1 million large wind turbines; and
ceasing tropical deforestation and reversing the trend by planting trees.

The book states, "The IPCC's Fourth Assessment Report this year (2007) will present a much stronger consensus and a much clearer and darker picture of our likely future than the Third Assessment – but it will almost certainly still underestimate the likely impacts. The Fifth Assessment, due around 2013, should include many of the omitted feedbacks, like that of the [carbon emissions caused by] defrosting tundra, and validate the scenarios described on these pages." (p. 94)
Critical response
The Toronto Star's January 1, 2007 review of the book says that Romm "convincingly shoots down the arguments of those who claim global warming is a hoax or some kind of natural cycle not associated with human activities." The review laments that the "'Denyers and Delayers' are winning the political battle in the United States, the world's highest emitter of greenhouse gases and a saboteur of Kyoto talks" and that the media's policy of "giving 'equal time' to Denyers gives the public the wrong impression about our understanding and level of certainty around global warming science." The review concludes, "The book itself is a short and easy read, not as intimidating as some other works, and it hits all the main points on the science and politics behind global warming, and the policy and technological solutions to minimize damage to the planet, economy and humanity."

A review in the Detroit Free Press's Freep.com stated, "Joseph Romm's Hell and High Water is a great book for people who want to understand the complexities of global warming and, perhaps more important, what we could be doing about it other than wringing our hands or sticking our collective head in the sand." Technology Review concluded, "His book provides an accurate summary of what is known about global warming and climate change, a sensible agenda for technology and policy, and a primer on how political disinformation has undermined climate science." BooksPath Reviews commented, "Hell and High Water is nothing less than a wake-up call to the country. It is a searing critique of American environmental and energy policy and a passionate call to action by a writer with a unique command of the science and politics of climate change. Hell and High Water goes beyond ideological rhetoric to offer pragmatic solutions to avert the threat of global warming – solutions that must be taken seriously by every American."
On February 21, 2007, Bill Moore at EV World.com wrote: "...it seemed every paragraph, every page revealed some new outrage that just got my dander up. If it doesn't do the same to you, I'll really be surprised."

Grist's blog "Gristmill" noted, "Joseph Romm's Hell and High Water may be the most depressing book on global warming I've ever read. ... My hope is that a lifetime spent in insider elite politics causes him to underestimate what a bottom-up grassroots movement can accomplish. ... A coalition that supported real action on global warming, as part of movement that supported real solutions on these other issues too, would have a much better chance of winning than a single-issue group. It would have a broader base and could offer more immediate relief from problems; because global warming wouldn't be its only or even main issue, it would produce quicker results in the lives of ordinary people. ... Technically, Romm is sound." The writer amended his statement as follows: "I referred to the book as 'depressing', but the tone is frank, not truly gloomy.... Romm... is known as a level-headed, optimistic analyst. His book is no exception – he documents the problem and the (quite mainstream) solutions he endorses thoroughly and meticulously." The Foreign Policy in Focus article "An Inconvenient Truth II" cites the book with approval and references its analysis twice.

The blog "Political Cortex" wrote: "Hell and High Water might be the Global Warming work of most interest to the politically engaged (Democratic and/or Republican). Romm lays a strong case as to how Global Warming could be the death sentence for the Republican Party as reality becomes ever blatantly at odds with Republican Party rhetoric. ...
Romm also highlights how, in an ever more difficult world in the years to come, either the United States figures out how to lead in dealing with mitigating/muting Global Warming and its impacts or risks becoming a pariah nation, with dire implications for the Republic and its citizens." Booklist's reviewer wrote that the book "presents a clear and effective primer on climate science. But the most salient aspects of this provocative expose involve Romm's documentation of what he calls the Bush administration's irresponsible and backward energy policies, the censorship of legitimate and urgent information pertaining to global warming, and the threats rising temperatures pose to "the health and well-being of this nation and the world." Romm explains that we already possess the technologies and know-how we need to reduce greenhouse gas emissions.

In 2008, the Greenpeace staff blog noted, "If you're concerned about global warming and want to do something about it, Joseph Romm's Hell and High Water ...is a fantastic primer. ... Romm clearly and concisely details the technologies and policies we need to adopt to avoid the worst consequences of global warming".
See also
References
External links
Publisher webpage
Romm's January 25, 2007 piece in Salon.com about global warming |
2023 south america heat wave | Between July and September 2023, a heat wave hit South America, leading to temperatures in many areas above 95 °F (35 °C) in midwinter, often 40–45 °F (22–25 °C) above typical values. The heat wave was especially severe in northern Argentina and Chile, along with neighboring areas in and around the Andes Mountains. Some locations set all-time heat records, and several states also had their hottest September temperatures on record, often reaching more than 40 °C.

In mid-July, Brazil began experiencing elevated temperatures. During the third week of the month, locations in Argentina, Bolivia, Paraguay, and Uruguay set records for July temperatures. There was a heat dome above Paraguay associated with the unusual weather, which was also exacerbated by El Niño and global warming.

Weather historian Maximiliano Herrera stated that "South America is living one of the extreme events the world has ever seen" and "This event is rewriting all climatic books". On 12 August 2023, Buenos Aires broke a 117-year heat record. Chile saw highs approaching 40 °C and Bolivia saw unseasonably high temperatures, while Asunción saw 33 °C.
== References == |
perfluorobutane | Perfluorobutane (PFB) is an inert, high-density colorless gas. It is a simple fluorocarbon with an n-butane skeleton in which all the hydrogen atoms have been replaced with fluorine atoms.
Uses
Perfluorobutane can replace Halon 1301 in fire extinguishers and also serves as the gas component of newer-generation microbubble ultrasound contrast agents. Sonazoid is one such microbubble formulation, developed by Amersham Health, that uses perfluorobutane for the gas core.
Inhaling perfluorobutane makes one's voice deeper.
Environmental impacts
If perfluorobutane is released to the environment, it will not be broken down in air, and it is not expected to be broken down by sunlight. It will move into air from soil and water surfaces. If it is exposed to conditions of extreme heat from misuse, equipment failure, etc., toxic decomposition products, including hydrogen fluoride, can be produced.

Perfluorobutane has an estimated atmospheric lifetime greater than 2600 years and a high global warming potential of 4800. Its ozone depletion potential is zero.
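The global warming potential quoted above can be used to express a perfluorobutane release in CO2-equivalent terms; a minimal sketch of that conversion (the 1 kg release size is an illustrative assumption, not from the source):

```python
# CO2-equivalent of a perfluorobutane (PFB) release, using the
# global warming potential (GWP) of 4800 cited above.
GWP_PFB = 4800  # kg CO2-eq per kg of PFB

def co2_equivalent(mass_pfb_kg: float) -> float:
    """Return the CO2-equivalent mass (kg) of a PFB release."""
    return mass_pfb_kg * GWP_PFB

# Illustrative (hypothetical) example: releasing 1 kg of PFB warms
# the climate as much as emitting 4.8 tonnes of CO2.
print(co2_equivalent(1.0))  # -> 4800.0
```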
== References == |
pleistocene park | Pleistocene Park (Russian: Плейстоценовый парк, romanized: Pleystotsenovyy park) is a nature reserve on the Kolyma River south of Chersky in the Sakha Republic, Russia, in northeastern Siberia, where an attempt is being made to re-create the northern subarctic steppe grassland ecosystem that flourished in the area during the last glacial period.

The project is being led by Russian scientists Sergey Zimov and Nikita Zimov, testing the hypothesis that repopulating with large herbivores (and predators) can restore rich grassland ecosystems, as expected if overhunting, and not climate change, was primarily responsible for the extinction of wildlife and the disappearance of the grasslands at the end of the Pleistocene epoch.

The aim of the project is to research the climatic effects of the expected changes in the ecosystem. Here the hypothesis is that the change from tundra to grassland will result in a raised ratio of energy emission to energy absorption of the area, leading to less thawing of permafrost and thereby less emission of greenhouse gases. It is also thought that removal of snow by large herbivores will further reduce the permafrost's insulation.
To study this, large herbivores have been released, and their effect on the local fauna is being monitored. Preliminary results point at the ecologically low-grade tundra biome being converted into a productive grassland biome and at the energy emission of the area being raised.
Research goals
Effects of large herbivores on the arctic tundra/grasslands ecosystem
The primary aim of Pleistocene Park is to recreate the mammoth steppe (ancient taiga/tundra grasslands that were widespread in the region during the last ice age). The key concept is that animals, rather than climate, maintained that ecosystem. Reintroducing large herbivores to Siberia would then initiate a positive feedback loop promoting the reestablishment of grassland ecosystems. This argument is the basis for rewilding Pleistocene Park's landscape with megafauna that were previously abundant in the area, as evidenced by the fossil record.

The grassland-steppe ecosystem that dominated Siberia during the Pleistocene disappeared 10,000 years ago and was replaced by a mossy and forested tundra and taiga ecosystem. Concurrently, most of the large herbivores that roamed Siberia during the Pleistocene have vanished from the region. The mainstream explanation for this used to be that at the beginning of the Holocene the arid steppe climate changed into a humid one, and when the steppe vanished so did the steppe's animals. Sergei Zimov points out that in contradiction to this scenario:
Similar climatic shifts occurred in previous interglacial periods without causing such massive environmental changes.
Those large herbivores of the former steppe that survived until today (e.g. musk oxen, bison, horses) thrive in humid environments just as well as in arid ones.
The climate (both temperatures and humidity) in today's northern Siberia is in fact similar to that of the mammoth steppe. The radiation aridity index for northern Siberia on Mikhail Budyko's scale is 2 (= steppe bordering on semi-desert). Budyko's scale compares the ratio of the energy received by the earth's surface to the energy required for the evaporation of the total annual precipitation.

Zimov and colleagues argue for a reversed order of environmental change in the mammoth steppe. Humans, with their constantly improving technology, overhunted the large herbivores and led to their extinction and extirpation. Without herbivores grazing and trampling over the land, mosses, shrubs and trees were able to take over and replace the grassland ecosystem. If the grasslands were destroyed because herbivore populations were decimated by human hunting, then "it stands to reason that those landscapes can be reconstituted by the judicious return of appropriate herbivore communities."
Effects of large herbivores on permafrost and global warming
A secondary aim is to research the climatic effects of the expected changes in the ecosystem. Here the key concept is that some of the effects of the large herbivores, such as eradicating trees and shrubs or trampling snow, will result in a stronger cooling of the ground in the winter, leading to less thawing of permafrost during summer and thereby less emission of greenhouse gases.
Permafrost is a large global carbon reservoir that has remained frozen throughout much of the Holocene. Due to recent climate change, the permafrost is beginning to thaw, releasing stored carbon and forming thermokarst lakes. When the thawed permafrost enters the thermokarst lakes, its carbon is converted into carbon dioxide and methane and released into the atmosphere. Methane is a potent greenhouse gas, and the methane emissions from thermokarst lakes have the potential to initiate a positive feedback cycle in which increased atmospheric methane concentrations lead to amplified global climate change, which in turn leads to more permafrost thaw and more methane and carbon dioxide emissions.

As the combined carbon stored in the world's permafrost (1670 Gt) equals about twice the amount of carbon currently present in the atmosphere (720 Gt), the setting in motion of such a positive feedback cycle could potentially lead to a runaway climate change scenario. Even if the ecological situation of the arctic were as it was 400,000 years ago (i.e., grasslands instead of tundra), a global temperature rise of 1.5 °C (2.7 °F) relative to the pre-industrial level would be enough to start the thawing of permafrost in Siberia. An increased cooling of the ground during winter would raise the current tipping point, potentially delaying such a scenario.
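The "about twice" comparison above is a one-line calculation on the two carbon stocks cited; a minimal sketch (variable names are ours):

```python
# Carbon stock comparison from the figures above (gigatonnes of carbon).
permafrost_c = 1670  # Gt C stored in the world's permafrost
atmosphere_c = 720   # Gt C currently present in the atmosphere

ratio = permafrost_c / atmosphere_c
# Permafrost holds roughly 2.3 times the carbon in the atmosphere,
# i.e. "about twice" as the text states.
print(f"{ratio:.1f}")  # -> 2.3
```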
Implementation
Background: regional Pleistocene ecoregions
It has been proposed that the introduction of a variety of large herbivores will recreate their ancient ecological niches in Siberia and regenerate the Pleistocene terrain with its different ecological habitats such as taiga, tundra, steppe and alpine terrain.
The main objective, however, is to recreate the extensive grasslands that covered the Beringia region in the late Pleistocene.
Proposed procedure
In present-day Siberia only a few of the former species of megafauna are left, and their population density is extremely low – too low to affect the environment. To reach the desired effects, the density has to be raised artificially by fencing in and concentrating the existing large herbivores. A large variety of species is important, as each species affects the environment differently and as the overall stability of the ecosystem increases with the variety of species (compare Biodiversity and ecological services). Their numbers will be raised by reintroducing species that became locally extinct (e.g., muskoxen). For species that became completely extinct, suitable replacements will be introduced if possible (e.g., wild Bactrian camels for the extinct Pleistocene camels of the genus Paracamelus). As the number of herbivores increases, the enclosure will be expanded.

While this is taking place, the effects will be monitored. This concerns, for example, the effects on the flora (are the mosses being replaced by grasses, etc.), the effects on the atmosphere (changes in levels of methane, carbon dioxide, and water vapor) and the effects on the permafrost.

Finally, once a high density of herbivores over a vast area has been reached, predators larger than the wolves will have to be introduced to keep the megafauna in check.
Progress and plans
1988–1996
The first grazing experiments began in 1988 at the Northeast Science Station in Chersky with Yakutian horses.
1996–2004
In 1996 a 50 ha (125 acre) enclosure was built in Pleistocene Park. As a first step in recreating the ancient landscape, the Yakutian horses were introduced, as horses had been the most abundant ungulates on the northeastern Siberian mammoth steppe. Of the first 40 horses, 15 were killed by predators and 12 died from eating poisonous plants. More horses were imported, and they learned to cope with the environment. In 2006 approximately 20 horses lived in the park, and by 2007 more horses were being born annually than died. By 2013, the number had risen to about 30. Moose, already present in the region, were also introduced. The effects of large animals (mammoths and wisents) on nature were artificially created by using an engineering tank and an 8-wheel-drive Argo all-terrain vehicle to crush pathways through the willow shrub.
The vegetation in the park started to change. In the areas where the horses grazed, the soil was compacted and mosses, weeds and willow shrub were replaced by grasses. Flat grassland is now the dominant landscape inside the park. The permafrost was also influenced by the grazers. When the air temperature sank to −40 °C (−40 °F) in winter, the temperature of the ground was found to be only −5 °C (+23 °F) under an intact cover of snow, but −30 °C (−22 °F) where the animals had trampled down the snow. The grazers thus help keep permafrost intact, thereby lessening the amount of methane released by the tundra.

2004–2011
In the years 2004–2005 a new fence was erected, creating an enclosure of 16 km2 (6 sq mi). The new enclosure finally allowed a more rapid development of the project. After the fence was completed, reindeer were brought into the park from herds in the region and are now the most numerous ungulates in the park. To increase moose density in the park, special constructions were added to the fence in several places that allow animals outside the fenced area to enter the park, while not allowing them to leave. Besides that, wild moose calves were caught in other regions and transported to the park.

In 2007 a 32-meter (105-foot) tower was erected in the park that constantly monitors the levels of methane, carbon dioxide and water vapor in the park's atmosphere.

In September 2010, 6 male muskoxen from Wrangel Island were reintroduced, but 2 died in the first months: one from unknown causes, and the other from infighting among the muskoxen. Seven months later, in April 2011, 6 Altai wapiti and 5 wisents arrived at the park; the wapiti were from the Altai mountains and the wisents from Prioksko-Terrasny Nature Reserve, near Moscow. The enclosing fence proved too low for the wapiti, and by the end of 2012 all 6 had jumped the fence and run off.
2011–2016
In the years 2011–2016 progress slowed down as most energy was put into the construction of a 150 ha (370 ac) branch of Pleistocene Park near the city of Tula in Tula Oblast in Europe, see below (Wild Field section). A few more reindeer and moose were introduced into Pleistocene Park during this time, and a monitoring system for measuring the energy balance (ratio of energy emission and energy absorption) of the pasture was installed.
2017–2022
Attention has now shifted back to the further development of Pleistocene Park. A successful crowdfunding effort in early 2017 provided funding for further animal acquisitions. Later that year 12 domestic yaks and 30 domestic sheep were brought to the park, and the introduction of more muskoxen was planned for 2020.

For the near future the focus in animal introductions will generally be placed on browsers, not grazers, i.e., bison, muskoxen, horses, and domestic yaks. Their role in this phase will be to diminish the amount of shrubs and trees and enlarge the grassy areas. Only when these areas have sufficiently increased will grazers like saiga and wild Bactrian camels be introduced.

2023
In 2023, 24 European bison were brought to Pleistocene Park. The animals were sourced from Ditlevsdal Bison Farm, Denmark. Later that year, fourteen muskoxen were brought to the park.
Reception
Controversial aspects
Critics admonish that introducing alien species could damage the fragile ecosystem of the existing tundra. To this criticism Sergey Zimov replied: "The tundra is not an ecosystem. Such systems had not existed on the planet [before the disappearance of the megafauna], and there is nothing to cherish in the tundra. Of course, it would be silly to create a desert instead of the tundra, but if the same site would evolve into a steppe, then it certainly would improve the environment. If deer, foxes, bovines were more abundant, nature would only benefit from this. And people too. However, the danger still exists, of course, you have to be very careful. If it is a revival of the steppes, then, for example, small animals are really dangerous to release without control. As for large herbivores – no danger, as they are very easy to remove again."

Another point of concern is doubt that the majority of species can be introduced in such harsh conditions. For example, according to some critics, the Yakutian horses, although they have been living in the park for several generations, would not have survived without human intervention. They normally tolerate −60 °C, but are said to cope poorly with an abundance of snow and possibly would have died of starvation in the first snowy winter. However, horses of much less primitive stock abandoned by the Japanese Army have been living feral on some uninhabited Kuril Islands since 1945. Despite the deep snows (two to three times deeper than in Yakutia), they have successfully survived all the winters without feeding. And in Pleistocene Park, while some of the Yakutian horses accept supplementary feeding, others keep away and survive on their own.
Positive reception
The Zimovs' concept of Pleistocene Park and repopulating the mammoth steppe is listed as one of the "100 most substantive solutions to global warming" by Project Drawdown. The list, encompassing only technologically viable, existing solutions, was compiled by a team of over 200 scholars, scientists, policymakers, business leaders and activists; for each solution the carbon impact through the year 2050, the total and net cost to society, and the total lifetime savings were measured and modeled.

In January 2020, a study co-authored by Nikita Zimov and three University of Oxford researchers assessed the viability of the park's goals when implemented on a larger scale. It estimated that if three large-scale experimental areas were set up, each containing 1000 animals and costing 114 million US dollars over a ten-year period, 72,000 metric tons of carbon could be stored, generating 360,000 US dollars in carbon revenues.
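For scale, the study's own figures imply a revenue per tonne of stored carbon; a minimal sketch of that arithmetic (assuming, as the text suggests, that the tonnage and revenue estimates refer to the same scenario):

```python
# Implied carbon revenue per tonne from the 2020 study's estimates.
carbon_stored_t = 72_000      # metric tons of carbon held
carbon_revenue_usd = 360_000  # projected carbon revenues, USD

revenue_per_tonne = carbon_revenue_usd / carbon_stored_t
print(revenue_per_tonne)  # -> 5.0 (USD per tonne of carbon)
```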
Visitors
The park is a hub for international scientists and students, who come from around the world to conduct their own ecological research and experiments. The Polaris Project was a yearly visitor from 2009 to 2015, sending American students on excursions to the park each summer.

Another group of visitors are journalists. The park is steadily gaining more media attention, and while most journalists do not come to the park itself, the number of visitors is increasing. In 2016, for example, the park was visited by a filmmaker, two print media outlets (Swiss 24 Heures and American The Atlantic), and two TV broadcasting companies (German ARD and American HBO). The total number of visitors for 2016 (summer months only) was 45.
Size and administration
Pleistocene Park is a 160 km2 scientific nature reserve (zakaznik) consisting of willow brush, grasslands, swamps, forests and a multitude of lakes. The average temperature in January is about −33 °C and in July +12 °C; annual precipitation is 200–250 mm.

Pleistocene Park is owned and administered by a non-profit corporation, the Pleistocene Park Association, consisting of the ecologists from the Northeast Science Station in Chersky and the Grassland Institute in Yakutsk. The present park area was signed over to the association by the state and is exempt from land tax. The reserve is surrounded by a 600 km2 buffer zone that will be added to the park by the regional government once the animals have successfully established themselves.

In July 2015 the Pleistocene Park Foundation was founded, a non-profit organization (registered in Pennsylvania, US, with 501(c)(3) status) dedicated to acquiring private donations for funding Pleistocene Park. Hitherto Pleistocene Park had been financed solely through the funds of the founders, a practice that grew increasingly insufficient.

In 2019 the Pleistocene & Permafrost Foundation was founded in Germany by Michael Kurzeja and Bernd Zehentbauer; it serves as a bridge between science, politics, companies, and society. It takes care of the project's financing, seeks donations in kind such as tractors, utility vehicles, and pick-ups to build the park, and funds further research projects with the Max Planck Institute. Dirk Steffens and Anabel Ternès are involved as ambassadors.
Animals
Animals already present in the park
Herbivores
Reindeer (Rangifer tarandus): Present before the project started (although more are being brought to help simulate Pleistocene conditions). They mainly graze in the southern highlands of the park. This territory is not affected by spring flooding and dominated by larch forests and shrubland. Reindeer rarely visit the flood plain. Besides actively grazing (especially in winter) they browse on willow shrubs, tree moss, and lichens. (Numbers in park in November 2021: 20–30)
Elk[BE]/moose[AE] (Alces alces): Present before the project started, although in low numbers. Immigration from neighboring areas is stimulated. Due to poaching the density of moose in the region has substantially decreased in the last 20 years. To increase moose density in the park, special constructions were added to the fence in several places that allow animals outside the fenced area to enter the park, while not allowing them to leave. Besides that, wild moose calves are being caught in other regions and transported to the park. It is the largest extant species of the deer family and one of the largest herbivores in the park today. (Numbers in park in November 2021: 5–15)
Yakutian horse (a domestic breed of horse): The first species to be introduced for the project, they were imported from the surrounding Srednekolymsk region beginning in 1988. Yakutian horses have developed a range of remarkable morphologic, metabolic and physiologic adaptions to the harsh environment of Siberia, including an extremely dense and long winter coat, a compact build, a metabolism adjusted to seasonal needs, and an increased production of antifreezing compounds. In summer they grow very large hooves, which they wear down in winter scraping away snow to get at food. Despite their size, they proved to be dominant over the wisents, who often fled from them. Yakutian horses are purely grazing animals – they eat only grass species and visit the park's forests only during the spring flood. In the spring of 2015, ten more Yakutian horses were acquired to increase genetic diversity. (Numbers in park in November 2021: approximately 40)

Muskox (Ovibos moschatus): Muskoxen arrived at the park in September 2010. They were brought from Wrangel Island (itself repopulated with animals from Canada). They are doing well and are now fully grown. Unfortunately only males could be acquired, after an attempt to get both males and females was thwarted during the expedition when a polar bear broke the fence to eat one of them, and the Zimovs are now urgently looking for females. The introduction of more muskoxen was planned for 2019. A new expedition to Wrangel Island was planned for late 2020, but was ultimately cancelled due to various delays by the time the boats were ready, including the COVID-19 pandemic. The original muskoxen managed to escape the park several times, eventually escaping it for good, but in July 2023 the park retrieved 14 young muskoxen from the Yamal Peninsula in exchange for several plains bison. (Numbers in park in September 2023: approximately 14)
Wisent (AKA European bison, Bison bonasus): During the last ice age, wisents were the most cold-adapted of the Bison species and thrived in the glacial grassland-steppe biome. Their dietary needs are very different from those of the American bison. Year-round, 10% of their diet necessarily consists of trees and shrubs, and they will ignore their main forage (grasses, sedges and forbs) in favour of woody forage to reach this quota. Without supplementary feeding in winter, the yearly average may rise to 20% even in countries with mild winters. Five wisents, one adult male and four juvenile females, were introduced in the park in April 2011. The wisents were brought to the park from the Prioksko-Terrasny Nature Reserve near Moscow. The transportation was more complicated and took longer than originally thought, but all the animals recovered rapidly after the trip. Unfortunately, the wisents did not sufficiently acclimatize in the first months. They started to moult in November, when temperatures were already down to −30 °C (−22 °F) in Cherskii. The four juveniles died; only the adult bull survived. He became fully acclimatized. (Numbers in park in November 2021: 1 male) The park announced via an Instagram comment that after 12 years of residence, the remaining wisent died sometime during the winter of 2022.

Domestic yak (Bos mutus grunniens): Ten domestic yaks acquired in Irkutsk Oblast were introduced in Pleistocene Park in June 2017; two calves were born a few days after the arrival, and another calf was born after that. Yaks are adapted to extreme cold, short growing seasons for grazing herbage, and rough grazing conditions with sedges and shrubby plants. Wild yaks once lived in western Beringia. (Numbers in park in November 2021: approximately 8)
Edilbaevskaya sheep (a domestic breed of sheep): 30 domestic sheep acquired in Irkutsk Oblast were introduced in Pleistocene Park in October 2017. The sheep are from a breed that is adapted to the Siberian cold. They belong to the breed group of fat-tailed sheep; their fatty rump evolved to store fat as a reserve for lean seasons, analogous to a camel's humps. (Numbers in park in November 2021: 18)
Kalmykian cattle (a domestic breed of cattle adapted for the Mongolian steppe): A population was introduced to the park in October 2018. (Numbers in park in November 2021: 15)
Plains bison (Bison bison bison): Twelve yearling plains bison, nine males and three females, were acquired and would have been introduced in the park once the United States' FAA gave clearance for the flight. The plains bison were bought from the Stevens Village Bison Reserve near Delta Junction in Alaska; as the climate there is comparable to that of Siberia, the young bison were expected to thrive. Unlike wisents, plains bison are almost pure grazers of grasses and sedges, consuming other plant material mainly in times of need. While wood bison were the preferred choice of subspecies, they are not easy to acquire; plains bison are simply the subspecies that could be brought to the park most easily. Ultimately, bison were instead obtained from the Ditlevsdal bison farm in Denmark. The bison began traveling on 7 May and officially arrived safely in the park on 9 June. A second expedition to the Ditlevsdal bison farm allowed for another herd to be brought to the park. (Numbers in park in September 2023: 35)
Orenburg fur goat (Capra aegagrus hircus): Their presence is valuable because of their ability to eat almost anything, including plants poisonous to other herbivores. The only difficulty in acquiring them was that they are found only in Orenburg, and veterinary services did not allow shipping out of that region. Plans involved bringing the goats into the park around May 2021 from a farm belonging to a ranger who had formerly worked for Pleistocene Park. The trip to acquire them began on May 5, the goats were loaded on May 8, and the long trek to bring them to Pleistocene Park finished with their arrival at the park on June 18. (Numbers in park in November 2021: 35)
Bactrian camel (Camelus bactrianus): Either of the two-humped camel species could act as a proxy for extinct Pleistocene camel species, whose fossils have been found in areas that once formed part of Beringia. The camel evolved in the high Arctic as a large boreal browser; its hump presumably evolved to store fat as a resource for the long winter. Bactrian camels will eat almost anything, preferring plant material such as grass, shrubs and bark, but in times of need they will also eat carrion. In the winter they dig under snow to get at forage. Camels are not suited to wet environments, preferring uplands, and are mainly sought out to browse plants like willow shrubs, though they do sometimes eat the wet grasses. Domesticated Bactrian camels were brought to the park from a farm in Orsk. The trip to acquire them began on May 5, the camels were loaded on May 8, and the transport truck carrying them arrived at Pleistocene Park on June 18. (Numbers in park in November 2021: 10)
Several non-ungulate herbivores were already present before establishment of the park and remain resident; these include the mountain hare (Lepus timidus), the black-capped marmot (Marmota camtschatica), the Arctic ground squirrel (Spermophilus parryii), the muskrat (Ondatra zibethicus), and diverse species of voles.
Carnivores
Eurasian lynx (Lynx lynx): Resident before the project started. It is an important predator of medium-sized herbivores like hares and roe deer.
Tundra wolf (Canis lupus albus): Before the project started the area was already home to a family of wolves, despite the originally low concentration of prey ungulates.
Arctic fox (Vulpes lagopus): Resident before the project started.
East Siberian brown bear (Ursus arctos collaris): Resident before the project started.
Wolverine (Gulo gulo): Present before the project started.
Red fox (Vulpes vulpes): Resident before the project started.
Sable (Martes zibellina): Resident before the project started.
Stoat (Mustela erminea): Resident before the project started.
Animals considered for reintroduction
Herbivores
Wood bison (Bison bison athabascae): Better adapted to life in the Far North than the plains bison. Mainly a grazer of grasses and sedges, it seasonally supplements this diet with other plant material like forbs, lichen, and silverberry and willow leaves. Wet meadows in bottomlands (like the Kolyma river plain) are an important habitat for wood bison. The original plans for the rewilding of bison had called for the introduction of wood bison as an ecological proxy for the extinct steppe wisent, Bison priscus. These plans did not work out, and wisents were acquired instead.
Altai wapiti or Altai maral (Cervus canadensis sibiricus): Six wapiti were introduced in April 2011, having made their way to the park all the way from the mountainous regions of Altai in central southern Siberia. Wapiti are very good jumpers, and all six escaped within the first two years. The fence has been strengthened to cope with future introductions.
Wild yak (Bos mutus): Could be brought from the Tibetan Plateau.
Snow sheep (Ovis nivicola): Immigration from neighboring areas is encouraged.
Wild Bactrian camel (Camelus ferus): Like the domesticated Bactrian camel, could act as a proxy for extinct Pleistocene camel species, whose fossils have been found in areas that once formed part of Beringia.
Siberian roe deer (Capreolus pygargus): Immigration from neighboring areas is encouraged.
Saiga antelope (Saiga tatarica): Introduction is in the planning stage.
Carnivores
Siberian tiger (Panthera tigris tigris): Introduction planned for a later stage, when herbivores have multiplied.
Animals that can be placed in the park if revived from extinction
Woolly mammoth (Mammuthus primigenius): In January 2011, the Yomiuri Shimbun reported that a team of scientists from Kyoto University were planning to extract DNA from a mammoth carcass preserved in a Russian laboratory and insert it into egg cells of Asian elephants in hope of creating a mammoth embryo. If the experiment succeeded, the calf would be taken to the park along with others to form a wild population. The researchers claimed that their aim was to produce the first mammoth within six years.
Cave lion (Panthera spelaea): The discovery of two well-preserved cubs in the Sakha Republic ignited a project to clone the animal.
Steppe bison (Bison priscus): The discovery of a mummified steppe bison from 9,000 years ago could make it possible to clone the ancient bison species, even though the steppe bison would not be the first species to be "resurrected".
Woolly rhinoceros (Coelodonta antiquitatis): Similar reasons to those for bringing back the woolly mammoth.
Irish elk (Megaloceros giganteus)
Cave bear (Ursus spelaeus)
Southern branch of Pleistocene Park: The Wild Field wilderness reserve
From 2012 to 2014 a branch of Pleistocene Park named "Wild Field" (Russian: Дикое поле, Dikoe pole) was constructed near the city of Tula in Tula Oblast in the European part of Russia, approximately 250 km (150 mi) south of Moscow.
Unlike Pleistocene Park, Wild Field's primary purpose is not scientific research but public outreach, i.e., it will provide a model of what an unregulated steppe ecosystem looked like before the advent of humans. It is situated near a federal road and a railway station and will be accessible to the general public.
Wild Field comprises 300 ha (740 ac), of which 280 ha have been fenced off and stocked with animals. Already present in the park are nine species of large herbivores and one omnivore species: Bashkir horses (a strain of Equus ferus caballus) from the southern part of the Ural Mountains, Altai maral/Altai wapiti (Cervus canadensis sibiricus), Edilbaevskaya sheep (a strain of Ovis orientalis aries), roe deer (Capreolus spec.), Kalmykian cattle (a strain of Bos primigenius taurus), domestic yaks (Bos mutus grunniens), wild boar (Sus scrofa), one female elk[BE]/moose[AE] (Alces alces), four reindeer (Rangifer tarandus) and 73 domestic Pridonskaya goats (a strain of Capra aegagrus hircus).
See also
Wild Field (wilderness reserve)
Permafrost carbon cycle
Quaternary extinction event
Rewilding (conservation biology)
External links
Successful Kickstarter campaign
Fundraiser at Indiegogo
Ben Fogle: New Lives In The Wild, S11 E2: Siberia available online until 20 October 2022
Footnotes
References
Media
Official park website
PleistocenePark (Patreon)
Official facebook site
Official website of the Pleistocene Park Foundation (Last update April 2018)
″Wild Field″ Manifesto. Sergey A. Zimov, 2014.
Literature
Sergey A. Zimov (2005): ″Pleistocene Park: Return of the Mammoth's Ecosystem.″ In: Science, 6 May 2005, vol. 308, no. 5723, pp. 796–798. Accessed 5 May 2013.
Aleksandr Markov (2006): ″Good Fence for Future Mammoth Steppes.″ Translated by Anna Kizilova. Russia-InfoCentre website, 21 January 2007. Accessed 5 May 2013.
Sergei Zimov (2007): ″Mammoth Steppes and Future Climate.″ In: Science in Russia, 2007, pp. 105–112. Article found in: www.pleistocenepark.ru/en/ – Materials. Accessed 5 May 2013.
Adam Wolf (2008): ″The Big Thaw.″ In: Stanford Magazine, Sept.–Oct. 2008, pp. 63–69. Accessed 7 May 2013. – PDF of print version, found in: www.pleistocenepark.ru/en/ – Materials. Accessed 7 May 2013.
Arthur Max (2010): ″Russian Scientist Working To Recreate Ice Age Ecosystem.″ In: The Huffington Post, 27 November 2010. Accessed 7 May 2013.
Martin W. Lewis (2012): ″Pleistocene Park: The Regeneration of the Mammoth Steppe?″ and ″Pleistocene Re-Wilding: Environmental Restoration or Ecological Heresy?″ In: GeoCurrents, 12 and 14 April 2012, respectively. Accessed 2 May 2013.
Sergey A. Zimov, Nikita S. Zimov, F. Stuart Chapin III (2012): "The Past and Future of the Mammoth Steppe Ecosystem." (doi). In: Julien Louys (ed.), Paleontology in Ecology and Conservation, Berlin Heidelberg, Springer-Verlag 2012. Accessed 4 November 2017.
S.A. Zimov, N.S. Zimov, A.N. Tikhonov, F.S. Chapin III (2012): ″Mammoth steppe: a high-productivity phenomenon.″ In: Quaternary Science Reviews, vol. 57, 4 December 2012, pp. 26–45. Accessed 10 February 2014.
Damira Davletyarova (2013): ″The Zimovs: Restoration of the Mammoth-Era Ecosystem, and Reversing Global Warming.″ In: Ottawa Life Magazine, 11 February 2013. Accessed 6 June 2013.
Eli Kintisch (2015): "Born to rewild. A father and son's quixotic quest to bring back a lost ecosystem – and save the world." In: Science, 4 December 2015, vol. 350, no. 6265, pp. 1148–1151. (Alternative version on the Pulitzer Center on Crisis Reporting website.) Accessed 26 September 2016.
Ross Andersen (2017): "Welcome to Pleistocene Park. In Arctic Siberia, Russian scientists are trying to stave off catastrophic climate change—by resurrecting an Ice Age biome complete with lab-grown woolly mammoths." In: The Atlantic, April 2017. Accessed 10 March 2017.
Adele Peters (2017): "Home, home on the ферма. Meet The Father-Son Duo Importing American Bison To Siberia To Save The Planet." In: Fast Company, 21 March 2017. Accessed 29 March 2017.
Animal People, Inc. (2017): "An Interview with Nikita Zimov, Director of Pleistocene Park." In: Animal People Forum, 2 April 2017.
Noah Deich (2017): "Mammoths, Permafrost & Soil Carbon Storage: A Q&A about Pleistocene Park." Interview with Dr. Guy Lomax of the Natural Climate Initiative at The Nature Conservancy. Center for Carbon Removal, 3 April 2017.
Video
Pleistocene Park (w/o date): 360° panorama view from top of the monitoring tower. Photo in Pleistocene Park Picture Gallery. Accessed 20 October 2014.
R. Max Holmes (2011): An Arctic Solution to Climate Warming. Talk at the TEDxWoodsHole, 2 March 2011, in Woods Hole, Mass. Video, 9:17 min., uploaded 18 November 2011. Accessed 10 March 2017.
Eugene Potapov (2012): Pleistocene Park. Video, 7:11 min., uploaded 21 October 2012. Accessed 23 April 2013.
Panoramio (2012): A view of the Kolyma River floodplains taken from the surrounding hills above Pleistocene Park. Photo, uploaded 23 October 2012. Accessed 27 June 2013.
Luke Griswold-Tergis (2014): Can Woolly Mammoths Save the World? Talk at the TEDxConstitutionDrive 2014 (Menlo Park, CA). Video, 15:25 min., uploaded 29 May 2014. Accessed 20 October 2014.
Grant Slater, Ross Andersen (2016): Creating Pleistocene Park. Video, 26:01 min., uploaded 13 March 2017. Accessed 6 April 2017.
The Pleistocene Park Foundation, Inc. (2017): Pleistocene Park: an ice-age ecosystem to save the world. Video, 3:09 min. Kickstarter crowdfunding campaign. Accessed 4 March 2017.
ZoominTV (2017): Jurassic Park IRL: How the mammoth can help our future. Video, 3:25 min., uploaded 10 July 2017. Note: This video shows Wild Field footage cut against an interview about Pleistocene Park. Accessed 6 April 2017.
Barbara Lohr (2017): Siberia: Raiders of the Lost Age. Video, 36 min. ARTE Reportage.
Atlas Pro (2021): The Plan to Revive the Mammoth Steppe to Fight Climate Change. Video, 20 min.
External links
Media related to Pleistocene Park at Wikimedia Commons
Official website
Pleistocene Park Foundation
Revive & Restore. Genetic Rescue for Endangered and Extinct Species. – Search results for "Zimov" |
total equivalent warming impact | Total equivalent warming impact (TEWI) is, besides global warming potential, a measure used to express contributions to global warming.
It is defined as the sum of the direct emissions (chemical, such as refrigerant leakage) and the indirect emissions (from energy use) of greenhouse gases.
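As an illustration of this definition, the sketch below follows the commonly used form of the TEWI equation (operating leakage plus end-of-life refrigerant losses for the direct part, electricity-related CO2 for the indirect part). All parameter values in the example are hypothetical, chosen only to show the arithmetic.

```python
def tewi(gwp, annual_leak_kg, years, charge_kg, recovery_factor,
         annual_energy_kwh, grid_factor_kg_per_kwh):
    """Total equivalent warming impact in kg CO2-equivalent."""
    # Direct: refrigerant leaked during operation plus the charge lost
    # at end of life, weighted by the refrigerant's GWP.
    direct = gwp * (annual_leak_kg * years + charge_kg * (1 - recovery_factor))
    # Indirect: CO2 emitted while generating the electricity consumed.
    indirect = annual_energy_kwh * grid_factor_kg_per_kwh * years
    return direct + indirect

# Hypothetical chiller: 10 kg charge of a GWP-1430 refrigerant, 0.5 kg/yr
# leakage, 15-year life, 70% end-of-life recovery, 5,000 kWh/yr at a grid
# intensity of 0.5 kg CO2/kWh.
example = tewi(1430, 0.5, 15, 10, 0.7, 5000, 0.5)  # 52,515 kg CO2-eq
```

In this example the indirect (energy-use) term dominates the direct (leakage) term, which is typical for systems on carbon-intensive grids.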
References
Sources
Reiss M. (March 1996). "Total equivalent warming impact of refrigerants". Fuel and Energy Abstracts. 37 (2): 147–147(1). doi:10.1016/0140-6701(96)88188-4. |
nile delta | The Nile Delta (Arabic: دلتا النيل, Delta an-Nīl or simply الدلتا, ad-Delta) is the delta formed in Lower Egypt where the Nile River spreads out and drains into the Mediterranean Sea. It is one of the world's largest river deltas—from Alexandria in the west to Port Said in the east, it covers 240 km (150 mi) of Mediterranean coastline and is a rich agricultural region. From north to south the delta is approximately 160 km (100 mi) in length. The Delta begins slightly down-river from Cairo.
Geography
From north to south, the delta is approximately 160 km (100 mi) in length. From west to east, it covers some 240 km (150 mi) of coastline. The delta is sometimes divided into sections, with the Nile dividing into two main distributaries, the Damietta and the Rosetta, flowing into the Mediterranean at port cities with the same name. In the past, the delta had several distributaries, but these have been lost due to flood control, silting and changing relief. One such defunct distributary is Wadi Tumilat.
The Suez Canal is east of the delta and enters the coastal Lake Manzala in the north-east of the delta. To the north-west are three other coastal lakes or lagoons: Lake Burullus, Lake Idku and Lake Mariout.
The Nile Delta is considered an "arcuate" delta (arc-shaped), as it resembles a triangle or flower when seen from above. Some scholars, such as Aristotle, have written that the delta was constructed for agricultural purposes due to the drying of the region of Egypt.
In modern times, the outer edges of the delta are eroding, and some coastal lagoons have seen increasing salinity levels as their connection to the Mediterranean Sea increases. Since the delta no longer receives an annual supply of nutrients and sediments from upstream due to the construction of the Aswan Dam, the soils of the floodplains have become poorer, and large amounts of fertilizers are now used. Topsoil in the delta can be as much as 21 m (70 ft) in depth.
History
People have lived in the Nile Delta region for thousands of years, and it has been intensively farmed for at least the last five thousand years. The delta was a major constituent of Lower Egypt, and there are many archaeological sites in and around the delta. Artifacts belonging to ancient sites have been found on the delta's coast. The Rosetta Stone was found in the delta in 1799 in the port city of Rosetta (an anglicized version of the name Rashid). In July 2019 a small Greek temple, ancient granite columns, treasure-carrying ships, and bronze coins from the reign of Ptolemy II, dating back to the third and fourth centuries BC, were found at the sunken city of Heracleion, colloquially known as Egypt's Atlantis. The investigations were conducted by Egyptian and European divers led by the underwater archaeologist Franck Goddio. They also uncovered a devastated historic temple (the city's main temple) underwater off Egypt's north coast.
In January 2019 archaeologists led by Mostafa Waziri working in the Kom Al-Khelgan area of the Nile Delta discovered tombs from the Second Intermediate Period and burials from the Naqada II era. The burial site contained the remains of animals, amulets and scarabs carved from faience, round and oval pots with handles, flint knives, and broken and burned pottery. All burials included skulls and skeletons in a flexed position and were not well preserved.
Ancient branches of the Nile
Records from ancient times (such as by Ptolemy) reported that the delta had seven distributaries or branches, (from east to west):
the Pelusiac
the Tanitic
the Mendesian
the Phatnitic or Phatmetic (later the Damietta)
the Sebennytic
the Bolbitine (later the Rosetta)
the Canopic (also called the Herakleotic, Agathodaemon)
George of Cyprus list
Alexandrian (Schedia canal)
Colynthin (Canopic branch)
Agnu (Rashid)
Parollos (Burullus)
Chasmatos (Baltim)
Tamiathe (Dumyat)
Tenese (Tinnis)
Modern Egyptologists suggest that in the Pharaonic era the delta had five main branches at any one time:
the Pelusiac
the Sebennytic
the Canopic
the Damietta
the Rosetta
The first three have dried up over the centuries due to flood control, silting and changing relief, while the last two still exist today. The Delta used to flood annually, but this ended with the construction of the Aswan Dam.
Population
About 39 million people live in the Delta region. Outside of major cities, population density in the delta averages 1,000/km2 (2,600/sq mi) or more. Alexandria is the largest city in the delta with an estimated population of more than 4.5 million. Other large cities in the delta include Shubra El Kheima, Port Said, El Mahalla El Kubra, Mansura, Tanta, and Zagazig.
Wildlife
During autumn, parts of the Nile River are red with lotus flowers. The Lower Nile (North) and the Upper Nile (South) have plants that grow in abundance. The Upper Nile plant is the Egyptian lotus, and the Lower Nile plant is the papyrus sedge (Cyperus papyrus), although it is not nearly as plentiful as it once was, and is becoming quite rare.
Several hundred thousand water birds winter in the delta, including the world's largest concentrations of little gulls and whiskered terns. Other birds making their homes in the delta include grey herons, Kentish plovers, shovelers, cormorants, egrets and ibises.
Other animals found in the delta include frogs, turtles, tortoises, mongooses, and the Nile monitor. Nile crocodiles and hippopotamus, two animals which were widespread in the delta during antiquity, are no longer found there. Fish found in the delta include the flathead grey mullet and soles.
Climate
The Delta has a hot desert climate (Köppen: BWh) like the rest of Egypt, but its northernmost part, as is the case with the rest of the northern coast of Egypt (the wettest region in the country), has relatively moderate temperatures, with highs usually not surpassing 31 °C (88 °F) in the summer. Only 100–200 mm (4–8 in) of rain falls on the delta area during an average year, and most of this falls in the winter months. The delta experiences its hottest temperatures in July and August, with a maximum average of 34 °C (93 °F). Winter temperatures are normally in the range of 9 °C (48 °F) at night to 19 °C (66 °F) in the daytime. With cooler temperatures and some rain, the Nile Delta region becomes quite humid during the winter months.
Sea level rise
Egypt's Mediterranean coastline experiences significant loss of land to the sea, in some places amounting to 90 m (100 yd) a year. The low-lying Nile Delta area in particular is vulnerable to sea level rise associated with global warming. This effect is exacerbated by the lack of sediments being deposited since the construction of the Aswan Dam. If the polar ice caps were to melt, much of the northern delta, including the ancient port city of Alexandria, could disappear under the Mediterranean. A 30 cm (12 in) rise in sea level could affect about 6.6% of the total land cover area in the Nile Delta region. At 1 m (3 ft 3 in) of sea level rise, an estimated 887 thousand people could be at risk of flooding and displacement, and about 100 km2 (40 sq mi) of vegetation, 16 km2 (10 sq mi) of wetland, 402 km2 (160 sq mi) of cropland, and 47 km2 (20 sq mi) of urban land could be destroyed, flooding approximately 450 km2 (170 sq mi). Some areas of the Nile Delta's agricultural land have been rendered saline as a result of sea level rise; farming has been abandoned in some places, while in others sand has been brought in from elsewhere to reduce the effect. In addition to agriculture, the delta's ecosystems and tourist industry could be negatively affected by global warming. Food shortages resulting from climate change could lead to seven million "climate refugees" by the end of the 21st century. Nevertheless, environmental damage to the delta is not currently one of Egypt's priorities.
The delta's coastline has also undergone significant changes in geomorphology as a result of the reclamation of coastal dunes and lagoons to form new agricultural land and fish farms, as well as the expansion of coastal urban areas.
Governorates and large cities
The Nile Delta forms part of these 10 governorates:
Large cities located in the Nile Delta:
References
External links
"Nile Delta flooded savanna". Terrestrial Ecoregions. World Wildlife Fund.
Adaptationlearning.net: UN project for managing sea level rise risks in the Nile Delta
"The Nile Delta". Keyway Bible Study. Archived from the original on 2 August 2010. |
1,3,3,3-tetrafluoropropene | 1,3,3,3-Tetrafluoropropene (HFO-1234ze(E), R-1234ze) is a hydrofluoroolefin. It was developed as a "fourth generation" refrigerant to replace fluids such as R-134a, as a blowing agent for foam and aerosol applications, and in air horns and gas dusters. The use of R-134a is being phased out because of its high global warming potential (GWP). HFO-1234ze(E) itself has zero ozone-depletion potential (ODP = 0) and a very low global warming potential (GWP < 1), even lower than that of CO2, and it is classified by ANSI/ASHRAE as a class A2L refrigerant (lower flammability and lower toxicity).
In the open atmosphere, however, HFO-1234ze might form HFC-23 as one of its secondary atmospheric breakdown products. HFC-23 is a very potent greenhouse gas with a GWP100 of 14,800. The secondary GWP of R-1234ze would then be in the range of 1,400±700, depending on the amount of HFC-23 that may form from HFO-1234ze in the atmosphere. In addition, when HFOs decompose in the atmosphere, trifluoroacetic acid (TFA(A)) is formed, which remains in the atmosphere for several days. The trifluoroacetic acid then forms trifluoroacetate (TFA), a salt of trifluoroacetic acid, in water and on the ground. Due to its high polarity and low degradability, TFA is difficult to remove from drinking water (ICPR 2019). Note that the formation of R-23 and TFA from HFO-1234ze is contested in the scientific community; recent results indicate that these statements may be false.
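As a rough illustration of how a "secondary GWP" figure of this kind relates to an assumed HFC-23 yield, the back-of-envelope sketch below converts a hypothetical molar yield into a CO2-equivalent contribution. The yield value is an assumption chosen purely for illustration, not a measured quantity.

```python
# Back-of-envelope estimate of a "secondary" GWP for HFO-1234ze(E) via
# its possible atmospheric breakdown product HFC-23. The molar yield is
# a free parameter here, not a measured value.
M_HFO_1234ZE = 114.04   # g/mol, C3H2F4
M_HFC_23 = 70.01        # g/mol, CHF3
GWP100_HFC_23 = 14800   # per the figure quoted above

def secondary_gwp(molar_yield):
    # kg of HFC-23 formed per kg of HFO released, times HFC-23's GWP100
    mass_ratio = M_HFC_23 / M_HFO_1234ZE
    return molar_yield * mass_ratio * GWP100_HFC_23

# Under this simple model, a molar yield of roughly 15% would reproduce
# the mid-range figure of ~1,400 quoted in the text.
estimate = secondary_gwp(0.15)
```

This shows why the question of whether (and how much) HFC-23 actually forms dominates the debate: the secondary GWP scales linearly with the assumed yield.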
Uses
Growing concern about global warming and its possible undesirable climate effects has led to increasing agreement in developed countries on the need to reduce greenhouse gas emissions. Given the relatively high global warming potential of most of the hydrofluorocarbons (HFCs), several actions are ongoing in different countries to reduce the use of these fluids. For example, the European Union's recent F-Gas regulation specifies the mandatory GWP values of the refrigerants to be used as working fluids in almost all air conditioners and refrigeration machines beginning in 2020.
Several types of possible replacement candidates have been proposed so far, both synthetic and natural. Among the synthetic options, hydrofluoroolefins (HFOs) appear the most promising thus far.
HFO-1234ze(E) has been adopted as a working fluid in chillers, heat pumps, and supermarket refrigeration systems. There are also plans to use it as a propellant in inhalers.
It has been demonstrated that HFO-1234ze(E) cannot be considered a drop-in replacement for HFC-134a. In fact, from a thermodynamic point of view, it can be stated that:
– The theoretical coefficients of performance of HFO-1234ze(E) is slightly lower than HFC-134a one;
– HFO-1234ze(E) has a different volumetric cooling capacity when compared to HFC-134a.
– HFO-1234ze(E) has saturation pressure drops higher than HFC-134a during two-phase heat transfer under the constraint of achieving the same heat transfer coefficient.
So, from a technological point of view, modifications to the condenser and evaporator designs and to compressor displacement are needed to achieve the same cooling capacity and energetic performance as HFC-134a.
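One of the thermodynamic differences noted above is HFO-1234ze(E)'s lower volumetric cooling capacity (the cooling delivered per unit volume of suction vapor). The sketch below compares the two fluids using rounded, approximate saturation properties at a 0 °C evaporating temperature; the figures are indicative literature values, not computed from an equation of state, and serve only to show the order of magnitude of the difference.

```python
# Illustrative comparison of volumetric cooling capacity (kJ/m^3) at a
# 0 degC evaporating temperature. Property values are rounded,
# approximate literature figures, not precise data.
props = {
    #               (saturated vapor density kg/m^3, latent heat kJ/kg)
    "HFC-134a":      (14.4, 198.6),
    "HFO-1234ze(E)": (11.7, 184.0),
}

def volumetric_capacity(fluid):
    rho_vap, h_fg = props[fluid]
    # kJ of cooling available per m^3 of vapor drawn in by the compressor
    return rho_vap * h_fg

ratio = volumetric_capacity("HFO-1234ze(E)") / volumetric_capacity("HFC-134a")
# ratio ~ 0.75: a drop-in swap would lose roughly a quarter of the
# cooling capacity unless compressor displacement is increased.
```

This is why the text notes that compressor displacement and heat exchanger designs must be modified to match HFC-134a performance.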
See also
2,3,3,3-Tetrafluoropropene (HFO-1234yf)
== References == |
hydrogen vehicle | A hydrogen vehicle is a vehicle that uses hydrogen fuel for motive power. Hydrogen vehicles include hydrogen-fueled space rockets, as well as ships and aircraft. Motive power is generated by converting the chemical energy of hydrogen to mechanical energy, either by reacting hydrogen with oxygen in a fuel cell to power electric motors or, less commonly, by burning hydrogen in an internal combustion engine.
As of 2021, there are two models of hydrogen cars publicly available in select markets: the Toyota Mirai (2014–), which is the world's first commercially produced dedicated fuel cell electric vehicle (FCEV), and the Hyundai Nexo (2018–). There are also fuel cell buses. Hydrogen aircraft are not expected to carry many passengers long haul before the 2030s at the earliest.
As of 2019, 98% of hydrogen is produced by steam methane reforming, which emits carbon dioxide. It can be produced by electrolysis of water, or by thermochemical or pyrolytic means using renewable feedstocks, but the processes are currently expensive. Various technologies are being developed that aim to deliver costs low enough, and quantities great enough, to compete with hydrogen production using natural gas.
Vehicles running on hydrogen technology benefit from a long range on a single refuelling, but are subject to several drawbacks: high carbon emissions when hydrogen is produced from natural gas, capital cost burden, high energy inputs in production, low energy content per unit volume at ambient conditions, production and compression of hydrogen, the investment required to build refuelling infrastructure around the world to dispense hydrogen, and transportation of hydrogen. In addition, leaked hydrogen has a global warming effect 11.6 times stronger than CO₂.
For light-duty vehicles, including passenger cars, hydrogen adoption is behind that of battery electric vehicles.
A 2022 study found that technological developments and economies of scale in BEVs, compared with the evolution of the use of hydrogen, have made it unlikely for hydrogen light-duty vehicles to play a significant role in the future.
Vehicles
Rationale and context
The rationale for hydrogen vehicles lies in their potential to reduce reliance on fossil fuels, associated greenhouse gas emissions and localised air pollution from transportation. This would require hydrogen to be produced cleanly, for use in sectors and applications where cheaper and more energy efficient mitigation alternatives are limited.
Aeroplanes
Companies such as Boeing, Lange Aviation, and the German Aerospace Center pursue hydrogen as fuel for crewed and uncrewed aeroplanes. In February 2008 Boeing tested a crewed flight of a small aircraft powered by a hydrogen fuel cell. Uncrewed hydrogen planes have also been tested. For large passenger aeroplanes, The Times reported that "Boeing said that hydrogen fuel cells were unlikely to power the engines of large passenger jet aeroplanes but could be used as backup or auxiliary power units onboard."
In July 2010, Boeing unveiled its hydrogen-powered Phantom Eye UAV, powered by two Ford internal-combustion engines that have been converted to run on hydrogen.
Automobiles
As of 2021, there are two hydrogen cars publicly available in select markets: the Toyota Mirai and the Hyundai Nexo. The Honda Clarity was produced from 2016 to 2021. Hydrogen combustion cars are not commercially available.
In the light road vehicle segment, by the end of 2022, 70,200 fuel cell electric vehicles had been sold worldwide, compared with 26 million plug-in electric vehicles. With the rapid rise of electric vehicles and associated battery technology and infrastructure, the global scope for hydrogen’s role in cars is shrinking relative to earlier expectations.
The first road vehicle powered by a hydrogen fuel cell was the Chevrolet Electrovan, introduced by General Motors in 1966.
The Toyota FCHV and Honda FCX, which began leasing on December 2, 2002, became the world's first government-certified commercial hydrogen fuel cell vehicles, and the Honda FCX Clarity, which began leasing in 2008, was the world's first hydrogen fuel cell vehicle designed for mass production rather than adapting an existing model. Honda established the world's first fuel cell vehicle dealer network in 2008, and at the time was the only company able to lease hydrogen fuel cell vehicles to private customers.
The 2013 Hyundai Tucson FCEV, a modified Tucson, was introduced to the market as a lease-only vehicle, and Hyundai Motors claimed it was the world's first mass-produced hydrogen fuel cell vehicle. However, due to high prices and a lack of charging infrastructure, sales fell far short of initial plans, with only 273 units sold by the end of May 2015. Hyundai Nexo, which succeeded the Tucson in 2018, was selected as the "safest SUV" by the Euro NCAP in 2018.
Toyota launched the world's first dedicated mass-produced fuel cell vehicle (FCV), the Mirai, in Japan at the end of 2014 and began sales in California, mainly the Los Angeles area, and also in selected markets in Europe, the UK, Germany and Denmark later in 2015. The car has a range of 312 mi (502 km) and takes about five minutes to refill its hydrogen tank. The initial sale price in Japan was about 7 million yen ($69,000). Former European Parliament President Pat Cox estimated that Toyota would initially lose about $100,000 on each Mirai sold. At the end of 2019, Toyota had sold over 10,000 Mirais. Many automobile companies have introduced demonstration models in limited numbers (see List of fuel cell vehicles and List of hydrogen internal combustion engine vehicles).
In 2013 BMW leased hydrogen technology from Toyota, and a group formed by Ford Motor Company, Daimler AG, and Nissan announced a collaboration on hydrogen technology development.
In 2015, Toyota announced that it would offer all 5,680 patents related to hydrogen fuel cell vehicles and hydrogen fuel cell charging station technology, which it has been researching for over 20 years, to its competitors free of charge in order to stimulate the market for hydrogen-powered vehicles.
By 2017, however, Daimler had abandoned hydrogen vehicle development, and most of the automobile companies developing hydrogen cars had switched their focus to battery electric vehicles. By 2020, all but three automobile companies had abandoned plans to manufacture hydrogen cars.
Auto racing
A record of 207.297 miles per hour (333.612 km/h) was set by a prototype Ford Fusion Hydrogen 999 Fuel Cell Race Car at the Bonneville Salt Flats, in August 2007, using a large compressed oxygen tank to increase power. The land-speed record for a hydrogen-powered vehicle of 286.476 miles per hour (461.038 km/h) was set by Ohio State University's Buckeye Bullet 2, which achieved a "flying-mile" speed of 280.007 miles per hour (450.628 km/h) at the Bonneville Salt Flats in August 2008.
In 2007, the Hydrogen Electric Racing Federation was formed as a racing organization for hydrogen fuel cell-powered vehicles. The organization sponsored the Hydrogen 500, a 500-mile race.
Buses
Fuel-cell buses have been trialed by several manufacturers in different locations, for example, the Ursus Lublin. Solaris Bus & Coach introduced its Urbino 12 hydrogen electric buses in 2019. Several dozen were ordered. In 2022, the city of Montpellier, France, cancelled a contract to procure 51 buses powered by hydrogen fuel cells, when it found that "the cost of operation for hydrogen [buses] is 6 times the cost of electricity".
Trams and trains
In the International Energy Agency’s 2022 Net Zero Emissions Scenario, hydrogen is forecast to account for 2% of rail energy demand in 2050, while 90% of rail travel is expected to be electrified by then (up from 45% today). Hydrogen’s role in rail would likely be focused on lines that prove difficult or costly to electrify.
In March 2015, China South Rail Corporation (CSR) demonstrated the world's first hydrogen fuel cell-powered tramcar at an assembly facility in Qingdao. Tracks for the new vehicle have been built in seven Chinese cities.
In northern Germany in 2018 the first fuel-cell powered Coradia iLint trains were placed into service; excess power is stored in lithium-ion batteries.
Ships
As of 2019, hydrogen fuel cells are not suitable for propulsion in large long-distance ships, but they are being considered as range-extenders for smaller, short-distance, low-speed electric vessels, such as ferries. Hydrogen carried as ammonia is being considered as a long-distance fuel.
Bicycles
In 2007, Pearl Hydrogen Power Source Technology Co of Shanghai, China, demonstrated a PHB hydrogen bicycle. In 2014, Australian scientists from the University of New South Wales presented their Hy-Cycle model. The same year, Canyon Bicycles started to work on the Eco Speed concept bicycle.

In 2017, Pragma Industries of France developed a bicycle that was able to travel 100 km on a single hydrogen cylinder. In 2019, Pragma announced that the product, the "Alpha Bike", had been improved to offer an electrically assisted pedalling range of 150 km, and that the first 200 of the bikes would be provided to journalists covering the 45th G7 summit in Biarritz, France.

Lloyd Alter of TreeHugger responded to the announcement, asking "why … go through the trouble of using electricity to make hydrogen, only to turn it back into electricity to charge a battery to run the e-bike [or] pick a fuel that needs an expensive filling station that can only handle 35 bikes a day, when you can charge a battery powered bike anywhere. [If] you were a captive fleet operator, why [not] just swap out batteries to get the range and the fast turnover?"
Military vehicles
General Motors' military division, GM Defense, focuses on hydrogen fuel cell vehicles. Its SURUS (Silent Utility Rover Universal Superstructure) is a flexible fuel cell electric platform with autonomous capabilities. Since April 2017, the U.S. Army has been testing the commercial Chevrolet Colorado ZH2 on its U.S. bases to determine the viability of hydrogen-powered vehicles in military mission tactical environments.
Motorcycles and scooters
ENV develops electric motorcycles powered by a hydrogen fuel cell, including the Crosscage and Biplane. Other manufacturers, such as Vectrix, are working on hydrogen scooters. Hydrogen-fuel-cell-electric-hybrid scooters are also being made, such as the Suzuki Burgman fuel-cell scooter and the FHybrid. The Burgman received "whole vehicle type" approval in the EU. The Taiwanese company APFCT conducted a live street test with 80 fuel-cell scooters for Taiwan's Bureau of Energy.
Auto rickshaws
Hydrogen auto rickshaw concept vehicles have been built by Mahindra HyAlfa and Bajaj Auto.
Quads and tractors
Autostudi S.r.l.'s H-Due is a hydrogen-powered quad capable of transporting one to three passengers. A concept for a hydrogen-powered tractor has been proposed.
Fork trucks
A hydrogen internal combustion engine (or "HICE") forklift or HICE lift truck is a hydrogen-fueled, internal combustion engine-powered industrial forklift truck used for lifting and transporting materials. The first production HICE forklift truck, based on the Linde X39 Diesel, was presented at an exposition in Hannover on May 27, 2008. It used a 2.0-litre, 43 kW (58 hp) diesel internal combustion engine converted to use hydrogen as a fuel, with the use of a compressor and direct injection.

In 2013 there were over 4,000 fuel cell forklifts used in material handling in the US. The global market was estimated at 1 million fuel cell-powered forklifts per year for 2014–2016. Fleets are being operated by companies around the world. Pike Research stated in 2011 that fuel-cell-powered forklifts would be the largest driver of hydrogen fuel demand by 2020.

Most companies in Europe and the US do not use petroleum-powered forklifts, as these vehicles work indoors where emissions must be controlled, and instead use electric forklifts. Fuel-cell-powered forklifts can provide benefits over battery-powered forklifts, as they can be refueled in 3 minutes. They can be used in refrigerated warehouses, as their performance is not degraded by lower temperatures. The fuel cell units are often designed as drop-in replacements.
Rockets
Many large rockets use liquid hydrogen as fuel, with liquid oxygen as an oxidizer (LH2/LOX). An advantage of hydrogen rocket fuel is its high effective exhaust velocity compared to kerosene/LOX or UDMH/NTO engines. According to the Tsiolkovsky rocket equation, a rocket with higher exhaust velocity uses less propellant to accelerate. Hydrogen also has a higher energy content per unit mass (specific energy) than any other fuel, and LH2/LOX yields the greatest efficiency, in relation to the amount of propellant consumed, of any known rocket propellant.

A disadvantage of LH2/LOX engines is the low density and low temperature of liquid hydrogen, which means bigger, insulated, and thus heavier fuel tanks are needed. This increases the rocket's structural mass, which reduces its delta-v significantly. Another disadvantage is the poor storability of LH2/LOX-powered rockets: due to constant hydrogen boil-off, the rocket must be fueled shortly before launch, which makes cryogenic engines unsuitable for ICBMs and other rocket applications that require short launch preparations.
Overall, the delta-v of a hydrogen stage is typically not much different from that of a dense fuelled stage, but the weight of a hydrogen stage is much less, which makes it particularly effective for upper stages, since they are carried by the lower stages. For first stages, dense fuelled rockets in studies may show a small advantage, due to the smaller vehicle size and lower air drag.

LH2/LOX were also used in the Space Shuttle to run the fuel cells that power the electrical systems. The byproduct of the fuel cell is water, which is used for drinking and other applications that require water in space.
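The propellant-mass trade-off described above follows directly from the Tsiolkovsky rocket equation. The sketch below compares a hydrogen stage and a kerosene stage using representative exhaust velocities and a nominal delta-v to low Earth orbit; all numbers are illustrative assumptions, not figures from any specific vehicle:

```python
import math

def propellant_fraction(delta_v, exhaust_velocity):
    """Fraction of initial stage mass that must be propellant to reach
    delta_v, from the Tsiolkovsky equation: delta_v = v_e * ln(m0 / m1)."""
    return 1.0 - math.exp(-delta_v / exhaust_velocity)

DELTA_V = 9400.0       # m/s, rough delta-v to low Earth orbit incl. losses

# Representative effective exhaust velocities (illustrative assumptions):
V_E_LH2_LOX = 4400.0   # m/s, typical for hydrogen/oxygen engines
V_E_RP1_LOX = 3300.0   # m/s, typical for kerosene/oxygen engines

f_h2 = propellant_fraction(DELTA_V, V_E_LH2_LOX)
f_rp1 = propellant_fraction(DELTA_V, V_E_RP1_LOX)

print(f"LH2/LOX  propellant fraction: {f_h2:.3f}")   # about 0.88
print(f"RP-1/LOX propellant fraction: {f_rp1:.3f}")  # about 0.94
```

The hydrogen stage needs a noticeably smaller propellant fraction, but, as noted above, this simple comparison ignores the heavier tankage that liquid hydrogen's low density and cryogenic temperature impose.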
Heavy trucks
The International Energy Agency’s 2022 net-zero emissions scenario sees hydrogen meeting approximately 30% of heavy truck energy demand in 2050, mainly for long-distance heavy freight (with battery electric power accounting for around 60%).

United Parcel Service began testing a hydrogen-powered delivery vehicle in 2017. In 2020, Hyundai began commercial production of its Xcient fuel cell trucks and shipped ten of them to Switzerland. In 2022 in Australia, five hydrogen fuel cell class 8 trucks were placed into use to transport zinc from Sun Metals' Townsville mine to the Port of Townsville, Queensland, to be shipped around the world.
Internal combustion vehicle
Hydrogen internal combustion engine cars are different from hydrogen fuel cell cars. The hydrogen internal combustion car is a slightly modified version of the traditional gasoline internal combustion engine car. These hydrogen engines burn fuel in the same manner that gasoline engines do; the main difference is the exhaust product. Gasoline combustion results in emissions of mostly carbon dioxide and water, plus trace amounts of carbon monoxide, NOx, particulates and unburned hydrocarbons, while the main exhaust product of hydrogen combustion is water vapor.
In 1807, François Isaac de Rivaz designed the first hydrogen-fueled internal combustion engine. In 1965, Roger E. Billings, then a high school student, converted a Model A to run on hydrogen. In 1970, Paul Dieges patented a modification to internal combustion engines which allowed a gasoline-powered engine to run on hydrogen.

Mazda has developed Wankel engines burning hydrogen, which are used in the Mazda RX-8 Hydrogen RE. The advantage of using an internal combustion engine, like Wankel and piston engines, is the lower cost of retooling for production.
Fuel cell
Fuel cell cost
Hydrogen fuel cells are relatively expensive to produce, as their designs require rare substances, such as platinum, as a catalyst. In 2014, former European Parliament President Pat Cox estimated that Toyota would initially lose about $100,000 on each Mirai sold. In 2020, researchers at the University of Copenhagen's Department of Chemistry were developing a new type of catalyst that they hoped would decrease the cost of fuel cells. The new catalyst uses far less platinum because the platinum nanoparticles are not coated over carbon, which in conventional hydrogen fuel cells keeps the nanoparticles in place but also destabilizes the catalyst and slowly degrades it, requiring even more platinum. The new technology uses durable nanowires instead of nanoparticles. "The next step for the researchers is to scale up their results so that the technology can be implemented in hydrogen vehicles."
Freezing conditions
The problems in early fuel-cell designs at low temperatures concerning range and cold-start capabilities have been addressed so that they "cannot be seen as show-stoppers anymore". Users in 2014 said that their fuel cell vehicles performed flawlessly in temperatures below zero, even with the heaters blasting, without significantly reduced range. Studies using neutron radiography on unassisted cold-start indicate ice formation in the cathode, three stages in cold start, and Nafion ionic conductivity. A parameter defined as the coulomb of charge was also introduced to measure cold-start capability.
Service life
The service life of fuel cells is comparable to that of other vehicles. Polymer-electrolyte membrane (PEM) fuel cell service life is 7,300 hours under cycling conditions.
Hydrogen
Hydrogen does not exist in convenient reservoirs or deposits like fossil fuels or helium. It is produced from feedstocks such as natural gas and biomass, or electrolyzed from water. A suggested benefit of large-scale deployment of hydrogen vehicles is that it could lead to decreased emissions of greenhouse gases and ozone precursors. However, as of 2014, 95% of hydrogen was made from methane. Hydrogen can be produced by thermochemical or pyrolytic means using renewable feedstocks, but that is an expensive process.

Renewable electricity can, however, be used to power the conversion of water into hydrogen: integrated wind-to-hydrogen (power-to-gas) plants, using electrolysis of water, are exploring technologies to deliver costs low enough, and quantities great enough, to compete with traditional energy sources. The challenges facing the use of hydrogen in vehicles include its storage on board the vehicle. As of September 2023, hydrogen cost $36 per kilogram at public fueling stations in California, 14 times as much per mile for a Mirai as compared with a Tesla Model 3.
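The quoted 14-fold per-mile cost gap can be reproduced with back-of-the-envelope arithmetic. In the sketch below, only the $36/kg hydrogen price comes from the text; the tank capacity, range, electricity price, and consumption figures are rough illustrative assumptions:

```python
# Rough cost-per-mile comparison. All figures are illustrative assumptions
# except the $36/kg hydrogen price quoted in the text.
H2_PRICE_PER_KG = 36.0        # USD/kg, California public stations, Sept 2023
MIRAI_TANK_KG = 5.6           # kg of hydrogen, approximate tank capacity
MIRAI_RANGE_MILES = 400.0     # approximate range on a full tank

ELECTRICITY_PRICE_PER_KWH = 0.14   # USD/kWh, assumed charging rate
MODEL3_KWH_PER_MILE = 0.25         # assumed consumption

h2_cost_per_mile = H2_PRICE_PER_KG * MIRAI_TANK_KG / MIRAI_RANGE_MILES
ev_cost_per_mile = ELECTRICITY_PRICE_PER_KWH * MODEL3_KWH_PER_MILE

print(f"Hydrogen: ${h2_cost_per_mile:.2f}/mile")
print(f"Battery:  ${ev_cost_per_mile:.3f}/mile")
print(f"Ratio:    {h2_cost_per_mile / ev_cost_per_mile:.0f}x")
```

With these assumptions the hydrogen car costs roughly $0.50 per mile against about $0.035 for the battery car, a ratio of about 14, consistent with the figure cited above.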
Production
The molecular hydrogen needed as an onboard fuel for hydrogen vehicles can be obtained through many thermochemical methods utilizing natural gas, coal (by a process known as coal gasification), liquefied petroleum gas, or biomass (biomass gasification); by a process called thermolysis; or as a microbial waste product called biohydrogen (biological hydrogen production). 95% of hydrogen is produced using natural gas. Hydrogen can be produced from water by electrolysis at working efficiencies of 65–70%. Hydrogen can also be made by chemical reduction using chemical hydrides or aluminum. Current technologies for manufacturing hydrogen use energy in various forms, totaling between 25 and 50 percent of the higher heating value of the hydrogen fuel, to produce, compress or liquefy, and transmit the hydrogen by pipeline or truck.

Environmental consequences of the production of hydrogen from fossil energy resources include the emission of greenhouse gases, a consequence that would also result from the on-board reforming of methanol into hydrogen. Hydrogen production using renewable energy resources would not create such emissions, but the scale of renewable energy production would need to be expanded for hydrogen to supply a significant part of transportation needs. In a few countries, renewable sources are being used more widely to produce energy and hydrogen. For example, Iceland is using geothermal power to produce hydrogen, and Denmark is using wind.
Storage
Compressed hydrogen in hydrogen tanks at 350 bar (5,000 psi) and 700 bar (10,000 psi) is used for hydrogen tank systems in vehicles, based on type IV carbon-composite technology.

Hydrogen has a very low volumetric energy density at ambient conditions compared with gasoline and other vehicle fuels. It must be stored in a vehicle either as a super-cooled liquid or as a highly compressed gas, which requires additional energy to accomplish. In 2018, researchers at CSIRO in Australia powered a Toyota Mirai and a Hyundai Nexo with hydrogen separated from ammonia using a membrane technology. Ammonia is easier to transport safely in tankers than pure hydrogen.
Infrastructure
To enable the delivery of hydrogen fuel to transport end-users, a broad range of investments are needed, including, according to the International Energy Agency (IEA), the "construction and operation of new port infrastructure, buffer storage, pipelines, ships, refueling stations and plants to convert the hydrogen into a more readily transportable commodity (and potentially back to hydrogen)". In particular, the IEA notes that refueling stations will be needed in locations that are suitable for long‐distance trucking, such as industrial hubs, and identifies the need for investment in airport infrastructure for the storage and delivery of hydrogen. The IEA deems the infrastructure requirements for hydrogen in shipping more challenging, drawing attention to the "need for major investments and co‐ordinated efforts among fuel suppliers, ports, shipbuilders and shippers".

As of 2021, there were 49 publicly accessible hydrogen refueling stations in the US, 48 of which were located in California (compared with 42,830 electric charging stations). By 2017, there were 91 hydrogen fueling stations in Japan.
Codes and standards
Hydrogen codes and standards, as well as codes and technical standards for hydrogen safety and the storage of hydrogen, have been an institutional barrier to deploying hydrogen technologies. To enable the commercialization of hydrogen in consumer products, new codes and standards must be developed and adopted by federal, state and local governments.
Official support
U.S. initiatives
Fuel cell buses are supported. The New York State Energy Research and Development Authority (NYSERDA) has created incentives for hydrogen fuel cell electric trucks and buses.
Criticism
Critics claim the time frame for overcoming the technical and economic challenges to implementing wide-scale use of hydrogen in cars is likely to be at least several decades. They argue that the focus on the use of the hydrogen car is a dangerous detour from more readily available solutions to reducing the use of fossil fuels in vehicles. In 2008, Wired News reported that "experts say it will be 40 years or more before hydrogen has any meaningful impact on gasoline consumption or global warming, and we can't afford to wait that long. In the meantime, fuel cells are diverting resources from more immediate solutions."

In the 2006 documentary Who Killed the Electric Car?, former U.S. Department of Energy official Joseph Romm said: "A hydrogen car is one of the least efficient, most expensive ways to reduce greenhouse gases." He also argued that the cost to build out a nationwide network of hydrogen refueling stations would be prohibitive. He held the same views in 2014. In 2009, the Los Angeles Times wrote that "hydrogen is a lousy way to move cars." Robert Zubrin, the author of Energy Victory, stated: "Hydrogen is 'just about the worst possible vehicle fuel'". The Economist noted that most hydrogen is produced through steam methane reformation, which creates at least as much emission of carbon per mile as some of today's gasoline cars, but that if the hydrogen could be produced using renewable energy, "it would surely be easier simply to use this energy to charge the batteries of all-electric or plug-in hybrid vehicles." Over their lifetimes, hydrogen vehicles will emit more carbon than gasoline vehicles.
The Washington Post asked in 2009, "[W]hy would you want to store energy in the form of hydrogen and then use that hydrogen to produce electricity for a motor, when electrical energy is already waiting to be sucked out of sockets all over America and stored in auto batteries"?

Volkswagen's Rudolf Krebs said in 2013 that "no matter how excellent you make the cars themselves, the laws of physics hinder their overall efficiency. The most efficient way to convert energy to mobility is electricity." He elaborated: "Hydrogen mobility only makes sense if you use green energy", but ... you need to convert it first into hydrogen "with low efficiencies" where "you lose about 40 percent of the initial energy". You then must compress the hydrogen and store it under high pressure in tanks, which uses more energy. "And then you have to convert the hydrogen back to electricity in a fuel cell with another efficiency loss". Krebs continued: "in the end, from your original 100 percent of electric energy, you end up with 30 to 40 percent."

In 2015, CleanTechnica listed some of the disadvantages of hydrogen fuel cell vehicles. A 2016 study in Energy by scientists at Stanford University and the Technical University of Munich concluded that, even assuming local hydrogen production, "investing in all-electric battery vehicles is a more economical choice for reducing carbon dioxide emissions".

A 2017 analysis published in Green Car Reports concluded that the best hydrogen-fuel-cell vehicles consume "more than three times more electricity per mile than an electric vehicle ... generate more greenhouse gas emissions than other powertrain technologies ... [and have] very high fuel costs. ... Considering all the obstacles and requirements for new infrastructure (estimated to cost as much as $400 billion), fuel-cell vehicles seem likely to be a niche technology at best, with little impact on U.S. oil consumption."
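Krebs's round-trip arithmetic can be made explicit by chaining the per-stage conversion losses he describes. The stage efficiencies in this sketch are illustrative assumptions chosen to be consistent with his quoted figures, not measured values:

```python
def chained_efficiency(*stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    result = 1.0
    for eff in stages:
        result *= eff
    return result

# Hydrogen pathway (illustrative stage efficiencies): electrolysis at ~60%
# ("you lose about 40 percent of the initial energy"), then compression
# and storage, then fuel-cell conversion back to electricity.
hydrogen_path = chained_efficiency(0.60, 0.90, 0.60)

# Battery pathway (illustrative): charging losses and inverter/motor losses.
battery_path = chained_efficiency(0.90, 0.90)

print(f"Hydrogen pathway: {hydrogen_path:.0%} of the original electricity")
print(f"Battery pathway:  {battery_path:.0%} of the original electricity")
```

With these assumed stage values the hydrogen chain delivers roughly a third of the original electricity, inside the 30–40 percent window Krebs cites, while the battery chain retains around 80 percent.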
The US Department of Energy agrees, for fuel produced by grid electricity via electrolysis, but not for most other generation pathways. A 2019 video by Real Engineering noted that, notwithstanding the introduction of vehicles that run on hydrogen, using hydrogen as a fuel for cars does not help to reduce carbon emissions from transportation. The 95% of hydrogen still produced from fossil fuels releases carbon dioxide, and producing hydrogen from water is an energy-consuming process. Storing hydrogen requires more energy, either to cool it down to the liquid state or to put it into tanks under high pressure, and delivering the hydrogen to fueling stations requires more energy and may release more carbon. The hydrogen needed to move a FCV a kilometer costs approximately 8 times as much as the electricity needed to move a BEV the same distance. Also in 2019, Katsushi Inoue, the president of Honda Europe, stated, "Our focus is on hybrid and electric vehicles now. Maybe hydrogen fuel cell cars will come, but that's a technology for the next era."

Assessments since 2020 have concluded that hydrogen vehicles are still only 38% efficient, while battery EVs are from 80% to 95% efficient. A 2021 assessment by CleanTechnica concluded that, because hydrogen cars are far less efficient than electric cars, the vast majority of hydrogen being produced is polluting grey hydrogen, and delivering hydrogen would require building a vast and expensive new infrastructure, the remaining two "advantages of fuel cell vehicles – longer range and fast fueling times – are rapidly being eroded by improving battery and charging technology." A 2022 study in Nature Electronics agreed. Another 2022 article, in Recharge News, stated that ships are more likely to be powered by ammonia or methanol than hydrogen.
Also in 2022, Germany’s Fraunhofer Institute concluded that hydrogen is unlikely to play a major role in road transport.

A 2023 study by the Centre for International Climate and Environmental Research (CICERO) estimated that leaked hydrogen has a global warming effect 11.6 times stronger than CO₂.
Safety and supply
Hydrogen fuel is hazardous because of the low ignition energy (see also Autoignition temperature) and high combustion energy of hydrogen, and because it tends to leak easily from tanks. Explosions at hydrogen filling stations have been reported. Hydrogen fuelling stations generally receive deliveries of hydrogen by truck from hydrogen suppliers. An interruption at a hydrogen supply facility can shut down multiple hydrogen fuelling stations.
Comparison with other types of alternative fuel vehicle
Hydrogen vehicles compete with various proposed alternatives to the modern fossil fuel powered vehicle infrastructure.
Plug-in hybrids
Plug-in hybrid electric vehicles, or PHEVs, are hybrid vehicles that can be plugged into the electric grid and contain an electric motor and also an internal combustion engine. The PHEV concept augments standard hybrid electric vehicles with the ability to recharge their batteries from an external source, enabling increased use of the vehicle's electric motors while reducing their reliance on internal combustion engines.
Natural gas
Internal combustion engine-based compressed natural gas (CNG), HCNG, LPG or LNG vehicles (natural gas vehicles, or NGVs) use methane (natural gas or biogas) directly as a fuel source. Natural gas has a higher volumetric energy density than hydrogen gas. NGVs using biogas are nearly carbon-neutral. Unlike hydrogen vehicles, CNG vehicles have been available for many years, and there is sufficient infrastructure to provide both commercial and home refueling stations. Worldwide, there were 14.8 million natural gas vehicles by the end of 2011. Natural gas is also used in steam reforming, the most common way to produce hydrogen gas for use in electric cars with fuel cells. Methane is also an alternative rocket fuel.
Plug-in electric vehicles
In the light road vehicle segment, 26 million plug-in electric vehicles had been sold worldwide by 2023, and there were 65,730 public electric vehicle chargers in North America, in addition to the availability of home and work charging. Long-distance electric trucks require more megawatt-scale charging infrastructure.
See also
References
External links
California Fuel Cell Partnership homepage
Fuel Cell Today - Market-based intelligence on the fuel cell industry
U.S. Dept. of Energy hydrogen pages
Sandia Corporation – Hydrogen internal combustion engine description
Inside world's first hydrogen-powered production car BBC News, 14 September 2010
Toyota Ecopark Hydrogen Demonstration ARENAWIRE, 22 March 2019
Coral bleaching

Coral bleaching is the process by which corals become white due to various stressors, such as changes in temperature, light, or nutrients. Bleaching occurs when coral polyps expel the zooxanthellae (dinoflagellates that are commonly referred to as algae) that live inside their tissue, causing the coral to turn white. The zooxanthellae are photosynthetic, and as the water temperature rises, they begin to produce reactive oxygen species. This is toxic to the coral, so the coral expels the zooxanthellae. Since the zooxanthellae produce the majority of coral colouration, the coral tissue becomes transparent, revealing the coral skeleton made of calcium carbonate. Most bleached corals appear bright white, but some are blue, yellow, or pink due to pigment proteins in the coral.

The leading cause of coral bleaching is rising ocean temperatures due to climate change. A temperature about 1 °C (or 2 °F) above average can cause bleaching. According to the United Nations Environment Programme, between 2014 and 2016 the longest recorded global bleaching events killed coral on an unprecedented scale. In 2016, bleaching of coral on the Great Barrier Reef killed between 29 and 50 percent of the reef's coral. In 2017, the bleaching extended into the central region of the reef. The average interval between bleaching events halved between 1980 and 2016. The world's most bleaching-tolerant corals can be found in the southern Persian/Arabian Gulf; some of these corals bleach only when water temperatures exceed ~35 °C.

Bleached corals continue to live, but they are more vulnerable to disease and starvation. Zooxanthellae provide up to 90 percent of the coral's energy, so corals are deprived of nutrients when zooxanthellae are expelled. Some corals recover if conditions return to normal, and some corals can feed themselves. However, the majority of corals without zooxanthellae starve.

Normally, coral polyps live in an endosymbiotic relationship with zooxanthellae.
This relationship is crucial for the health of the coral and the reef, which provide shelter for approximately 25% of all marine life. In this relationship, the coral provides the zooxanthellae with shelter. In return, the zooxanthellae provide compounds that give energy to the coral through photosynthesis. This relationship has allowed coral to survive for at least 210 million years in nutrient-poor environments. Coral bleaching is caused by the breakdown of this relationship.
Process
The corals that form the great reef ecosystems of tropical seas depend upon a symbiotic relationship with algae-like single-celled flagellate protozoa called zooxanthellae that live within their tissues and give the coral its coloration. The zooxanthellae provide the coral with nutrients through photosynthesis, a crucial factor in the clear and nutrient-poor tropical waters. In exchange, the coral provides the zooxanthellae with the carbon dioxide and ammonium needed for photosynthesis. Negative environmental conditions, such as abnormally warm or cool temperatures, high light, and even some microbial diseases, can lead to the breakdown of the coral/zooxanthellae symbiosis. To ensure short-term survival, the coral polyp then consumes or expels the zooxanthellae. This leads to a lighter or completely white appearance, hence the term "bleached". Under mild stress conditions, some corals may appear bright blue, pink, purple, or yellow instead of white, due to the continued or increased presence of the coral cells' intrinsic pigment molecules, a phenomenon known as "colourful bleaching". As the zooxanthellae provide up to 90 percent of the coral's energy needs through the products of photosynthesis, after expelling them the coral may begin to starve.

Coral can survive short-term disturbances, but if the conditions that lead to the expulsion of the zooxanthellae persist, the coral's chances of survival diminish. In order to recover from bleaching, the zooxanthellae have to re-enter the tissues of the coral polyps and restart photosynthesis to sustain the coral as a whole and the ecosystem that depends on it.
If the coral polyps die of starvation after bleaching, they will decay. The hard coral species will then leave behind their calcium carbonate skeletons, which will be taken over by algae, effectively blocking coral regrowth. Eventually, the coral skeletons will erode, causing the reef structure to collapse.
Triggers
Coral bleaching may be caused by a number of factors. While localized triggers lead to localized bleaching, the large-scale coral bleaching events of recent years have been triggered by global warming. Under the increased carbon dioxide concentration expected in the 21st century, corals are expected to become increasingly rare on reef systems. Coral reefs located in warm, shallow water with low water flow have been more affected than reefs located in areas with higher water flow.
List of triggers
increased water temperature (marine heatwaves, most commonly due to global warming), or reduced water temperatures
increased solar irradiance (photosynthetically active radiation and ultraviolet light)
increased sedimentation (due to silt runoff)
bacterial infections
changes in salinity
herbicides
extreme low tide and exposure
cyanide fishing
elevated sea levels due to global warming (Watson)
mineral dust from African dust storms caused by drought
pollutants such as oxybenzone, butylparaben, octyl methoxycinnamate, or enzacamene: four common sunscreen ingredients that are nonbiodegradable and can wash off of skin
ocean acidification due to elevated levels of CO2 caused by air pollution
being exposed to oil or other chemical spills
changes in water chemistry, particularly an imbalance in the ratio of the macronutrients nitrate and phosphate
Trends due to climate change
The warming of ocean surface waters can lead to bleaching of corals, which can cause serious damage and coral death. The IPCC Sixth Assessment Report in 2022 found that "since the early 1980s, the frequency and severity of mass coral bleaching events have increased sharply worldwide" (p. 416). Coral reefs, as well as other shelf-sea ecosystems such as rocky shores, kelp forests, seagrasses, and mangroves, have recently undergone mass mortalities from marine heatwaves (p. 381), and it is expected that many coral reefs will "undergo irreversible phase shifts due to marine heatwaves with global warming levels >1.5°C" (p. 382). This problem was already identified in 2007 by the Intergovernmental Panel on Climate Change (IPCC) as the greatest threat to the world's reef systems.

The Great Barrier Reef experienced its first major bleaching event in 1998. Since then, bleaching events have increased in frequency, with three events occurring in the years 2016–2020. Bleaching is predicted to occur three times a decade on the Great Barrier Reef if warming is kept to 1.5°C, increasing to every other year at 2°C.

With the increase of coral bleaching events worldwide, National Geographic noted in 2017, "In the past three years, 25 reefs—which comprise three-fourths of the world's reef systems—experienced severe bleaching events in what scientists concluded was the worst-ever sequence of bleachings to date."
Mass bleaching events
Elevated sea water temperatures are the main cause of mass bleaching events. Sixty major episodes of coral bleaching occurred between 1979 and 1990, with the associated coral mortality affecting reefs in every part of the world. The longest and most destructive coral bleaching event on record was caused by the El Niño that occurred from 2014 to 2017. During this time, over 70 percent of the coral reefs around the world were damaged.

Factors that influence the outcome of a bleaching event include stress resistance, which reduces bleaching; tolerance to the absence of zooxanthellae; and how quickly new coral grows to replace the dead. Due to the patchy nature of bleaching, local climatic conditions such as shade or a stream of cooler water can reduce bleaching incidence. Coral and zooxanthellae health and genetics also influence bleaching. Large coral colonies such as Porites are able to withstand extreme temperature shocks, while fragile branching corals such as Acropora are far more susceptible to stress following a temperature change. Corals consistently exposed to low stress levels may be more resistant to bleaching.

Scientists believe that the oldest known bleaching event was that of the Late Devonian (Frasnian/Famennian), also triggered by a rise in sea surface temperatures. It resulted in the demise of the largest coral reefs in Earth's history.

According to Clive Wilkinson of the Global Coral Reef Monitoring Network in Townsville, Australia, the mass bleaching event that occurred in the Indian Ocean region in 1998 was due to sea temperatures rising by 2 °C, coupled with the strong El Niño event of 1997–1998.
Impacts
Coral bleaching events and the subsequent loss of coral coverage often result in the decline of fish diversity. The loss of diversity and abundance in herbivorous fish particularly affect coral reef ecosystems. As mass bleaching events occur more frequently, fish populations will continue to homogenize. Smaller and more specialized fish species that fill particular ecological niches that are crucial for coral health are replaced by more generalized species. The loss of specialization likely contributes to the loss of resilience in coral reef ecosystems after bleaching events.
Economic and political impact
According to Brian Skoloff of The Christian Science Monitor, "If the reefs vanished, experts say, hunger, poverty and political instability could ensue." Because countless marine species depend on reefs for shelter and protection from predators, the extinction of the reefs would create a domino effect reaching the many human societies that depend on those fish for food and livelihood. Coral cover has declined by 44% over the last 20 years in the Florida Keys and by up to 80% in the Caribbean alone.

Coral reefs provide various ecosystem services, one of which is serving as a natural fishery, as many frequently consumed commercial fish spawn or live out their juvenile lives in coral reefs around the tropics. Reefs are thus popular fishing sites and an important source of income for fishers, especially small, local fisheries. As coral reef habitat decreases due to bleaching, reef-associated fish populations also decrease, which affects fishing opportunities. A model from one study by Speers et al. calculated direct losses to fisheries from decreased coral cover at around $49–69 billion if human societies continue to emit high levels of greenhouse gases. These losses could be reduced, for a consumer surplus benefit of about $14–20 billion, if societies chose to emit lower levels of greenhouse gases instead. These economic losses also have important political implications, as they fall disproportionately on developing countries where the reefs are located, namely in Southeast Asia and around the Indian Ocean. It would cost more for countries in these areas to respond to coral reef loss, as they would need to turn to different sources of income and food in addition to losing other ecosystem services such as ecotourism. A study completed by Chen et al. suggested that the commercial value of reefs decreases by almost 4% every time coral cover decreases by 1%, because of losses in ecotourism and other potential outdoor recreational activities.

Coral reefs also act as a protective barrier for coastlines by reducing wave impact, which lowers damage from storms, erosion, and flooding. Countries that lose this natural protection will lose more money because of their increased susceptibility to storms. This indirect cost, combined with lost tourism revenue, results in enormous economic effects.
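As a rough illustration of the Chen et al. relationship quoted above, the sketch below treats the 4%-per-1% figure as compounding per percentage point of cover lost; that compounding assumption and the starting value are hypothetical, not from the study.

```python
# Sketch of the Chen et al. relationship: reef commercial value falls roughly
# 4% for each 1% decline in coral cover. Compounding is an assumption.
def reef_value(initial_value, cover_loss_pct, value_loss_per_pct=0.04):
    """Return remaining reef value after a given coral-cover decline (in %)."""
    return initial_value * (1 - value_loss_per_pct) ** cover_loss_pct

# A hypothetical reef valued at 100 units losing 10 percentage points of cover:
v = reef_value(100.0, 10)
print(round(v, 1))  # compounding 4% losses leaves about two-thirds of the value
```

Under this reading, even a modest 10-point decline in cover erases roughly a third of a reef's commercial value, which is consistent with the large fishery and ecotourism losses discussed above.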
Monitoring coral bleaching and reef sea surface temperature
The US National Oceanic and Atmospheric Administration (NOAA) monitors for bleaching "hot spots", areas where sea surface temperature rises 1 °C or more above the long-term monthly average. These hot spots are the locations where thermal stress is measured, and with the Degree Heating Week (DHW) metric, the thermal stress on a coral reef is tracked over time. Global coral bleaching is being detected earlier thanks to satellite remote sensing of rising sea temperatures. Monitoring high temperatures is necessary because coral bleaching events impair coral reproduction and normal growth, as well as weakening corals and eventually leading to their mortality. This system detected the worldwide 1998 bleaching event, which corresponded to the 1997–98 El Niño event. NOAA currently monitors 190 reef sites around the globe and sends alerts to research scientists and reef managers via its Coral Reef Watch (CRW) website. By monitoring the warming of sea temperatures, these early warnings let reef managers prepare for, and draw awareness to, future bleaching events. The first mass global bleaching events were recorded in 1998 and 2010, when El Niño caused ocean temperatures to rise and worsened corals' living conditions. The 2014–2017 El Niño was the longest and most damaging on record, harming over 70% of the world's coral reefs; over two-thirds of the Great Barrier Reef has been reported bleached or dead.
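The accumulation logic of the Degree Heating Week metric described above can be sketched in code. This is a simplified illustration, not NOAA's operational implementation (which uses satellite-derived SST products and a site-specific long-term maximum monthly mean); the temperatures below are invented.

```python
# Simplified Degree Heating Week (DHW) sketch: daily "hot spot" values
# (SST minus the long-term maximum monthly mean, MMM) of at least 1 degC
# are accumulated over a rolling 12-week (84-day) window, in degC-weeks.
def degree_heating_weeks(daily_sst, mmm):
    """daily_sst: list of daily sea surface temperatures (degC), >= 84 days.
    mmm: long-term maximum monthly mean temperature for the site (degC)."""
    window = daily_sst[-84:]  # last 12 weeks
    hotspots = [t - mmm for t in window if t - mmm >= 1.0]
    return sum(h / 7.0 for h in hotspots)  # daily degC -> degC-weeks

# 30 straight days at 2 degC above a hypothetical MMM of 28 degC:
sst = [27.0] * 54 + [30.0] * 30
print(round(degree_heating_weeks(sst, mmm=28.0), 1))  # 8.6
```

NOAA associates DHW values around 4 °C-weeks with significant bleaching and around 8 °C-weeks with severe bleaching and likely mortality, which is why sustained anomalies of only a degree or two can be devastating.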
To accurately monitor the extent and evolution of bleaching events, scientists are using underwater photogrammetry to create accurate orthophotos of coral reef transects, and AI-assisted image segmentation with open-source tools like TagLab to identify the health status of the corals from these photos.
Changes in ocean chemistry
Increasing ocean acidification due to rising carbon dioxide levels exacerbates the bleaching effects of thermal stress. Acidification affects the corals' ability to create calcareous skeletons, which are essential to their survival. This is because ocean acidification decreases the amount of carbonate ion in the water, making it more difficult for corals to obtain the calcium carbonate they need for their skeletons. As a result, reef resilience declines, and reefs erode and dissolve more easily. In addition, the increase in CO2 allows herbivore overfishing and nutrification to change coral-dominated ecosystems into algal-dominated ecosystems. A study from the Atkinson Center for a Sustainable Future found that, with the combination of acidification and temperature rises, CO2 levels could become too high for coral to survive in as little as 50 years.
Coral bleaching due to photoinhibition of zooxanthellae
Zooxanthellae are a type of dinoflagellate that live within the cytoplasm of many marine invertebrates. Members of the phylum Dinoflagellata, they are round microalgae that share a symbiotic relationship with their host. They are also part of the genus Symbiodinium and Kingdom Alveolata. These organisms are phytoplankton and therefore photosynthesize. The host organism harnesses the products of photosynthesis, such as oxygen and sugars, and in exchange the zooxanthellae are offered housing and protection, as well as carbon dioxide, phosphates, and other essential inorganic compounds that help them to survive and thrive. Zooxanthellae share 95% of the products of photosynthesis with their host coral. According to a study by D.J. Smith et al., photoinhibition is a likely factor in coral bleaching; the study also suggests that hydrogen peroxide produced in zooxanthellae plays a role in signaling them to flee the corals. Photoinhibition of zooxanthellae can be caused by exposure to UV filters found in personal care products. In a study by Zhong et al., oxybenzone (BP-3) had the most negative effects on zooxanthellae health. The combination of higher temperatures and UV filters in the ocean had an additive effect on photoinhibition and overall stress on coral species, further decreasing zooxanthellae health.
Infectious disease
Infectious bacteria of the species Vibrio shiloi are the bleaching agent of Oculina patagonica in the Mediterranean Sea, causing this effect by attacking the zooxanthellae. V. shiloi is infectious only during warm periods. Elevated temperature increases the virulence of V. shiloi, which then becomes able to adhere to a beta-galactoside-containing receptor in the surface mucus of the host coral. V. shiloi then penetrates the coral's epidermis, multiplies, and produces both heat-stable and heat-sensitive toxins, which affect zooxanthellae by inhibiting photosynthesis and causing lysis.

During the summer of 2003, coral reefs in the Mediterranean Sea appeared to gain resistance to the pathogen, and further infection was not observed. The main hypothesis for the emergent resistance is the presence of symbiotic communities of protective bacteria living in the corals. The bacterial species capable of lysing V. shiloi had not been identified as of 2011.
By region
Pacific Ocean
Great Barrier Reef
The Great Barrier Reef along the coast of Australia experienced bleaching events in 1980, 1982, 1992, 1994, 1998, 2002, 2006, 2016, 2017 and 2022. Some locations suffered severe damage, with up to 90% mortality. The most widespread and intense events occurred in the summers of 1998 and 2002, with 42% and 54%, respectively, of reefs bleached to some extent, and 18% strongly bleached. However, coral losses on the reef between 1995 and 2009 were largely offset by growth of new corals. An overall analysis of coral loss found that coral populations on the Great Barrier Reef had declined by 50.7% from 1985 to 2012, but with only about 10% of that decline attributable to bleaching, and the remaining 90% caused about equally by tropical cyclones and by predation by crown-of-thorns starfishes.
A global mass coral bleaching event has been occurring since 2014 because of the highest ocean temperatures ever recorded. These temperatures have caused the most severe and widespread coral bleaching ever recorded in the Great Barrier Reef. The most severe bleaching in 2016 occurred near Port Douglas. In late November 2016, surveys of 62 reefs showed that long-term heat stress from climate change had caused a 29% loss of shallow-water coral. The highest coral death and reef habitat loss occurred on inshore and mid-shelf reefs around Cape Grenville and Princess Charlotte Bay.
The IPCC's moderate warming scenarios (B1 to A1T, 2 °C by 2100, IPCC, 2007, Table SPM.3, p. 13) forecast that corals on the Great Barrier Reef are very likely to regularly experience summer temperatures high enough to induce bleaching.
Hawaii
In 1996, Hawaii's first major coral bleaching occurred in Kaneohe Bay, followed by major bleaching events in the Northwest islands in 2002 and 2004. In 2014, biologists from the University of Queensland observed the first mass bleaching event and attributed it to The Blob. In 2014 and 2015, a survey in Hanauma Bay Nature Preserve on Oahu found 47% of the corals suffering from bleaching and close to 10% dying. Over the same period, 56% of the coral reefs of the Big Island were affected by bleaching events, and 44% of the corals on west Maui were affected. On 24 January 2019, scientists with The Nature Conservancy found that the reefs had begun to stabilize nearly 4 years after the last bleaching event. According to the Division of Aquatic Resources (DAR), there was still a considerable amount of bleaching in 2019: on Oahu and Maui, up to 50% of the coral reefs were bleached, and on the Big Island roughly 40% of corals in the Kona coast area experienced bleaching. The DAR stated that the recent bleaching events have not been as bad as the 2014–2015 events. In 2020, the National Oceanic and Atmospheric Administration (NOAA) released the first-ever nationwide coral reef status report, which stated that the northwestern and main Hawaiian islands were in "fair" shape, meaning the corals have been moderately impacted.
Hawaiian sunscreen policy
In May 2018, Hawaii passed bill SB-2571, banning the sale of sunscreens containing chemicals deemed conducive to coral bleaching on the islands' reefs. The bill was signed into law by Governor David Ige. The chemicals deemed toxic under SB-2571 are oxybenzone and octinoxate; oxybenzone becomes toxic to coral when exposed to sunlight. Up to one-tenth of the approximately 14,000 tons of sunscreen polluting coral reef areas contains oxybenzone, putting almost half of all coral reefs in danger of exposure. Coral reefs show increased rates of bleaching in both controlled and natural environments when exposed to high levels of oxybenzone, which is found in many commercial sunscreen products. Another study showed that, over time, the presence of oxybenzone in water decreases a reef's resilience to other bleaching stressors such as increasing water temperatures. SB-2571 banned all such sunscreen products with the exception of prescription products. Hawaii is the first U.S. state to introduce this type of ban, which went into effect in January 2021.
Jarvis Island
Eight severe and two moderate bleaching events occurred between 1960 and 2016 in the coral community at Jarvis Island, with the 2015–16 bleaching displaying unprecedented severity in the record.
Japan
According to a 2017 Japanese government report, almost 75% of Japan's largest coral reef, in Okinawa, has died from bleaching.
Indian Ocean
Coral reef provinces have been permanently damaged by warm sea temperatures, most severely in the Indian Ocean. Up to 90% of coral cover was lost in the Maldives, Sri Lanka, Kenya, Tanzania, and the Seychelles during the massive 1997–98 bleaching event. In 1998, 20% of the Indian Ocean's coral was reported dead and 80% bleached. The shallow tropical areas of the Indian Ocean are already experiencing conditions predicted to become worldwide ocean conditions in the future. Coral that has survived in these shallow areas may be a suitable candidate for coral restoration efforts elsewhere in the world, because it is able to survive extreme ocean conditions.
Maldives
The Maldives has over 20,000 km2 of reefs, more than 60% of which suffered from coral bleaching in 2016.
Thailand
Thailand experienced a severe mass bleaching in 2010 which affected 70% of the coral in the Andaman Sea. Between 30% and 95% of the bleached coral died.
Indonesia
A 2017 study assessed coral cover at two island groups in Indonesia, the Melinjo Islands and the Saktu Islands. At both, lifeform conditions were categorized as bad, with average coral cover of 22.2% in the Melinjo Islands and 22.3% on Saktu Island.
Atlantic Ocean
United States
In South Florida, a 2016 survey of large corals from Key Biscayne to Fort Lauderdale found that about 66% of the corals were dead or reduced to less than half of their live tissue.
Belize
The first recorded mass bleaching event in the Belize Barrier Reef took place in 1998, when sea surface temperatures reached up to 31.5 °C (88.7 °F) from 10 August to 14 October. Hurricane Mitch brought stormy weather for a few days from 27 October, but reduced temperatures by only 1 degree or less. During this period, mass bleaching occurred in both the fore reef and the lagoon. While some fore-reef colonies suffered damage, coral mortality in the lagoon was catastrophic.

The most prevalent coral in the reefs of Belize in 1998 was the lettuce coral, Agaricia tenuifolia. On 22 and 23 October, surveys were conducted at two sites, and the findings were devastating: virtually all the living coral was bleached white, and the skeletons indicated that the corals had died recently. On the lagoon floor, complete bleaching was evident among A. tenuifolia. Furthermore, surveys done in 1999 and 2000 showed near-total mortality of A. tenuifolia at all depths. Similar patterns occurred in other coral species as well. Measurements of water turbidity suggest that these mortalities were attributable to rising water temperatures rather than solar radiation.
Caribbean
Hard coral cover on reefs in the Caribbean has declined by an estimated 80%, from an average of 50% cover in the 1970s to only about 10% cover in the early 2000s. A 2013 follow-up study of a 2010 mass bleaching event in Tobago showed that, after only one year, the majority of the dominant species had declined by about 62%, while overall coral abundance had declined by about 50%. However, between 2011 and 2013, coral cover increased for 10 of the 26 dominant species but declined for 5 other populations.
Other areas
Coral in the south Red Sea does not bleach despite summer water temperatures up to 34 °C (93 °F).
Coral bleaching in the Red Sea is more common in the northern section of the reefs; the southern part of the reef has instead been plagued by coral-eating starfish, dynamite fishing, and other human impacts on the environment. In 1988, a massive bleaching event affected the reefs of Saudi Arabia and Sudan, though the southern reefs were more resilient and were affected very little. It was previously thought that the northern reef suffers more from coral bleaching and shows fast coral turnover, while the southern reef does not suffer bleaching as harshly and shows more consistency. However, new research shows that although the southern reef should be bigger and healthier than the northern, it is not, likely because of major disturbances in recent history from bleaching events and coral-eating starfish.
In 2010, coral bleaching occurred in Saudi Arabia and Sudan, where the temperature rose 10 to 11 degrees. Certain taxa experienced bleaching in 80% to 100% of their colonies, while others showed bleaching in about 20% of colonies on average.
Coral adaptation
In 2010, researchers at Penn State discovered corals that were thriving while using an unusual species of symbiotic algae in the warm waters of the Andaman Sea in the Indian Ocean. Normal zooxanthellae cannot withstand temperatures as high as those found there, so this finding was unexpected. It gives researchers hope that, as temperatures rise due to global warming, coral reefs will develop tolerance for different species of symbiotic algae that are resistant to high temperatures and can live within the reefs.
In 2010, researchers from Stanford University also found corals around the Samoan Islands that experience a drastic temperature increase for about four hours a day during low tide. The corals do not bleach or die despite the large heat increase. Studies showed that the corals off the coast of Ofu Island near American Samoa have become trained to withstand the high temperatures. Researchers are now asking whether corals from other areas can be conditioned in the same way, slowly introduced to higher temperatures for short periods to make them more resilient against rising ocean temperatures.

Certain mild bleaching events can cause coral to produce high concentrations of sun-screening pigments in order to shield themselves from further stress. Some of the pigments produced have pink, blue or purple hues, while others are strongly fluorescent. Production of these pigments by shallow-water corals is stimulated by blue light. When corals bleach, blue light inside the coral tissue increases greatly because it is no longer being absorbed by the photosynthetic pigments found inside the symbiotic algae, and is instead reflected by the white coral skeleton. This causes an increase in the production of the sun-screening pigments, making the bleached corals appear very colourful instead of white – a phenomenon sometimes called 'colourful coral bleaching'.

Increased sea surface temperature leads to thinning of the epidermis and apoptosis of gastrodermis cells in the host coral. This loss of epidermal and gastrodermal tissue can lead to up to a 50% loss in the concentration of symbionts over a short period of time. Under conditions of high temperature or increased light exposure, the coral exhibits a stress response that includes producing reactive oxygen species which, if not removed by antioxidant systems, accumulate and lead to the death of the coral.
Studies testing the structures of coral in heat-stressed environments show that the thickness of the coral itself greatly decreases under heat stress compared to controls. With the death of the zooxanthellae during heat-stress events, the coral must find new sources of fixed carbon to generate energy; coral species that can increase their carnivorous tendencies have been found to have an increased likelihood of recovering from bleaching events.

After the zooxanthellae leave the coral, the coral structures are often taken over by algae, which can outcompete the zooxanthellae because they need fewer resources to survive. There is little evidence of direct competition between zooxanthellae and algae, but in the absence of zooxanthellae the algae thrive on the coral structures. Once algae take over and the coral can no longer sustain itself, the structures often begin to decay due to ocean acidification, the process by which carbon dioxide absorbed into the ocean decreases the amount of carbonate ions in the water, an ion corals need to build their skeletons. Corals go through processes of decalcifying and calcifying at different times of the day and year due to temperature fluctuations. Under current IPCC emission pathway scenarios, corals tend to disintegrate, and the cooler winter months will not provide ample time for the corals to re-form.
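The carbonate chemistry behind this decay can be illustrated with the aragonite saturation state, Ω = [Ca²⁺][CO₃²⁻]/Ksp, which indicates whether coral skeletons (aragonite, a form of calcium carbonate) tend to form (Ω > 1) or dissolve (Ω < 1). All numerical values below are assumed order-of-magnitude figures for surface seawater, not values from the text.

```python
# Illustrative sketch: falling carbonate-ion concentrations lower the
# aragonite saturation state, making skeleton-building harder.
def aragonite_saturation(ca, co3, ksp=6.65e-7):
    """ca, co3: concentrations in mol/kg; ksp: approximate aragonite
    solubility product for surface seawater (assumed value)."""
    return (ca * co3) / ksp

CA = 0.0103  # seawater calcium, mol/kg (roughly constant)
print(aragonite_saturation(CA, 2.6e-4))  # higher carbonate -> higher saturation
print(aragonite_saturation(CA, 1.7e-4))  # CO2 uptake lowers carbonate and saturation
```

Even while Ω remains above 1, a lower saturation state slows calcification, which is why acidification weakens reef structures well before outright dissolution.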
Artificial assistance
In 2020, scientists reported having evolved 10 clonal strains of a common coral microalgal endosymbiont at elevated temperatures for 4 years, increasing their thermal tolerance for climate resilience. Three of the strains increased the corals' bleaching tolerance after reintroduction into coral host larvae. These strains and findings may be relevant for adaptation to and mitigation of climate change, and further tests of the algal strains in adult colonies across a range of coral species are planned.

In 2021, researchers demonstrated that probiotics can help coral reefs mitigate heat stress, indicating that they could make reefs more resilient to climate change and mitigate coral bleaching.
Recovery and macroalgal regime shifts
After corals experience a bleaching event due to increased temperature stress, some reefs are able to return to their original, pre-bleaching state. Reefs either recover from bleaching, where they are recolonized by zooxanthellae, or they experience a regime shift, where previously flourishing coral reefs are taken over by thick layers of macroalgae. This inhibits further coral growth because the algae produce antifouling compounds to deter settlement and compete with corals for space and light. As a result, macroalgae form stable communities that make it difficult for corals to grow again. Reefs are then more susceptible to other issues, such as declining water quality and removal of herbivorous fish, because coral growth is weaker. Discovering what causes reefs to be resilient or to recover from bleaching events is of primary importance because it helps inform conservation efforts and protect coral more effectively.
A primary subject of research regarding coral recovery is the idea of "super-corals": corals that live and thrive in naturally warmer and more acidic waters. When transplanted to endangered or bleached reefs, their resilience and tolerance of high irradiance can allow their algae to live among the bleached corals. As Emma Camp, a National Geographic Explorer, marine bio-geochemist, and biodiversity ambassador for the charity IBEX Earth, suggests, super-corals could help damaged reefs long-term. While it can take 10 to 15 years to restore damaged and bleached coral reefs, super-corals could have lasting impacts even as the oceans warm and acidify under climate change. Bolstered by the research of Ruth Gates, Camp has looked into lower oxygen levels and the extreme, unexpected habitats in which reefs can be found across the globe.

Corals have been shown to be resilient to short-term disturbances. Recovery has been observed after storm disturbances and crown-of-thorns starfish invasions. Fish species tend to fare better following reef disturbance than coral species: corals show limited recovery, while reef fish assemblages show little change as a result of short-term disturbances. In contrast, fish assemblages in reefs that experience bleaching exhibit potentially damaging changes. One study by Bellwood et al. notes that while species richness, diversity, and abundance did not change, fish assemblages contained more generalist species and fewer coral-dependent species. Responses to coral bleaching differ between reef fish species, based on which resources are affected. Rising sea temperature and coral bleaching do not directly increase adult fish mortality, but both have many indirect consequences.
Coral-associated fish populations tend to be in decline due to habitat loss; however, some herbivorous fish populations have seen a drastic increase due to the increase of algae colonization on dead coral. Studies note that better methods are needed to measure the effects of disturbance on the resilience of corals.
Until recently, the factors mediating the recovery of coral reefs from bleaching were not well studied. Research by Graham et al. (2015) studied 21 reefs around the Seychelles in the Indo-Pacific to document the long-term effects of coral bleaching. After the loss of more than 90% of corals due to bleaching in 1998, around 50% of the reefs recovered and roughly 40% experienced regime shifts to macroalgae-dominated compositions. After assessing the factors influencing the probability of recovery, the study identified five major ones: density of juvenile corals, initial structural complexity, water depth, biomass of herbivorous fishes, and nutrient conditions on the reef. Overall, resilience was seen most in coral reef systems that were structurally complex and in deeper water.

The ecological roles and functional groups of species also play a role in the regime-shifting potential of reef systems. Coral reefs are affected by bioeroding, scraping, and grazing fish species: bioeroding species remove dead corals, scraping species remove algae and sediment to further future growth, and grazing species remove algae. The presence of each type of species influences the ability to maintain normal levels of coral recruitment, an important part of coral recovery. Lowered numbers of grazing species after coral bleaching in the Caribbean has been likened to sea-urchin-dominated systems, which do not undergo regime shifts to fleshy macroalgae-dominated conditions.

There is always the possibility of unobservable changes, or cryptic losses of resilience, in a coral community's ability to perform ecological processes. These cryptic losses can result in unforeseen regime changes or ecological flips. More detailed methods for determining the health of coral reefs that take into account long-term changes to coral ecosystems, along with better-informed conservation policies, are necessary to protect coral reefs in the years to come.
Rebuilding coral reefs
Research is being done to help slow the mortality rate of corals, and projects worldwide aim to replenish and restore coral reefs. Current coral restoration efforts include microfragmentation, coral farming, and relocation. Because coral populations are rapidly declining, scientists are conducting experiments in coral growth and research tanks that mimic coral reefs' natural ocean environment. Corals grown in these tanks are used for experiments so that no more corals are harmed or taken from the ocean, and successfully grown corals are transplanted into areas of the ocean where reefs are dying out. In such tanks, Ruth Gates and Madeleine van Oppen have been trying to create "super corals" that can withstand some of the environmental stressors that corals are currently dying from. Van Oppen is also working on developing a type of algae that has a symbiotic relationship with corals and can withstand long periods of water temperature fluctuation. This work may help replenish reefs, but growing corals in research tanks is very time-consuming: it can take at least 10 years for corals to grow and mature enough to breed. Following Ruth Gates' death in October 2018, her team at the Gates Coral Lab at the Hawai'i Institute of Marine Biology continues her research on restoration efforts, focusing on the effects of beneficial mutations, genetic variation, and human-assisted relocation on the resilience of coral reefs.
As of 2019, the Gates Coral Lab team had determined that large-scale restoration techniques would not be effective; localized efforts to restore coral reefs on an individual basis have proven more realistic and effective, while research continues into the best ways to combat coral destruction on a mass scale.
Marine Protected Areas
Marine Protected Areas (MPAs) are sectioned-off areas of the ocean designated for protection from human activities such as fishing and unmanaged tourism. According to NOAA, MPAs currently occupy 26% of U.S. waters. MPAs have been documented to improve reef condition and mitigate the effects of coral bleaching in the United States. In 2018, research by coral scientists in the Caribbean concluded that government-managed and protected areas of the ocean provided improved conditions in which coral reefs were able to flourish. MPAs defend ecosystems from overfishing, which allows multiple species of fish to thrive and deplete seaweed density, making it easier for young coral organisms to grow and for populations to increase in size and strength. The study recorded a 62% increase in coral populations due to the protection of an MPA. Higher populations of young coral increase the longevity of a reef, as well as its ability to recover from extreme bleaching events.
Local impacts and solutions to coral bleaching
A number of stressors locally impact coral bleaching, including sedimentation, continued urban development, land-use change, increased tourism, untreated sewage, and pollution. Increased tourism, for example, benefits a country but also comes with costs. The Dominican Republic relies heavily on its coral reefs to attract tourists, resulting in increased structural damage, overfishing, nutrient pollution, and an increase in diseases affecting the reefs. As a result, the Dominican Republic has implemented a sustainable management plan for its land and marine areas to regulate ecotourism.
Economic value of coral reefs
Coral reefs provide shelter to an estimated quarter of all ocean species. Experts estimate that coral reef services are worth up to $1.2 million per hectare, which translates to an average of $172 billion per year. The benefits of coral reefs include physical structures such as coastal shoreline protection, biotic services within and between ecosystems, biogeochemical services such as maintaining nitrogen levels in the ocean, climate records, and recreational and commercial (tourism) services. Coral reefs are among the best marine ecosystems to use as a food source. They are also an ideal habitat for rare and economically important species of tropical fish, providing areas for fish to breed and create nurseries. If populations of fish and corals in a reef are high, the area can be used to gather food and specimens with medicinal properties, which also creates jobs for the people who collect them. Reefs also have cultural importance in specific regions around the world.
Cost benefit analysis of reducing loss of coral reefs
In 2010, the Convention on Biological Diversity's (CBD) Strategic Plan for Biodiversity 2011–2020 created twenty distinct targets for sustainable development post-2015. Target 10 indicates the goal of minimizing "anthropogenic pressures on coral reefs". Two programs were evaluated: one that reduces coral reef loss by 50%, with a capital cost of $684 million and a recurrent cost of $81 million, and another that reduces coral reef loss by 80%, with a capital cost of $1.036 billion and recurrent costs of $130 million. The CBD acknowledges that it may be underestimating the costs and resources needed to achieve this target due to a lack of relevant data. Nonetheless, the cost-benefit analysis shows that the benefits outweigh the costs by a great enough amount for both programs (benefit-cost ratios of 95.3 and 98.5) that "there is ample scope to increase outlays on coral protection and still achieve a benefit to cost ratio that is well over one".
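The stated ratios imply very large benefits relative to costs. The back-of-the-envelope check below uses the CBD figures quoted above; the 10-year undiscounted cost horizon is an assumption for illustration, not the CBD's actual methodology.

```python
# Implied benefits from a benefit-cost ratio (BCR): benefits = BCR * total cost.
# The cost model (capital plus recurrent costs over an assumed horizon,
# undiscounted) is a simplification.
def implied_benefit(capital, recurrent_per_year, years, bcr):
    total_cost = capital + recurrent_per_year * years
    return bcr * total_cost

# 50%-reduction program: $684M capital, $81M/yr recurrent, BCR of 95.3
benefit = implied_benefit(684e6, 81e6, years=10, bcr=95.3)
print(round(benefit / 1e9, 1))  # implied benefits on the order of $142 billion
```

Even under cruder cost assumptions, the order of magnitude supports the CBD's conclusion that spending on coral protection could rise substantially while keeping the benefit-cost ratio well above one.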
See also
Effects of climate change on oceans
References
Sources
Watson ME (2011). "Coral Reefs". In Allin CW (ed.). Encyclopedia of environmental issues. Vol. 1. Pasadena, Calif.: Salem Press. pp. 317–318. ISBN 978-1-58765-735-1.
External links
Global information system on coral reefs.
Current global map of bleaching alert areas. |
olindias phosphorica | Olindias phosphorica, or cigar jellyfish, is a species of hydrozoan from the central and eastern Atlantic and the Mediterranean Sea.
The Mediterranean Sea is a predominantly warm body of water; thus, O. phosphorica is a warm-water jellyfish. Global warming has facilitated the proliferation of the species throughout the Mediterranean Sea.
Feeding
The cigar jellyfish's diet consists mostly of plankton.
Size
Their umbrellas grow up to 8 centimeters, with 100–120 secondary tentacles and 30–60 primary tentacles. Their stings do not cause much damage.
== References == |
passive daytime radiative cooling | Passive daytime radiative cooling (PDRC) is a zero-energy building cooling method proposed as a solution to reduce air conditioning, lower urban heat island effect, cool human body temperatures in extreme heat, move toward carbon neutrality and control global warming by enhancing terrestrial heat flow to outer space through the installation of thermally-emissive surfaces on Earth that require zero energy consumption or pollution. In contrast to compression-based cooling systems that are prevalently used (e.g., air conditioners) consume substantial amounts of energy, have a net heating effect, require ready access to electricity, and often require coolants that are ozone-depleting or have a strong greenhouse effect.Application of PDRCs may also increase the efficiency of systems benefiting of a better cooling, such like photovoltaic systems, dew collection techniques, and thermoelectric generators.PDRC surfaces are designed to be high in solar reflectance (to minimize heat gain) and strong in longwave infrared (LWIR) thermal radiation heat transfer through the atmosphere's infrared window (8–13 µm) to cool temperatures even during the daytime. It is also referred to as passive radiative cooling, daytime passive radiative cooling, radiative sky cooling, photonic radiative cooling, and terrestrial radiative cooling. PDRC differs from solar radiation management because it increases radiative heat emission rather than merely reflecting the absorption of solar radiation.Some estimates propose that if 1–2% of the Earth's surface area were dedicated to PDRC that warming would cease and temperature increases would be rebalanced to survivable levels. Regional variations provide different cooling potentials with desert and temperate climates benefiting more from application than tropical climates, attributed to the effects of humidity and cloud cover on reducing the effectiveness of PDRCs. 
Low-cost scalable PDRC materials feasible for mass production have been developed, such as coatings, thin films, metafabrics, aerogels, and biodegradable surfaces.
PDRCs can be included in self-adaptive systems, 'switching' from passive cooling to heating to mitigate any potential "overcooling" effects in urban environments. They have also been developed in colors other than white, although there is generally a tradeoff in cooling potential, since darker surfaces are less reflective. Research, development, and interest in PDRCs have grown rapidly since the 2010s, attributed to a scientific breakthrough in the use of photonic metamaterials to achieve daytime cooling in 2014, along with growing concerns over energy use and global warming.
Classification
Passive daytime radiative cooling is not a carbon dioxide removal (CDR) or solar radiation management (SRM) method; rather, it enhances longwave infrared thermal radiation heat transfer from the Earth's surface through the infrared window to the coldness of outer space to achieve daytime cooling. Solar radiation is reflected by the PDRC surface to minimize heat gain and maximize thermal emittance. PDRC differs from SRM because it increases radiative heat emission rather than merely reflecting the absorption of solar radiation. PDRC has been referred to as an alternative or "third approach" to geoengineering, and has also been classified as a sustainable and renewable cooling technology.
Global implementation
When applied globally, PDRC can lower rising temperatures to slow and reverse global warming. Aili et al. concludes that "widescale adoption of radiative cooling could reduce air temperature near the surface, if not the whole atmosphere." To address global warming, PDRCs must be designed "to ensure that the emission is through the atmospheric transparency window and out to space, rather than just to the atmosphere, which would allow for local but not global cooling."
PDRC is not proposed as a standalone solution to global warming, but as a complement to a global reduction in CO2 emissions and a transition off of fossil fuel energy. Otherwise, "the radiative balance will not last long, and the potential financial benefits of mitigation will not fully be realized because of continued ocean acidification, air pollution, and redistribution of biomass" from high remaining levels of atmospheric CO2, as per Munday, who summarized the global implementation of PDRC as follows:

Currently the Earth is absorbing ~1 W/m2 more than it is emitting, which leads to an overall warming of the climate. By covering the Earth with a small fraction of thermally emitting materials, the heat flow away from the Earth can be increased, and the net radiative flux can be reduced to zero (or even made negative), thus stabilizing (or cooling) the Earth (...) If only 1%–2% of the Earth's surface were instead made to radiate at this rate rather than its current average value, the total heat fluxes into and away from the entire Earth would be balanced and warming would cease.

The estimated total surface area of coverage is 5×10^12 m^2, or about half the size of the Sahara Desert. Global implementation may be more predictable if distributed in a decentralized manner, rather than concentrated in a few heavily centralized locations on the Earth's surface. Mandal et al. refer to this as a "distributed geoengineering" strategy that can mitigate "weather disruptions that may arise from large-scale, centralized geoengineering." Desert climates have the highest radiative cooling potential due to low year-round humidity and cloud cover, while tropical climates have a lower cooling potential due to the presence of humidity and cloud cover.

Total costs for global implementation have been estimated at $1.25 to $2.5 trillion, or about 3% of global GDP, with probable reductions in price at scale.
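Munday's 1%–2% figure can be checked with back-of-envelope arithmetic. The sketch below (Python) assumes an Earth surface area of ~5.1×10^14 m^2 and a net extra emission of 100 W/m^2 per PDRC surface (the rough per-surface cooling power quoted later in this article); it reproduces the ~5×10^12 m^2 area estimate:

```python
# Back-of-envelope check of the global-implementation area estimate.
# Assumed inputs: Earth's surface area and the ~1 W/m^2 imbalance from the text.
EARTH_SURFACE_M2 = 5.1e14       # total surface area of Earth, m^2
IMBALANCE_W_M2 = 1.0            # net absorbed flux, W/m^2 (text: ~1 W/m^2)
PDRC_NET_COOLING_W_M2 = 100.0   # assumed extra emission per PDRC surface, W/m^2

total_excess_w = EARTH_SURFACE_M2 * IMBALANCE_W_M2
required_area_m2 = total_excess_w / PDRC_NET_COOLING_W_M2
fraction = required_area_m2 / EARTH_SURFACE_M2

print(f"required PDRC area: {required_area_m2:.1e} m^2 "
      f"({fraction:.1%} of Earth's surface)")
# → required PDRC area: 5.1e+12 m^2 (1.0% of Earth's surface)
```

The result sits at the low end of Munday's 1%–2% range because the assumed 100 W/m^2 is optimistic; halving it doubles the required area.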
These costs have been described as "a small investment compared to the estimated $20 trillion global benefits predicted by limiting global warming to 1.5°C rather than 2°C," as per Munday. Low-cost scalable materials have been developed for widescale implementation, although some challenges toward commercialization remain.

Some studies have recommended that efforts focus on raising the solar reflectance or albedo of surfaces from very low values to high values, so long as a thermal emittance of at least 90% can be achieved. For example, while the albedo of an urban rooftop may be 0.2, increasing its reflectivity to 0.9 is far more impactful than making an already reflective surface slightly more reflective, such as from 0.9 to 0.97.
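The rooftop comparison works out as simple arithmetic: absorbed solar power is (1 − albedo) × irradiance. A minimal sketch in Python, assuming the 1000 W/m2 clear-day irradiance cited elsewhere in this article:

```python
# Absorbed solar power for a given albedo, assuming 1000 W/m^2 irradiance.
SOLAR_W_M2 = 1000.0

def absorbed(albedo: float) -> float:
    """Solar power absorbed per square metre of surface."""
    return (1.0 - albedo) * SOLAR_W_M2

# Raising a dark roof's albedo from 0.2 to 0.9 cuts absorption by ~700 W/m^2;
# polishing an already reflective 0.9 surface to 0.97 saves only ~70 W/m^2.
saving_dark_roof = absorbed(0.2) - absorbed(0.9)
saving_white_roof = absorbed(0.9) - absorbed(0.97)
print(f"{saving_dark_roof:.0f} vs {saving_white_roof:.0f} W/m^2")
# → 700 vs 70 W/m^2
```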
Benefits
Studies have noted the following benefits of widescale implementation of passive daytime radiative cooling:
Advancing toward a carbon neutral future and achieving net-zero emissions.
Relieving electrical grids and renewable energy sources of electric energy otherwise devoted to cooling.
Balancing the Earth's energy budget.
Cooling human body temperatures during extreme heat.
Improving atmospheric water collection systems and dew harvesting techniques.
Improving performance of solar energy systems.
Mitigating energy crises.
Mitigating urban heat island effect.
Reducing greenhouse gas emissions by replacing fossil fuel energy use devoted to cooling.
Reducing local and global temperature increases associated with global warming.
Reducing thermal pollution of water resources.
Reducing water consumption for wet cooling processing.
Advantages to solar radiation management
Passive daytime radiative cooling is referred to as more stable, adaptable, and reversible when compared to stratospheric aerosol injection, which proposes injecting particles into the atmosphere to reflect incoming solar radiation and thereby reduce temperatures. Studies have warned of stratospheric aerosol injection's potential to contribute to further ozone loss and to heat the Earth's lower stratosphere, stating that the injection of sulfate particles "would reflect more of the incoming solar radiation back into space, but it would also capture more of the outgoing thermal radiation back to the Earth" and therefore accelerate warming.

Wang et al. state that stratospheric aerosol injection "might cause potentially dangerous threats to the Earth's basic climate operations" that may not be reversible, and thus put forth a preference for passive radiative cooling. Munday noted that although "unexpected effects will likely occur" with the global implementation of PDRC, "these structures can be removed immediately if needed, unlike methods that involve dispersing particulate matter into the atmosphere, which can last for decades."

When compared to the reflective surfaces approach of increasing the reflectivity or albedo of surfaces, such as by painting roofs white, or the space mirror proposals of "deploying giant reflective surfaces in space," Munday states that "the increased reflectivity likely falls short of what is needed and comes at a high financial cost." PDRC differs from the reflective surfaces approach by "increasing the radiative heat emission from the Earth rather than merely decreasing its solar absorption."
Function
The basic function of PDRCs is to be high in both solar reflectivity (in 0.4–2.5 µm) and in heat emissivity (in 8–13 µm), to maximize "net emission of longwave thermal radiation" and minimize "absorption of downward shortwave radiation." PDRCs use the infrared window (8–13 µm) for heat transfer with the coldness of outer space (~2.7 K) to radiate heat and thereby lower ambient temperatures with zero energy input.

PDRCs mimic the natural process of radiative cooling, in which the Earth cools itself by releasing heat to outer space (Earth's energy budget), but do so during the daytime, lowering ambient temperatures under direct solar intensity. On a clear day, solar irradiance can reach 1000 W/m2 with a diffuse component between 50 and 100 W/m2. The average PDRC has an estimated cooling power of ~100–150 W/m2. The cooling power of PDRCs is proportional to the exposed surface area of the installation.
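The balance between emitted thermal radiation, absorbed atmospheric radiation, and absorbed sunlight can be sketched with a simple grey-body model. This is a rough illustration, not a spectral calculation; the 0.95 reflectance/emittance values and the single effective sky emissivity of 0.8 are assumptions, not measured properties:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(t_surface_k, t_ambient_k, *, reflectance=0.95,
                      emittance=0.95, solar_w_m2=1000.0, sky_emissivity=0.8):
    """Net cooling power of a PDRC surface in a simplified grey-body balance:
    thermal emission minus absorbed atmospheric radiation minus absorbed
    sunlight. A real analysis integrates spectrally over the 8-13 um window;
    sky_emissivity here is a single assumed effective value."""
    emitted = emittance * SIGMA * t_surface_k ** 4
    absorbed_sky = emittance * sky_emissivity * SIGMA * t_ambient_k ** 4
    absorbed_solar = (1.0 - reflectance) * solar_w_m2
    return emitted - absorbed_sky - absorbed_solar

# A surface at ambient temperature (300 K) under full sun still cools:
print(f"{net_cooling_power(300.0, 300.0):.0f} W/m^2")
```

Raising `sky_emissivity` (a humid, cloudy sky) quickly drives the net power negative, which is the mechanism behind the climatic variations discussed below.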
Measuring effectiveness
To measure a PDRC surface's cooling power, the absorbed powers of atmospheric and solar radiation must be quantified. PDRC should not be measured when the surface is in a balanced or controlled state, but rather in a real-world setting. Standardized devices to measure PDRC effectiveness have been proposed.

Evaluating atmospheric downward longwave radiation based on "the use of ambient weather conditions such as the surface air temperature and humidity instead of the altitude-dependent atmospheric profiles" may be problematic, since "downward longwave radiation comes from various altitudes of the atmosphere with different temperatures, pressures, and water vapor contents" and the atmosphere "does not have uniform density, composition, and temperature across its thickness."
Broadband emitters (BE) vs. selective emitters (SE)
PDRCs can be broadband in their thermal emittance, meaning they possess high emittance in both the solar spectrum and the atmospheric LWIR window (8 to 14 μm), or selective emitters, meaning they emit narrowband longwave infrared radiation only within the infrared window.

In theory, selective thermal emitters can achieve higher cooling power. However, selective emitters also face additional challenges in real-world applications that can weaken their performance, such as dropwise condensation, which is common even in semi-arid environments, can accumulate on the PDRC surface even when it has been made hydrophobic, and can alter the narrowband emission. Broadband emitters also outperform selective materials when "the material is warmer than the ambient air, or when its sub-ambient surface temperature is within the range of several degrees."

Both emitter types can be advantageous for different applications. Broadband emitters may be less problematic for horizontal applications, such as on roofs, whereas selective emitters may be more useful on vertical surfaces like building facades, where dropwise condensation is inconsequential and their stronger cooling power can be actualized.

Broadband emitters can be made angle-dependent to potentially enhance their cooling performance. Polydimethylsiloxane (PDMS) is a common broadband emitter used for PDRC. Most PDRC materials are broadband, primarily because of their lower cost and higher performance at above-ambient temperatures.
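One way to see why the atmospheric window matters to a selective emitter: at typical terrestrial temperatures (~300 K), roughly a third of a blackbody's emission already falls inside 8–13 µm. A quick numerical check in Python (trapezoidal integration of Planck's law; the integration bounds and step count are arbitrary choices for this sketch):

```python
import math

# Physical constants (SI)
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck(wl_m: float, t_k: float) -> float:
    """Blackbody spectral emissive power per unit wavelength (W m^-2 m^-1)."""
    return (2.0 * math.pi * H * C**2 / wl_m**5) / math.expm1(
        H * C / (wl_m * KB * t_k))

def band_power(lo_um: float, hi_um: float, t_k: float, n: int = 5000) -> float:
    """Trapezoidal integral of planck() between two wavelengths in micrometres."""
    lo, hi = lo_um * 1e-6, hi_um * 1e-6
    dx = (hi - lo) / n
    ys = [planck(lo + i * dx, t_k) for i in range(n + 1)]
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

total = band_power(0.1, 1000.0, 300.0)  # effectively the whole spectrum
window = band_power(8.0, 13.0, 300.0)   # the 8-13 um atmospheric window
print(f"fraction of 300 K emission inside 8-13 um: {window / total:.0%}")
```

The `total` value also lands close to the Stefan-Boltzmann result σT⁴ ≈ 459 W/m² at 300 K, a useful sanity check on the integration.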
Hybrid systems
Combining PDRCs with other systems may increase their cooling power. A combined thermal insulation, evaporative cooling, and radiative cooling system consisting of "a solar reflector, a water-rich and IR-emitting evaporative layer, and a vapor-permeable, IR-transparent, and solar-reflecting insulation layer" demonstrated 300% higher ambient cooling power. This could extend the shelf life of food by 40% in humid climates and 200% in dry climates without refrigeration. The system however requires water "re-charges" to maintain its cooling power, with more frequent re-charges in hot climates than in cooler climates.

A dual-mode asymmetric photonic mirror (APM) consisting of silicon-based diffractive gratings could achieve all-season cooling, even under cloudy and humid conditions, as well as heating. The cooling power of the APM could be 80% higher than that of standalone radiative coolers. Under a cloudy sky it could achieve 8 °C more cooling and, in heating mode, temperatures 5.7 °C higher.
Climatic variations
The global cooling potential of various areas around the world varies primarily based on climate zones and the presence of weather patterns and events. Dry and hot regions generally have a higher radiative cooling power (estimated up to 120 W/m2), while colder regions or those with high humidity or cloud cover generally have lower global cooling potentials. The cooling potential of various regions can also change from winter to summer due to shifts in humidity and cloud cover. Studies mapping the daytime radiative cooling potential have been done for China and India, the United States, and on a continental scale across Europe.
Regional cooling potential
Desert climates
Dry regions such as western Asia, north Africa, Australia, and the southwestern United States are ideal for PDRC application due to the relative lack of humidity and cloud cover in both winter and summer. The cooling potential for desert regions has been estimated "in the higher range of 80–110 W/m2" as per Aili et al., and at 120 W/m2 as per Yin et al. The Sahara Desert and western Asia together form the largest area on Earth with a high cooling potential in both winter and summer.

The cooling potential of desert regions risks being relatively unfulfilled due to very low population densities, which may lower interest in applying PDRCs for local cooling. However, in the event of global implementation, sparsely populated or unpopulated desert climates may be an important "land surface contribution to the planetary albedo" which could "reduce air temperature near the surface, if not the whole atmosphere."
Temperate climates
Temperate climates, which tend to be "transitional" zones between dry and humid climates, have a high radiative cooling potential and higher average population densities than desert climates, which may increase willingness to apply PDRCs in these zones. High-population areas in temperate climatic zones may be susceptible to an "overcooling" effect from PDRCs (see the overcooling section below) due to temperature shifts from hot summers to mild winters, which can be overcome by modifying PDRCs to adjust for temperature shifts.
Tropical climates
While passive radiative cooling technologies have proven successful in mid-latitude regions of Earth, reaching the same level of performance has proven more difficult in tropical climates. This is primarily attributed to the higher solar irradiance and atmospheric radiation of these zones, particularly their humidity and cloud cover. The average cooling potential of hot and humid climates varies between 10 and 40 W/m2, significantly lower than that of hot and dry climates.

For example, the cooling potential of most of southeast Asia and the Indian subcontinent is significantly diminished in the summer due to a dramatic increase in humidity, dropping as low as 10–30 W/m2. Other similar zones, such as tropical savanna areas in Africa, see a more modest decline during summer, dropping to 20–40 W/m2. However, tropical regions generally have a higher albedo or radiative forcing due to sustained cloud cover, and thus their land surface contributes less to planetary albedo.

A study by Han et al. determined that a PDRC surface in tropical climates needs a solar reflectance of at least 97% and an infrared emittance of at least 80% to achieve sub-ambient temperatures. The researchers used a BaSO4-K2SO4 coating with a "solar reflectance and infrared emittance (8–13 μm) of 98.4% and 95% respectively" in the tropical climate of Singapore and achieved a "sustained daytime sub-ambient temperature of 2°C" under direct solar intensity of 1000 W/m2.
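Han et al.'s reflectance threshold can be put in perspective with these cooling budgets: when a climate offers only 10–40 W/m2 of cooling potential, a few percent of absorbed sunlight consumes most or all of it. A minimal sketch (Python, assuming the 1000 W/m2 solar intensity quoted in the study):

```python
# Absorbed sunlight at several solar reflectance values, compared with the
# 10-40 W/m^2 cooling potential typical of hot, humid tropical climates.
SOLAR_W_M2 = 1000.0
TROPICAL_BUDGET_W_M2 = (10.0, 40.0)  # (low, high) cooling potential from the text

for reflectance in (0.90, 0.95, 0.97, 0.984):
    absorbed = (1.0 - reflectance) * SOLAR_W_M2
    exceeds = absorbed > TROPICAL_BUDGET_W_M2[1]
    print(f"R = {reflectance:.1%}: absorbs {absorbed:5.1f} W/m^2"
          f"{'  (exceeds the whole budget)' if exceeds else ''}")
```

At 90% or 95% reflectance the absorbed sunlight alone (100 and 50 W/m^2) exceeds even the upper end of the tropical budget, which is why near-unity reflectance is required for sub-ambient cooling there.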
Variables
Humidity and cloud coverage
Humidity and cloud coverage significantly weaken PDRC effectiveness. A study by Huang et al. noted that "vertical variations of both vapor concentration and temperature in the atmosphere" can have a considerable impact on radiative coolers. The authors put forth that aerosol and cloud coverage can also weaken the effectiveness of radiators and thus concluded that adaptable "design strategies of radiative coolers" are needed to maximize effectiveness under these climatic conditions. Regions with high humidity and cloud cover have less global cooling potential than areas with low humidity and cloud cover.
Dropwise condensation
The formation of dropwise condensation on PDRC surfaces can alter the infrared emittance of selective PDRC emitters, which can weaken their performance. Even in semi-arid environments, dew formation on PDRC surfaces can occur. Dew formation "may broaden the narrowband emittances of the selective emitter and reduce their sub-ambient cooling power and their supposed cooling benefits over broadband emitters," as per Simsek et al., who discuss the implications for the performance of selective emitters:

In showing that dropwise condensation on horizontal emitters leads to broadband emittance regardless of the emitter, our work shows that the assumed benefits of selective emitters are even smaller when it comes to the largest application of radiative cooling – cooling roofs of buildings. However, recently, it has been shown that for vertical building facades experiencing broadband summertime terrestrial heat gains and wintertime losses, selective emitters can achieve seasonal thermoregulation and energy savings. Since dew formation appears less likely on vertical surfaces even in exceptionally humid environments, the thermoregulatory benefits of selective emitters will likely persist in both humid and dry operating conditions.
Rain
Rain can generally help clean PDRC surfaces that have been covered with dust, dirt, or other debris and improve their reflectivity. However, in humid areas, consistent rain can result in heavy water accumulation on PDRC surfaces which can hinder performance. In response, porous PDRCs have been developed. Another response is to make hydrophobic PDRCs which are "self-cleaning." Scalable and sustainable hydrophobic PDRCs that avoid VOCs have been developed that repel rainwater and other liquids.
Wind
Wind may have some effect on altering the efficiency of passive radiative cooling surfaces and technologies. Liu et al. proposes using a "tilt strategy and wind cover strategy" to mitigate effects of wind. The researchers found regional differences in regard to the impacts of wind cover in China, noting that "85% of China's areas can achieve radiative cooling performance with wind cover" whereas in northwestern China wind cover effects would be more substantial. Bijarniya et al. similarly proposes the use of a wind shield in areas susceptible to high winds.
Materials and production
Solar-reflective and heat-emissive surfaces can be of various material compositions. However, for widespread application to be feasible, PDRC materials must be low cost, available for mass production, and applicable in many contexts. Most research has focused on PDRC coatings and thin films, which tend to be more available for mass production, lower cost, and applicable in a wider range of contexts, although other materials may provide potential for diverse applications.

Some PDRC research has also developed more eco-friendly or sustainable materials, even if not fully biodegradable. Zhong et al. state: "most PDRC materials now are non-renewable polymers, artificial photonic or synthetic chemicals, which will cause excessive CO2 emissions by consuming fossil fuels and go against the global carbon neutrality goal. Environmentally friendly bio-based renewable materials should be an ideal material to devise PDRC systems."
Multilayer and complex structures
Advanced photonic materials and structures, such as multilayer thin films, micro/nanoparticles, photonic crystals, metamaterials, and metasurfaces, have been tested to significantly facilitate radiative cooling. However, while multilayer and complex nano-photonic structures have proven successful in experimental scenarios and simulations, widespread application "is severely restricted because of the complex and expensive processes of preparation," as per Cui et al. Similarly, Zhang et al. noted that "scalable production of artificial photonic radiators with complex structures, outstanding properties, high throughput, and low cost is still challenging." This has advanced research of simpler structures for PDRC materials that are more suited for mass production.
Coatings
PDRC coatings or paints tend to be advantageous for their direct application to surfaces, which simplifies preparation and reduces costs, although not all PDRC coatings are inexpensive. Coatings generally offer "strong operability, convenient processing, and low cost, which have the prospect of large-scale utilization," as per Dong et al. PDRC coatings have been developed in colors other than white while still demonstrating high solar reflectance and heat emissivity.

Coatings must be durable and resistant to soiling, which can be achieved with porous PDRCs or hydrophobic topcoats that can withstand cleaning, although hydrophobic coatings use polytetrafluoroethylene or similar compounds to be water-resistant. Negative environmental impacts can be mitigated by limiting the use of other toxic solvents common in paints, such as acetone. Non-toxic and water-based paints have been developed, though more research and development is needed.

Porous polymer coatings (PPCs) exhibit excellent PDRC performance. These polymers have a high concentration of tiny pores, which scatter light effectively at the boundary between the polymer and the air. This scattering enhances both solar reflectance (reflecting more than 96% of sunlight) and thermal emittance (emitting 97% of heat), resulting in a surface temperature six degrees cooler than the surroundings at noon in Phoenix. Additionally, the process is solution-based and can be easily scaled up. A new design for coloring PPCs coats a dye of the desired color onto the porous polymer rather than mixing it into the polymer as in traditional designs, allowing the coating to retain more of its infrared performance and reject more heat.

The cost of PDRC coatings was significantly lowered with a 2018 study by Atiganyanun et al., which demonstrated how "photonic media, when properly randomized to minimize the photon transport mean free path, can be used to coat a black substrate and reduce its temperature by radiative cooling." This coating could "outperform commercially available solar-reflective white paint for daytime cooling" without using expensive manufacturing steps or materials.

PDRC coatings that are described as scalable and low-cost include:
Li et al. (2019), aluminum phosphate coating, solar reflectance 97%, heat emittance 90%, daytime air temperature ~4.2 °C lower than ambient temperature (~4.8 °C lower than commercial heat insulation coating), predicted estimated cost by Dong et al. at $1.2/m2, tested in Guangzhou (daytime humidity 41%), selective emitter (SE).
Li et al. (2021), ultrawhite BaSO4 paint with 60% volume concentration, solar reflectance 98.1%, heat emittance 95%, daytime air temperature ~4.5 °C lower than ambient, "providing great reliability, convenient paint form, ease of use, and compatibility with the commercial paint fabrication process."
Weng et al. (2021), porous PDMS (Polydimethylsiloxane) sponge emitter template method for coatings, solar reflectance 95%, heat emittance 96.5%, daytime air temperature ~8 °C lower than ambient, avoids hazardous etching agents (e.g., hydrofluoric acid, hydrogen peroxide, acetic acid) or VOCs (e.g., acetone, dimethylformamide, tetrahydrofuran, hexane), "compatibility with large-scale production," tested in Hangzhou (daytime humidity ~61%).
Wang et al. (2022), waterborne thermochromic coating free of ecotoxic and carcinogenic titanium dioxide, solar reflectance 96%, heat emittance 94%, daytime air temperature ~7.1 °C lower than ambient, and "can be produced at a large scale and conveniently coated on various substrates through traditional drop casting, spraying, roller painting, or spin-coating methods" and "switchable [between] solar heating and radiative cooling," tested in Shanghai (daytime humidity ~28%).
Dong et al. (2022), BaSO4, CaCO3, and SiO2 particles coating, solar reflectance 97.6%, heat emittance 89%, daytime air temperature ~8.3 °C lower than ambient (~5.5 °C lower than commercial white paints), described "for large-scale commercial production" with a predicted estimated cost of $0.5/m2, tested in Weihai (daytime humidity 40%).
Zhai et al. (2022), α-Bi2O3 colored coating, solar reflectance 99%, heat emittance 97%, daytime air temperature ~2.31 °C lower than ambient (average cooling power 68 W/m2), uses "low cost of raw oxide materials, and simple preparation process," tested in Nanjing (daytime humidity 54%).
Films
Many PDRC thin films have been developed which demonstrate very high solar reflectance and heat emittance. However, films with precise patterns or structures are not scalable "due to the cost and technical difficulties inherent in large-scale precise lithography," as per Khan et al., or "due to complex nanoscale lithography/synthesis and rigidity," as per Zhou et al.

The polyacrylate hydrogel film from the latter study has broader applications, including potential uses in building construction and large-scale thermal management systems. The film uses sodium polyacrylate, a low-cost industrial material, to achieve high solar reflectance and high mid-infrared emittance. A significant aspect of this material is its ability to absorb atmospheric moisture, enabling it to provide both radiative and evaporative cooling. This dual mechanism allows efficient cooling even under varying atmospheric conditions, including high humidity or limited access to clear skies.

Researchers have attempted to overcome the scalability problem with various methods:
Zhang et al. (2020), facile microstamping method film on low-cost polymer PDMS, solar reflectance 95%, heat emittance 96%, daytime temperature reduction up to 5.1 °C, "promising for scale-up production."
Zhang et al. (2021), low-cost film developed with a phase inversion process using cellulose acetate and calcium silicate, solar reflectance 97.3%, heat emittance 97.2%, daytime temperature reduction up to 7.3 °C below ambient (average net cooling power of 90.7 W m−2), "a low-cost, scalable composite film with novel dendritic cell like structures," tested in Qingdao.
Fan et al. (2022), eco-friendly preparation of superhydrophobic porous polydimethylsiloxane (PDMS) radiative cooling film, daytime temperature reduction up to 11.52 °C below ambient, "the film is promising to be widely used for long-term cooling for outdoor applications."
Nie et al. (2022), composite film made of fluorine-free reagents and SiO2 particles, solar reflectance 85%, heat emittance 95%, average daytime temperature reduction of 12.2 °C, manufactured with "a simple preparation process, which has characteristics of low-cost environmental friendliness and excellent mechanical durability," tested in Hubei.
Zhong et al. (2023), hierarchical flexible fibrous cellulose (wood pulp) film, solar reflectance 93.8%, heat emittance 98.3%, daytime temperature reduction up to 11.3 °C below ambient, study is "the first time to realize high crystallinity and hierarchical microstructures in regenerated cellulose materials by the self-assembly of cellulose macromolecules at the molecular level," which "will provide new perspectives for the development of flexible cellulose materials."
Metafabrics
PDRCs can also come in the form of metafabrics, which can be worn as clothing to shield and regulate body temperatures in times of extreme heat. Most metafabrics are made of petrol-based fibers, although research and development of sustainable or regenerative materials is ongoing. For instance, Zhong et al. state that "new flexible cellulose fibrous films with wood-like hierarchical microstructures need to be developed for wearable PDRC applications."

Shaoning Zeng et al. chose a composite of titanium oxide and polylactic acid (TiO2-PLA) with a polytetrafluoroethylene (PTFE) lamination, a material pivotal in achieving the desired optical characteristics. The fabric underwent thorough optical and thermal characterization, measuring essential properties like reflectivity and emissivity. Numerical simulations, including Lorenz-Mie theory and Monte Carlo simulations, were crucial in predicting the fabric's performance and guiding its optimization. Additionally, mechanical testing was conducted to ensure the fabric's durability, strength, and practicality for everyday use.
One of the most notable results was the metafabric's exceptional ability to facilitate radiative cooling. The fabric achieved high emissivity (94.5%) in the atmospheric window, which is crucial for radiative cooling as it allows body heat to be effectively radiated away. Simultaneously, it exhibited high solar reflectivity (92.4%), meaning it could reflect a significant portion of the sun's rays, further contributing to its cooling effect. This combination of high emissivity and reflectivity is central to its cooling capabilities, significantly outperforming traditional fabrics in this regard. Additionally, the fabric's mechanical properties, including strength, durability, waterproofness, and breathability, confirmed its suitability for everyday clothing. This combination of high radiative cooling performance and strong mechanical properties makes the fabric a promising candidate for practical applications.
Liu et al. (2022), eco-friendly bio-derived regenerable polymer alginate to modify cotton fiber and then in-matrix generate CaCO3 nano- or other micro-particles, solar reflectance 90%, heat emittance 97%, lowered human skin temperature by 5.4 °C, "fully compatible with industrial processing facilities" and with "effective UV protection properties with a UPF value of 15, is fast-dry, and is stable against washing."
Li et al. (2022), wearable hat constructed of a radiative cooling paper with SiO2 fibers and fumed SiO2, solar reflectance 97%, heat emittance 91%, reduced temperatures for the hair of the wearer by 12.9 °C when compared with a basic white cotton hat (and 19 °C when compared with no hat), waterproof and air permeable, "suitable for the manufacture of radiative cooling hat to achieve the thermal management of human head."
Aerogels
Aerogels may be used as a potential low-cost PDRC material scalable for mass production. Some aerogels can also be considered a more environmentally friendly alternative to other materials, with degradable potential and the absence of toxic chemicals. Aerogels can also be useful as a thermal insulation material to reduce solar absorption and parasitic heat gain to improve the cooling performance of PDRCs.
Yue et al. (2022), superhydrophobic waste paper-based (cellulose) aerogel, solar reflectance 93%, thermal emittance 91%, reduced daytime temperatures up to 8.5 °C below ambient in outdoor test, in a building energy simulation the aerogel "showed that 43.4% of cooling energy on average could be saved compared to the building baseline consumption" in China if widely implemented.
Liu et al. (2022), degradable and superhydrophobic stereo-complex poly (lactic acid) aerogel with low thermal conductivity, solar reflectance 89%, heat emittance 93%, reduced daytime temperatures 3.5 °C below ambient, "opens an environmentally sustainable pathway to radiative cooling applications."
Li et al. (2022), low-cost silica-alumina nanofibrous aerogels (SAFAs) synthesized by electrospinning, solar reflectance 95%, heat emittance 93%, reduced daytime temperatures 5 °C below ambient, "the SAFAs exhibit high compression fatigue resistance, robust fire resistance and excellent thermal insulation" with "low cost and high performance," shows potential for further studies.
Nano bubbles
Pigments absorb light; soap bubbles, by contrast, show a prism of different colors on their surfaces. These colors result from the way light interacts with differing thicknesses of the bubble's film, a phenomenon called structural color. Part of Qingchen Shen and Silvia Vignolini's research focuses on identifying the causes behind different types of structural colors in nature. In one case, their group found that cellulose nanocrystals (CNCs), which are derived from the cellulose found in plants, could be made into iridescent, colorful films without any added pigment. They made films with vibrant blue, green, and red colors that, when placed under sunlight, were on average nearly 7 °F cooler than the surrounding air. A square meter of the film generated over 120 watts of cooling power.
Biodegradable surfaces
With the proliferation of PDRC development, many proposed radiative cooling materials are not biodegradable. As per Park et al., "sustainable materials for radiative cooling have not been sufficiently investigated."
Park et al. (2022), eco-friendly porous polymer structure via thermally induced phase separation, solar reflectance 91%, heat emittance 92%, daytime temperature reduction up to 9 °C, sufficient durability for use on buildings and highest cooling effect reported "among all organic-based passive radiation cooling emitters."
Applications
Passive daytime radiative cooling has "the potential to simultaneously alleviate the two major problems of energy crisis and global warming" while being an "environmental protection refrigeration technology." PDRCs thereby have an array of potential applications, but are now most often applied to various aspects of the built environment, such as building envelopes, cool pavements, and other surfaces to decrease energy demand, costs, and CO2 emissions. PDRC has been tested and applied for indoor space cooling, outdoor urban cooling, solar cell efficiency, power plant condenser cooling, among other applications. For outdoor applications, the lifetime of PDRCs should be adequately estimated, both for high humidity and heat as well as for UV stability.
Indoor space cooling
The most common application of passive daytime radiative cooling currently is on building envelopes, including PDRC cool roofs, which can significantly lower indoor space temperatures within buildings. A PDRC roof application can double the energy savings of a white roof. This makes PDRCs a sustainable and low-cost alternative or supplement to air conditioning: they decrease energy demand, relieve energy grids during peak periods, and reduce the CO2 emissions associated with air conditioning, whose hydrofluorocarbon refrigerants can be thousands of times more potent than CO2. Air conditioning alone accounts for 12–15% of global energy usage, while CO2 emissions from air conditioning account for "13.7% of energy-related CO2 emissions, approximately 52.3 EJ yearly", or 10% of total emissions. Air conditioning use is expected to rise despite its negative impacts on energy sectors, costs, and global warming, a feedback that has been described as a "vicious cycle." However, this can be significantly reduced with the mass production of low-cost PDRCs for indoor space cooling. A multilayer PDRC surface covering 10% of a building's roof can replace 35% of the air conditioning used during the hottest hours of daytime. In suburban single-family residential areas, PDRCs can lower energy costs by 26% to 46% in the United States and lower temperatures on average by 5.1 °C. With the addition of "cold storage to utilize the excess cooling energy of water generated during off-peak hours, the cooling effects for indoor air during the peak-cooling-load times can be significantly enhanced" and air temperatures may be reduced by 6.6–12.7 °C. In cities, PDRCs can result in significant energy and cost savings. In a study on US cities, Zhou et al. 
found that "cities in hot and arid regions can achieve high annual electricity consumption savings of >2200 kWh, while <400 kWh is attainable in colder and more humid cities," being ranked from highest to lowest by electricity consumption savings as follows: Phoenix (~2500 kWh), Las Vegas (~2250 kWh), Austin (~2100 kWh), Honolulu (~2050 kWh), Atlanta (~1500 kWh), Indianapolis (~1200 kWh), Chicago (~1150 kWh), New York City (~900 kWh), Minneapolis (~850 kWh), Boston (~750 kWh), Seattle (~350 kWh). In a study projecting energy savings for Indian cities in 2030, Mumbai and Kolkata had a lower energy savings potential, Jaisalmer, Varansai, and Delhi had a higher potential, although with significant variations from April to August dependent on humidity and wind cover.The growing interest and rise in PDRC application to buildings has been attributed to cost savings related to "the sheer magnitude of the global building surface area, with a market size of ~$27 billion in 2025," as estimated in a 2020 study.
Outdoor urban space cooling
Passive daytime radiative cooling surfaces can mitigate extreme heat from the urban heat island effect, which occurs in over 450 cities worldwide, where urban areas can be as much as 10–12 °C (18–22 °F) hotter than surrounding rural areas. On an average hot summer day, the roofs of buildings can be 27–50 °C (49–90 °F) hotter than the surrounding air, further warming air temperatures through convection. Well-insulated dark rooftops are significantly hotter than all other urban surfaces, including asphalt pavements, further expanding air conditioning demand (which in turn accelerates global warming and the urban heat island effect through the release of waste heat into the ambient air) and increasing the risk of heat-related disease and fatal health effects. PDRCs can be applied to building roofs and urban shelters to significantly lower surface temperatures with zero energy consumption by reflecting heat out of the urban environment and into outer space. The primary obstacle to PDRC implementation in urban areas is the glare that can be caused by the reflection of visible light onto surrounding buildings. Colored PDRC surfaces, such as those of Zhai et al., may mitigate glare issues; "super-white paints with commercial high-index (n~1.9) retroreflective spheres," as per Mandal et al., or other retroreflective materials (RRM) may also mitigate glare, although further research and development is needed. Surrounding buildings without PDRC application may weaken the cooling power of PDRCs. Even when installed on roofs in highly dense urban areas, broadband radiative cooling panels have been shown to lower surface temperatures at the sidewalk level. A study by Khan et al. published in 2022 assessed the effects of PDRC surfaces in winter, including both non-modulated and modulated PDRCs, in the Kolkata metropolitan area. 
A non-modulated PDRC with a reflectance of 0.95 and emissivity of 0.93 decreased ground surface temperatures by nearly 4.9 °C (8.8 °F), with an average daytime reduction of 2.2 °C (4.0 °F). While in summer the cooling effects of broadband non-modulated PDRCs may be desirable, in winter they could present an uncomfortable "overcooling" effect for city populations and thus increase energy use for heating. This can be mitigated by broadband modulated PDRCs, which the study found could increase daily ambient urban temperatures by 0.4–1.4 °C (0.72–2.52 °F) in winter. While "overcooling" is unlikely in a tropical metropolitan area such as Kolkata, elsewhere it could reduce the willingness to apply PDRCs in urban spaces. Therefore, modulated PDRCs may be preferred in cities with warm summers and cold winters for controlled cooling, while non-modulated PDRCs may be more beneficial for cities with hot summers and moderate winters. The authors expected "low-cost optically modulated passive systems" to be commercially available soon. In a study on urban bus shelters, it was found that most shelters fail to provide thermal comfort for commuters; on average, a tree could provide 0.5 °C (0.90 °F) more cooling. Other methods to cool shelters often resort to air conditioning or other energy-intensive measures that can crowd commuters into an enclosed space for cooling. Urban shelters with PDRC roofing can significantly reduce temperatures with zero added costs or energy input, while adding "a non-reciprocal mid-infrared cover" can increase benefits by reducing incoming atmospheric radiation as well as reflecting radiation from surrounding buildings, as per Mokharti et al. For outdoor urban space cooling, it is recommended that PDRC implementation in urban areas primarily focus on increasing albedo so long as heat emissivity can be maintained at the standard of 90%, as per Anand et al. 
This can rapidly and significantly lower temperatures while reducing energy demand and costs for cooling in urban environments.
Solar energy efficiency
Passive daytime radiative cooling surfaces can be integrated with solar energy plants, referred to as solar energy–radiative cooling (SE–RC), to improve functionality and performance by preventing solar cells from overheating and thus degrading. Since solar cells have a maximum efficiency of 33.7% (with the average commercial PV panel having a conversion rate of around 20%), the majority of the absorbed power produces excess heat and raises the operating temperature of the system. Solar cell efficiency declines by 0.4–0.5% for every 1 °C increase in temperature. Passive daytime radiative cooling can extend the life of solar cells by lowering the operating temperature of the system. Integrating PDRCs into solar energy systems is also relatively simple, given that "most solar energy harvesting systems have a sky-facing flat plate structural design, which is similar to radiative cooling systems." Integration has been shown to "produce a higher energy gain per unit area" while also increasing the "total useful working time." Integrated systems can mitigate issues of "limited working time and low energy gain" and are "a current research hotspot," as per Ahmed et al. Methods have been proposed to potentially enhance cooling performance. Lu et al. propose a "full-spectrum synergetic management (FSSM) strategy to cool solar cells, which combines radiative cooling and spectral splitting to enhance radiative heat dissipation and reduce the waste heat generated by the absorption of sub-BG photons." Outdoor tests using various PDRC materials, some more scalable than others, have demonstrated varying degrees of cooling power:
Wang et al. (2021), a periodic pyramid-textured polydimethylsiloxane (PDMS) radiative film, cooled commercial silicon solar cells by over 2 °C.
Lee et al. (2021), a visibly clear PDRC designed "using a rational design to deploy an optical modulator (n-hexadecane) in SiO2 aerogel microparticles within a silicone elastomer matrix," cooled commercial silicon solar cells by 7.7 °C on average.
Tang et al. (2022), nanoporous anodic aluminum oxide film, flat-panel solar cell relative efficiency improvement of ~2.72%, concentrated solar cell relative efficiency improvement of ~16.02%, described as "a high-performance and scalable radiative cooler."
Zhao et al. (2022), a silica micro-grating photonic cooler, cooled commercial silicon cells by 3.6 °C under solar intensity of 830 W m−2 to 990 W m−2.
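The efficiency figures above can be combined in a rough, illustrative estimate (not taken from any of the cited studies): given the ~0.4–0.5% relative efficiency loss per 1 °C cited earlier, the absolute efficiency recovered by a given amount of radiative cooling follows from simple arithmetic. The coefficient and baseline efficiency below are assumed mid-range values.

```python
# Illustrative sketch: absolute PV efficiency points recovered by
# radiative cooling, assuming a mid-range temperature coefficient of
# 0.45%/degC (relative) and a 20% baseline cell efficiency.

def efficiency_gain(delta_t_cooling, temp_coeff=0.0045, base_eff=0.20):
    """Absolute efficiency recovered for a cell cooled by delta_t_cooling (degC)."""
    return base_eff * temp_coeff * delta_t_cooling

# Example: the ~7.7 degC average cooling reported by Lee et al. (2021)
gain = efficiency_gain(7.7)
print(f"absolute efficiency gain: {gain * 100:.2f} points")  # 0.69 points
```

Under these assumptions, a few degrees of passive cooling recovers a fraction of a percentage point of absolute efficiency, which is consistent with the modest relative improvements reported for flat-panel cells.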
Personal thermal management
The use of passive daytime radiative cooling in fabrics to regulate body temperature during extreme heat is in research and development. While other fabrics are useful for heat accumulation, they "may lead to heat stroke in hot weather." Zeng et al. state that "incorporating passive radiative cooling structures into personal thermal management technologies could effectively defend humans against intensifying global climate change." This field of research is referred to as personal thermal management (PTM). Wearable PDRCs can come in different forms and may be particularly useful for outdoor workers; they are not yet commercially available, although prototypes have been developed. While most textiles developed are white, colored wearable materials have also been developed, though only in select colors that reflect enough sunlight to minimize heat gain.
Power plant condenser cooling
Passive daytime radiative cooling can be used in various power plant condensers, including thermoelectric power plants and concentrated solar plants (CSP), to cool water for effective use within the heat exchanger. A generalized study of "a covered pond with radiative cooler revealed that 150 W/m2 flux could be achieved without loss of water." PDRC application to power plant condensers can reduce the high water use and thermal pollution caused by water cooling. For a thermoelectric power plant condenser, one study found that supplementing the air-cooled condenser with radiative cooling panels could "get a 4096 kWhth/day cooling effect with a pump energy consumption of 11 kWh/day." A concentrated solar plant (CSP) "on the CO2 supercritical cycle at 550 °C can be improved in 5% net output over an air-cooled system by integration with 14 m2/kWe capacity radiative cooler."
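The thermoelectric condenser figures quoted above imply a striking ratio of cooling delivered to electricity spent; the back-of-envelope check below simply divides the two cited numbers to express it as an effective coefficient of performance (the framing as a COP is our interpretation, not the study's).

```python
# Back-of-envelope check on the cited radiative condenser figures:
# thermal cooling delivered per unit of pump electricity, expressed
# as an effective coefficient of performance (COP).
cooling_kwh_th_per_day = 4096  # cooling effect reported by the study
pump_kwh_per_day = 11          # pump energy consumption reported

effective_cop = cooling_kwh_th_per_day / pump_kwh_per_day
print(f"effective COP ~ {effective_cop:.0f}")  # ~372
```

For comparison, vapor-compression chillers typically achieve COPs in the single digits, which illustrates why near-passive radiative condenser cooling is attractive despite its low areal cooling flux.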
Thermal regulation of buildings
In addition to cooling, passive daytime radiative cooling surfaces can be modified to be self-adaptive for temperature-dependent 'switching' from cooling to heating or, in other words, for full-scale thermal regulation. This can be achieved through switching the thermal emittance of the surface from a high to low value. Applications are limited to testing and commercially available self-switching PDRCs are in research and development.
Thermoelectric generation
When combined with a thermoelectric generator, a passive daytime radiative cooling surface can be used to generate electricity during the daytime and nighttime, although the power generated in tests has been relatively low. Research and development is preliminary.
Automobile and greenhouse cooling
Thermally enclosed spaces, including automobiles and greenhouses, are particularly susceptible to harmful temperature increases, especially during extreme weather. This is because of their heavy use of windows, which are "transparent" to incoming solar radiation yet "opaque" to outgoing long-wave thermal radiation, causing the interior to heat rapidly. The temperature inside an automobile in direct sunlight can rise to 60–82 °C when the ambient temperature is only 21 °C. This accumulation of heat "can cause heat stroke and hyperthermia in the occupants, especially children", which can be alleviated with passive radiative cooling.
Water harvesting
Dew harvesting yields may be improved by passive daytime radiative cooling. Selective PDRC emitters, which have high emissivity only in the atmospheric window (8–13 μm), and broadband emitters may produce varying results. In one study using a broadband PDRC, the researchers condensed "~8.5 mL day of water for 800 W m2 of peak solar intensity." Whereas selective emitters may be less advantageous in other contexts, they may be more advantageous for dew harvesting applications. PDRCs could also improve atmospheric water harvesting when combined with solar vapor generation systems to improve water collection rates.
Water and ice cooling
Passive daytime radiative cooling surfaces can be installed over the surface of a body of water for cooling. In a controlled study, a body of water was cooled 10.6 °C below the ambient temperature using a photonic radiator. PDRC surfaces have also been developed to cool ice and prevent it from melting under sunlight, which has been proposed as a sustainable method of ice protection and can be applied to protect iced or refrigerated food from spoiling.
Unwanted side effects
Jeremy Munday writes that although "unexpected effects will likely occur" with global PDRC implementation, "these structures can be removed immediately if needed, unlike methods that involve dispersing particulate matter into the atmosphere, which can last for decades." Wang et al. state that stratospheric aerosol injection "might cause potentially dangerous threats to the Earth's basic climate operations" that may not be reversible, preferring PDRC. Zevenhoven et al. state that "instead of stratospheric aerosol injection (SAI), cloud brightening or a large number of mirrors in the sky ("sunshade geoengineering") to block out or reflect incoming (short-wave, SW) solar irradiation, long-wavelength (LW) thermal radiation can be selectively emitted and transferred through the atmosphere into space".
"Overcooling" and PDRC modulation
"Overcooling" is cited as a side effect of PDRCs that may be problematic, especially when PDRCs are applied in high-population areas with hot summers and cool winters, characteristic of temperate zones. While PDRC application in these areas can be useful in summer, in winter it can result in an increase in energy consumption for heating and thus may reduce the benefits of PDRCs on energy savings and emissions. As per Chen et al., "to overcome this issue, dynamically switchable coatings have been developed to prevent overcooling in winter or cold environments."The detriments of overcooling can be reduced by modulation of PDRCs, harnessing their passive cooling abilities during summer, while modifying them to passively heat during winter. Modulation can involve "switching the emissivity or reflectance to low values during the winter and high values during the warm period." In 2022, Khan et al. concluded that "low-cost optically modulated" PDRCs are "under development" and "are expected to be commercially available on the market soon with high future potential to reduce urban heat in cities without leading to an overcooling penalty during cold periods."There are various methods of making PDRCs 'switchable' to mitigate overcooling. Most research has used vanadium dioxide (VO2), an inorganic compound, to achieve temperature-based 'switchable' cooling and heating effects. While, as per Khan et al., developing VO2 is difficult, their review found that "recent research has focused on simplifying and improving the expansion of techniques for different types of applications." Chen et al. found that "much effort has been devoted to VO2 coatings in the switching of the mid-infrared spectrum, and only a few studies have reported the switchable ability of temperature-dependent coatings in the solar spectrum." 
Temperature-dependent switching requires no extra energy input to achieve both cooling and heating. Other methods of PDRC 'switching' require extra energy input to achieve the desired effects. One such method involves changing the dielectric environment. This can be done through "reversible wetting" and drying of the PDRC surface with common liquids such as water and alcohol. However, for this to be implemented on a mass scale, "the recycling, and utilization of working liquids and the tightness of the circulation loop should be considered in realistic applications." Another method involves 'switching' through mechanical force, which has been "widely investigated in [PDRC] polymer coatings owing to their stretchability." For this method, "to achieve a switchable coating in εLWIR, mechanical stress/strain can be applied in a thin PDMS film, consisting of a PDMS grating and embedded nanoparticles." One study estimated that, with this method, "19.2% of the energy used for heating and cooling can be saved in the US, which is 1.7 times higher than the only cooling mode and 2.2 times higher than the only heating mode," which may inspire additional research and development.
Glare and visual appearance
Glare caused by surfaces with high solar reflectance may present visibility concerns that can limit PDRC application, particularly within urban environments at ground level. PDRCs that use a "scattering system" to generate reflection in a more diffuse manner have been developed and are "more favorable in real applications," as per Lin et al. Low-cost colored PDRC paint coatings, which reduce glare and increase the color diversity of PDRC surfaces, have also been developed. While some of the surface's solar reflectance is lost in the visible spectrum, colored PDRCs can still exhibit significant cooling power, such as a non-toxic α-Bi2O3 paint by Zhai et al. (resembling the color of the compound) that demonstrated a solar reflectance of 99% and heat emissivity of 97%. Generally, there is a noted tradeoff between cooling potential and darker-colored surfaces. Less reflective colored PDRCs can be applied to walls, while more reflective white PDRCs are applied to roofs, increasing the visual diversity of vertical surfaces while still contributing to cooling.
Commercialization
The commercialization of passive daytime radiative cooling technologies is in an early stage of development. SkyCool Systems, founded by Aaswath P. Raman, who authored the breakthrough study demonstrating the use of photonic metamaterials to make PDRC possible, is a startup commercializing radiative cooling technologies. SkyCool panels have been applied to some buildings in California, reducing energy costs, and the company has received a grant from the California Energy Commission for further application opportunities. 3M, an American multinational corporation, has developed a selectively emissive passive radiative cooling film. The film has been applied through pilot programs that are open for expansion and was tested on bus shelters in Tempe, Arizona. 3M's film achieved "10–20% energy savings when deployed on SkyCool Systems panels and integrated with a building's HVAC or refrigeration system." Radi-Cool, co-founded by Prof. Yin and Prof. Yang, achieved large-scale production and application of zero-energy-consumption radiative cooling technology in China. Radi-Cool products have been applied in settings such as airports (Singapore, Japan), office buildings and shopping malls (Philippines, Malaysia), and industrial warehouses (Germany, Latin America), and the company has expanded its distribution worldwide. Its films and coatings are intended to reduce carbon dioxide emissions.
History
Nocturnal passive radiative cooling has been recognized for thousands of years, with records showing awareness by the ancient Iranians, demonstrated through the construction of Yakhchāls, since 400 B.C.E. Passive daytime radiative cooling was hypothesized by Félix Trombe in 1967. The first experimental setup was created in 1975 but was only successful for nighttime cooling. Further attempts to achieve daytime cooling using different material compositions were not successful. In the 1980s, Lushiku and Granqvist identified the infrared window as a potential way to access the ultracold of outer space and achieve passive daytime cooling. Early attempts at developing passive daytime radiative cooling materials took inspiration from nature, particularly the Saharan silver ant and white beetles, noting how they cool themselves in extreme heat. Research and development in passive daytime radiative cooling evolved rapidly in the 2010s with the discovery of the ability to suppress solar heating using photonic metamaterials, which widely expanded research in the field. This is largely credited to the landmark study by Aaswath P. Raman, Marc Abou Anoma, Linxiao Zhu, Eden Raphaeli, and Shanhui Fan published in 2014.
See also
== References == |
two days before the day after tomorrow | "Two Days Before the Day After Tomorrow" is the eighth episode in the ninth season of the American animated television series South Park. The 133rd episode overall, it originally aired on Comedy Central in the United States on October 19, 2005.In the episode, Stan and Cartman accidentally destroy a beaver dam, which causes a catastrophic flood in the nearby town of Beaverton. To avoid punishment, the boys allow the townfolk to be misled into believing that the dam's destruction was caused by global warming, which triggers panic and mayhem around Colorado and across the country.
The episode was co-written by series co-creator Trey Parker and Kenny Hotz. It parodies the 2004 film The Day After Tomorrow, and also general responses to Hurricane Katrina, particularly the various ad hoc explanations for the increased level of suffering from the hurricane and its aftermath.
Plot
Stan and Cartman are playing in a boat that Cartman claims is his uncle's when Cartman dares Stan to drive the boat, claiming he will take the blame if trouble arises. However, as Stan does not know how to drive a boat, they crash into the world's largest beaver dam, flooding the town of Beaverton. Cartman reveals that he lied about the boat belonging to his uncle and convinces Stan to try to hide their involvement by maintaining complete silence about the incident.
Meanwhile, the flood has a worse outcome than Stan expected. The people of Beaverton are in a state of disaster. Nobody tries to help the situation; instead, everybody would rather figure out who is to blame. At a conference at the Governor's office with top Colorado scientists and government officials, they all declare that the disaster is the result of global warming. At first, it is determined the full effects will take place on "the day after tomorrow". However, some scientists suddenly burst in and state that it has been proven that the disaster will take place "two days before the day after tomorrow".
The declaration of the scientists causes mass hysteria, and everybody runs from "global warming". Most of the South Park people crowd in the community center. Randy authoritatively states that global warming is causing an ice age outside that would kill them if they left.
Stan admits to Kyle that he and Cartman were the cause of the Beaverton flood (although Stan takes most of the blame). The trio then set off to rescue the people by boat. The attempt is a disaster in itself, as they wind up crashing into an oil refinery, compounding the problems of the stranded people, who now must deal with drowning and fire. Meanwhile, Randy, Gerald, and Stephen brave the supposed ice age to find their sons. Dressed in heavy arctic mountaineering gear despite the mild weather, the trio quickly collapse in the street due to heat exhaustion and dehydration, but mistake their symptoms as the "last stages of hypothermia."
The boys barely escape to the roof of the flooded and burning refinery, but Cartman claims that "all Jews carry gold in a little bag around their necks", and holds Kyle at gunpoint, demanding his "Jew gold," not wanting to be poor in a supposedly changing world. Kyle inexplicably has the gold, along with a fake bag, but he throws it into the destroyed refinery to spite Cartman. After this, the boys are rescued by an Army helicopter and fly back to South Park, where the Army declares that Crab People, not global warming, are responsible for breaking the dam. In exasperation, Stan finally admits that he broke the dam, but one of the townspeople incorrectly interprets his admission as a lesson to stop obsessing over who to blame while ignoring the problem. The townspeople all begin to declare, "I broke the dam", with Cartman joining in, knowing that he won't get in trouble now, while Stan tries unsuccessfully to explain that he literally did it, with a boat. Stan gets angry and finally just shouts, "Oh, fuck it!"
Cultural references
This episode parodies the response to Hurricane Katrina, particularly the various ad hoc explanations for the increased level of suffering from the hurricane and its aftermath. In addition, the episode parodies the misplaced anger and unwillingness to negotiate between all the parties in the Katrina relief effort, the distorted media coverage that occurred during the hurricane's aftermath, and the Houston mass evacuation during Hurricane Rita. For instance, when the people conclude that George Bush was the cause of the beaver dam being broken, someone says "George Bush doesn't care about beavers!" in a parody of Kanye West's quote, "George Bush doesn't care about black people." In addition, during the evacuation, only white people are rescued, while a black man can be seen left stranded. This references the accusations of selectively racist rescue efforts and media coverage during the Hurricane Katrina crisis. "Two Days Before the Day After Tomorrow" also parodies the 2004 film The Day After Tomorrow, and general responses to global warming. For instance, the scene where Stan calls his father on the phone while the water level rises is a reference to a similar scene in The Day After Tomorrow where Sam calls his father while trying to outlast the fatal coldness. Several other scenes from the film are parodied in the episode. The final scene where everyone says "I broke the dam" is a reference to the 1960 film Spartacus, in which the title character comes forward as Spartacus and the slave-crowd all claim to be Spartacus in an effort to protect him. The scene in which Cartman confronts Kyle over his "Jew gold" in the flooded fuel factory is a reference to the final scene of the 1976 film Marathon Man, in which a fugitive Nazi war criminal confronts a detective who has taken his diamonds, stolen from Holocaust victims, and is preventing the Nazi from cashing in on them.
References
External links
"Two Days Before the Day After Tomorrow" Full episode at South Park Studios
"Two Days Before the Day After Tomorrow" at IMDb |
berkeley earth | Berkeley Earth is a Berkeley, California-based independent 501(c)(3) non-profit focused on land temperature data analysis for climate science. Berkeley Earth was founded in early 2010 (originally called the Berkeley Earth Surface Temperature project) to address the major concerns from outside the scientific community regarding global warming and the instrumental temperature record. The project's stated aim was a "transparent approach, based on data analysis." In February 2013, Berkeley Earth became an independent non-profit. In August 2013, Berkeley Earth was granted 501(c)(3) tax-exempt status by the US government. The primary product is air temperatures over land, but they also produce a global dataset resulting from a merge of their land data with HadSST.
Berkeley Earth founder Richard A. Muller told The Guardian
...we are bringing the spirit of science back to a subject that has become too argumentative and too contentious, ....we are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find. We are doing this because it is the most important project in the world today. Nothing else comes close.
Berkeley Earth has been funded by unrestricted educational grants totaling (as of December 2013) about $1,394,500. Large donors include Lawrence Berkeley National Laboratory, the Charles G. Koch Foundation, the Fund for Innovative Climate and Energy Research (FICER), and the William K. Bowes Jr. Foundation. The donors have no control over how Berkeley Earth conducts the research or what they publish. The team's preliminary findings, data sets and programs were published beginning in December 2012. The study addressed scientific concerns including the urban heat island effect, poor station quality, and the risk of data selection bias. The Berkeley Earth group concluded that the warming trend is real, that over the past 50 years (between the decades of the 1950s and 2000s) the land surface warmed by 0.91±0.05 °C, and their results mirror those obtained from earlier studies carried out by the U.S. National Oceanic and Atmospheric Administration (NOAA), the Hadley Centre, NASA's Goddard Institute for Space Studies (GISS) Surface Temperature Analysis, and the Climatic Research Unit (CRU) at the University of East Anglia. The study also found that the urban heat island effect and poor station quality did not bias the results obtained from these earlier studies.
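The headline figure above (0.91 ± 0.05 °C of land-surface warming over 50 years) is often quoted as a decadal rate; the conversion is simple arithmetic, sketched below.

```python
# Express the Berkeley Earth land-surface warming cited above
# (0.91 degC over the 50 years between the 1950s and 2000s decades)
# as an average rate per decade.
warming_degc = 0.91
span_years = 50

rate_per_decade = warming_degc / span_years * 10
print(f"{rate_per_decade:.3f} degC per decade")  # 0.182
```

Note this is a linear average over the whole span; the underlying trend is not uniform, so the rate in any particular decade can differ from this figure.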
Scientific team and directors
Berkeley Earth team members include:
Richard A. Muller, founder and Scientific Director. Professor of Physics, UCB and Senior Scientist, Lawrence Berkeley National Laboratory. Muller is a member of the JASON Defense Advisory Group who has been critical of other climate temperature studies before this project.
Elizabeth Muller, Founder and Executive Director
Robert Rohde, lead scientist. Ph.D. in physics, University of California, Berkeley. Rohde's scientific interests include earth sciences, climatology, and scientific graphics. Rohde is the founder of Global Warming Art.
Zeke Hausfather, scientist
Steven Mosher, scientist, co-author of Climategate: The Crutape Letters
Saul Perlmutter, Nobel Prize-winning astrophysicist at Lawrence Berkeley National Laboratory and Professor of Physics at UCB.
Arthur H. Rosenfeld, professor of physics at UCB and former California Energy Commissioner. The research he directed at Lawrence Berkeley National Laboratory led to the development of compact fluorescent lamps.
Jonathan Wurtele, professor of physics
Will Graser
Former team members
Sebastian Wickenburg, Ph.D. Candidate in Physics
Charlotte Wickham, statistical scientist
Don Groom, physicist
Robert Jacobsen, Professor of Physics at UCB and an expert in analyses of large data sets.
David Brillinger, statistical scientist. Professor of Statistics at UCB. A contributor to the theory of time series analysis.
Judith Curry, climatologist and Chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology.
Pamela Hyde, Communications and Project Director
John Li, Energy Geoscience intern
Board of Directors
Elizabeth Muller, president and chair, managing partner of Global Shale.
Will Glaser, Treasurer, founded Pandora Music
Bill Shireman, Secretary "He develops profitable business strategies that drive pollution down and profits up."
Richard Muller, Board Director
Art Rosenfeld, Board Director
Marlan W. Downey, Board Director, Former President of the international subsidiary of Shell Oil; founder of Roxanna Oil; former president of Arco International
Jim Boettcher, Board Director; investments
Initial results
After completing the analysis of the full land temperature data set, consisting of more than 1.6 billion temperature measurements dating back to the 1800s from 15 sources around the world, and originating from more than 39,000 temperature stations worldwide, the group submitted four papers for peer review and publication in scientific journals. The Berkeley Earth study did not assess temperature changes in the oceans, nor did it attempt to assess how much of the observed warming is due to human action. The Berkeley Earth team also released its preliminary findings to the public on October 20, 2011, in order to promote additional scrutiny. The data sets and programs used to analyze the information, and the papers undergoing peer review, were also made available to the public.
The Berkeley Earth study addressed scientific concerns raised by skeptics, including the urban heat island effect, poor station quality, and the risk of data selection bias. The team's initial conclusions are the following:
The urban heat island effect and poor station quality did not bias the results obtained from earlier studies carried out by the U.S. National Oceanic and Atmospheric Administration (NOAA), the Hadley Centre and NASA's GISS Surface Temperature Analysis. The team found that the urban heat island effect is locally large and real, but does not contribute significantly to the average land temperature rise, as the planet's urban regions amount to less than 1% of the land area. The study also found that while stations considered "poor" might be less accurate, they recorded the same average warming trend.
Global temperatures closely matched previous studies from NASA GISS, NOAA and the Hadley Centre that have found global warming trends. The Berkeley Earth group estimates that over the past 50 years the land surface warmed by 0.911 °C, just 2% less than NOAA's estimate. The team's scientific director stated that "...this confirms that these studies were done carefully and that potential biases identified by climate change sceptics did not seriously affect their conclusions."
About one-third of temperature sites around the world with records of 70 years or longer reported cooling (including much of the United States and northern Europe), but two-thirds of the sites showed warming. Individual temperature histories reported from a single location are frequently noisy and/or unreliable, and it is always necessary to compare and combine many records to understand the true pattern of global warming.
The Atlantic multidecadal oscillation (AMO) has played a larger role than previously thought. The El Niño–Southern Oscillation (ENSO) is generally thought to be the main driver of inter-annual warming or cooling, but the Berkeley Earth team's analysis found that global temperature correlates more closely with the state of the Atlantic multidecadal oscillation index, a measure of sea surface temperature in the north Atlantic.
The Berkeley Earth analysis uses a new methodology and was tested against much of the same data as NOAA and NASA. The group uses an algorithm that attaches an automatic weighting to every data point according to its consistency with comparable readings. The team claims this approach allows the inclusion of outlandish readings without distorting the result; standard statistical techniques were used to remove outliers. The methodology also avoids traditional procedures that require long, continuous data segments, thus accommodating short sequences, such as those provided by temporary weather stations. This innovation allowed the group to compile an earlier record than its predecessors, starting from 1800, though with a high degree of uncertainty, because at the time there were only two weather stations in America, just a few in Europe and one in Asia.
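The idea of weighting each data point by its consistency with comparable readings can be illustrated with a short sketch. This is not Berkeley Earth's actual algorithm (their published averaging process is considerably more sophisticated); it is a minimal, hypothetical example of iteratively down-weighting readings that disagree with their neighbors:

```python
import numpy as np

def consistency_weighted_mean(readings, scale=1.0, iterations=10):
    """Average overlapping station series, automatically down-weighting
    readings that disagree with comparable ones (illustrative only).

    readings: 2D array, rows = stations, columns = time steps (NaN = missing).
    """
    mask = ~np.isnan(readings)
    data = np.nan_to_num(readings)
    weights = mask.astype(float)
    for _ in range(iterations):
        # Current best estimate of the regional average at each time step.
        mean = (weights * data).sum(axis=0) / weights.sum(axis=0)
        # Re-weight each reading by its consistency with that estimate:
        # an outlandish reading keeps a small weight rather than being cut.
        weights = mask / (1.0 + ((data - mean) / scale) ** 2)
    return mean

# Three stations over two time steps; station 3's first reading is bad.
stations = np.array([[10.0, 11.0],
                     [10.2, 11.1],
                     [25.0, 11.0]])
print(consistency_weighted_mean(stations))  # first value ends up near 10.1
```

Note that the outlier of 25.0 is included in the average rather than discarded, but its weight shrinks on each pass until it barely moves the result.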
Reactions
Given project leader Muller's well-publicized concerns regarding the quality of climate change research, some critics of that research anticipated that the Berkeley Earth study would be a vindication of their stance. For example, when the study team was announced, Anthony Watts, a climate change denialist blogger who popularized several of the issues addressed by the Berkeley Earth group study, expressed full confidence in the team's methods:
I'm prepared to accept whatever result they produce, even if it proves my premise wrong. ... [T]he method isn't the madness that we've seen from NOAA, NCDC, GISS, and CRU, and, there aren't any monetary strings attached to the result that I can tell. ... That lack of strings attached to funding, plus the broad mix of people involved especially those who have previous experience in handling large data sets gives me greater confidence in the result being closer to a bona fide ground truth than anything we've seen yet.
When the initial results were released and found to support the existing consensus, the study was widely decried by deniers. Watts spoke to The New York Times, which wrote: "Mr. Watts ... contended that the study's methodology was flawed because it examined data over 60 years instead of the 30-year one that was the basis for his research and some other peer-reviewed studies. He also noted that the report had not yet been peer-reviewed and cited spelling errors as proof of sloppiness." Steven Mosher, a co-author of a book critical of climate scientists, also disapproved, saying that the study still lacked transparency. He said: "I'm not happy until the code is released and released in a language that people can use freely." Stephen McIntyre, editor of Climate Audit, a climate-skeptics blog, said that "the team deserves credit for going back to the primary data and doing the work", but even though he had not had an opportunity to read the papers in detail, he questioned the analyses of urban heating and weather station quality.
By contrast, the study was well received by Muller's peers in climate science research. James Hansen, a leading climate scientist and head of the NASA Goddard Institute for Space Studies, commented that he had not yet read the research papers but was glad Muller was looking at the issue. He said: "It should help inform those who have honest skepticism about global warming." Phil Jones, the director of the Climatic Research Unit (CRU) at the University of East Anglia, said: "I look forward to reading the finalised paper once it has been reviewed and published. These initial findings are very encouraging and echo our own results and our conclusion that the impact of urban heat islands on the overall global temperature is minimal."
Michael Mann, director of the Earth System Science Center at Pennsylvania State University, commented that "...they get the same result that everyone else has gotten," and "that said, I think it's at least useful to see that even a critic like Muller, when he takes an honest look, finds that climate science is robust." Peter Thorne, from the Cooperative Institute for Climate and Satellites in North Carolina and chair of the International Surface Temperature Initiative, said: "This takes a very distinct approach to the problem and comes up with the same answer, and that builds confidence that pre-existing estimates are in the right ballpark. There is very substantial value in having multiple groups looking at the same problem in different ways." The ice core research scientist Eric Steig wrote at RealClimate.org that it was unsurprising that Berkeley Earth's results matched previous results so well: "Any of various simple statistical analyses of the freely available data ...show... that it was very very unlikely that the results would change".
Expanded scope
Since the publication of its papers in 2013, Berkeley Earth has broadened its scope. Berkeley Earth has three program areas of work: 1) further scientific investigations on the nature of climate change and extreme weather events, 2) an education and communications program, and 3) evaluation of mitigation efforts in developed and developing economies, with a focus on energy conservation and the use of natural gas as a bridging fuel.
July 2012 announcement
In an op-ed published in The New York Times on 28 July 2012, Muller announced further findings from the project. He said their analysis showed that average global land temperatures had increased by 2.5 °F (1.4 °C) in 250 years, with the increase in the last 50 years being 1.5 °F (0.8 °C), and that it seemed likely that this increase was entirely due to human-caused greenhouse gas emissions. His opening paragraph stated:
Call me a converted skeptic. Three years ago I identified problems in previous climate studies that, in my mind, threw doubt on the very existence of global warming. Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I'm now going a step further: Humans are almost entirely the cause.
He said that their findings were stronger than those shown in the IPCC Fourth Assessment Report. Their analysis, set out in five scientific papers now being subjected to scrutiny by others, had used statistical methods which Robert Rohde had developed and had paid particular attention to overcoming issues that skeptics had questioned, including the urban heat island effect, poor station quality, data selection and data adjustment. In the fifth paper which they now made public, they fitted the shape of the record to various forcings including volcanoes, solar activity and sunspots. They found that the shape best matched the curve of the calculated greenhouse effect from human-caused greenhouse gas emissions. Muller said he still found "that much, if not most, of what is attributed to climate change is speculative, exaggerated or just plain wrong. I've analyzed some of the most alarmist claims, and my skepticism about them hasn't changed."
See also
Novim Projects; Surface Temperature
Global warming controversy
References
External links
Berkeley Earth home page
Papers submitted for review (as of October 2011):
Berkeley Earth Temperature Averaging Process
Influence of Urban Heating on the Global Temperature Land Average
Earth Atmospheric Land Surface Temperature and Station Quality in the United States
Decadal Variations in the Global Atmospheric Land Temperatures
Tragedy of the commons
The tragedy of the commons is a metaphoric label for a concept that is widely discussed in economics, ecology and other sciences. According to the concept, should a number of people enjoy unfettered access to a finite, valuable resource such as a pasture, they will tend to over-use it, and may end up destroying its value altogether. To exercise voluntary restraint is not a rational choice for individuals – if they did, the other users would merely supplant them – yet the predictable result is a tragedy for all.
The metaphor is the title of a 1968 essay by ecologist Garrett Hardin. As another example he cited a watercourse which all are free to pollute. But the principal concern of his essay was overpopulation of the planet. To prevent the inevitable tragedy (he argued) it was necessary to reject the principle (supposedly enshrined in the Universal Declaration of Human Rights) according to which every family has a right to choose the number of its offspring, and to replace it by "mutual coercion, mutually agreed upon".
The concept itself did not originate with Hardin, but extends back to classical antiquity, being discussed by Aristotle. Some scholars have argued that over-exploitation of the common resource is by no means inevitable, since the individuals concerned may be able to achieve mutual restraint by consensus. Others have contended that the metaphor is inapposite because its exemplar (unfettered access to common land) did not exist historically, the right to exploit common land being controlled by law.
Expositions
Classical
The concept of unrestricted-access resources becoming spent, where personal use does not incur personal expense, has been discussed for millennia. Aristotle wrote that "That which is common to the greatest number gets the least amount of care. Men pay most attention to what is their own: they care less for what is common."
Lloyd's pamphlet
In 1833, the English economist William Forster Lloyd published a pamphlet which included a hypothetical example of over-use of a common resource. This was the situation of cattle herders sharing a common parcel of land on which they were each entitled to let their cows graze, as was the custom in English villages. He postulated that if a herder put more than his allotted number of cattle on the common, overgrazing could result. For each additional animal, a herder could receive additional benefits, while the whole group shared the resulting damage to the commons. If all herders made this individually rational economic decision, the common could be depleted or even destroyed, to the detriment of all.
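Lloyd's arithmetic can be made concrete with a toy payoff function. The numbers here are illustrative assumptions, not figures from Lloyd: each herder pockets the full gain from every animal he adds, but bears only his share of the damage done by the whole herd.

```python
def herder_payoff(my_cattle, total_cattle, gain_per_animal=10.0,
                  damage_per_animal=12.0, herders=10):
    """Payoff to one herder: full private gain from his own animals,
    minus a 1/herders share of the grazing damage caused by ALL cattle.
    (Hypothetical numbers: each animal is worth 10 but does 12 in damage.)"""
    private_gain = my_cattle * gain_per_animal
    shared_damage = total_cattle * damage_per_animal / herders
    return private_gain - shared_damage

# Ten herders with 2 cattle each (total 20); one of them adds an animal.
before = herder_payoff(2, 20)
after = herder_payoff(3, 21)
print(after - before)  # +8.8: the extra animal pays its owner handsomely
# ...even though each animal destroys 12 of value while creating only 10,
# so the village as a whole loses 2 per extra animal.
```

The defector's marginal payoff (10 minus a tenth of 12) stays positive, so every herder faces the same incentive, and the group outcome worsens by 2 for each animal added.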
Garrett Hardin's article
In 1968, ecologist Garrett Hardin explored this social dilemma in his article "The Tragedy of the Commons", published in the journal Science. The essay derived its title from the pamphlet by Lloyd, which he cites, on the over-grazing of common land:
Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit – in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.
Hardin discussed problems that cannot be solved by technical means, as distinct from those with solutions that require "a change only in the techniques of the natural sciences, demanding little or nothing in the way of change in human values or ideas of morality". Hardin focused on human population growth, the use of the Earth's natural resources, and the welfare state. Hardin argued that if individuals relied on themselves alone, and not on the relationship between society and man, then people would treat other people as resources, and the world population would continue to grow. Parents breeding excessively would leave fewer descendants because they would be unable to provide for each child adequately. Such negative feedback is found in the animal kingdom. Hardin said that if the children of improvident parents starved to death, if overbreeding was its own punishment, then there would be no public interest in controlling the breeding of families. Hardin blamed the welfare state for allowing the tragedy of the commons; where the state provides for children and supports overbreeding as a fundamental human right, a Malthusian catastrophe is inevitable. Consequently, in his article, Hardin lamented the following proposal from the United Nations:
The Universal Declaration of Human Rights describes the family as the natural and fundamental unit of society. [Article 16] It follows that any choice and decision with regard to the size of the family must irrevocably rest with the family itself, and cannot be made by anyone else.
In addition, Hardin also pointed out the problem of individuals acting in rational self-interest by claiming that if all members in a group used common resources for their own gain and with no regard for others, all resources would still eventually be depleted. Overall, Hardin argued against relying on conscience as a means of policing commons, suggesting that this favors selfish individuals – often known as free riders – over those who are more altruistic.
In the context of avoiding over-exploitation of common resources, Hardin concluded by restating Hegel's maxim (which was quoted by Engels), "freedom is the recognition of necessity". He suggested that "freedom" completes the tragedy of the commons. By recognizing resources as commons in the first place, and by recognizing that, as such, they require management, Hardin believed that humans "can preserve and nurture other and more precious freedoms".
The "Commons" as a modern resource concept
Hardin's article marked the mainstream acceptance of the term "commons" as used to connote a shared resource. As Frank van Laerhoven and Elinor Ostrom have stated: "Prior to the publication of Hardin’s article on the tragedy of the commons (1968), titles containing the words 'the commons', 'common pool resources,' or 'common property' were very rare in the academic literature." They go on to say: "In 2002, Barrett and Mabry conducted a major survey of biologists to determine which publications in the twentieth century had become classic books or benchmark publications in biology. They report that Hardin’s 1968 article was the one having the greatest career impact on biologists and is the most frequently cited".
System archetype
In systems theory, the commons problem is one of the ten most common system archetypes. The Tragedy of the Commons archetype can be illustrated using a causal loop diagram.
Application
Metaphoric meaning
Like Lloyd and Thomas Malthus before him, Hardin was primarily interested in the problem of human population growth. But in his essay, he also focused on the use of larger (though finite) resources such as the Earth's atmosphere and oceans, as well as pointing out the "negative commons" of pollution (i.e., instead of dealing with the deliberate privatization of a positive resource, a "negative commons" deals with the deliberate commonization of a negative cost, pollution).
As a metaphor, the tragedy of the commons should not be taken too literally. The "tragedy" is not tragedy in the word's conventional or theatrical sense, nor a condemnation of the processes that lead to it. Similarly, Hardin's use of "commons" has frequently been misunderstood, leading him to later remark that he should have titled his work "The Tragedy of the Unregulated Commons".
The metaphor illustrates the argument that free access and unrestricted demand for a finite resource ultimately reduces the resource through over-exploitation, temporarily or permanently. This occurs because the benefits of exploitation accrue to individuals or groups, each of whom is motivated to maximize use of the resource to the point at which they become reliant on it, while the costs of the exploitation are borne by all those to whom the resource is available (which may be a wider class of individuals than those who are exploiting it). This, in turn, causes demand for the resource to increase, which causes the problem to snowball until the resource collapses (even if it retains a capacity to recover). The rate at which depletion of the resource is realized depends primarily on three factors: the number of users wanting to consume the common in question, the consumptive nature of their uses, and the relative robustness of the common.
The same concept is sometimes called the "tragedy of the fishers", because catching too many fish before or during breeding could cause stocks to plummet.
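The three factors governing depletion (number of users, consumptive intensity of their uses, and the robustness of the common) can be put into a minimal simulation. This is a hypothetical sketch using logistic regrowth as a stand-in for robustness, not a model from the literature:

```python
def simulate_commons(stock=100.0, users=10, take_per_user=0.3,
                     regrowth_rate=0.15, steps=100):
    """Track a renewable common resource over time. Logistic regrowth
    stands in for the resource's robustness; total consumption is the
    number of users times how much each one takes per step."""
    capacity = stock
    for _ in range(steps):
        regrowth = regrowth_rate * stock * (1 - stock / capacity)
        consumption = min(stock, users * take_per_user)
        stock = max(0.0, stock + regrowth - consumption)
    return stock

print(simulate_commons(take_per_user=0.3))  # settles near 72: sustainable
print(simulate_commons(take_per_user=0.5))  # collapses toward zero
```

With these assumed parameters the maximum regrowth is 3.75 per step, so ten users taking 0.3 each (total 3) reach a stable equilibrium, while ten users taking 0.5 each (total 5) push the resource into collapse, even though no single user's increase looks decisive.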
Modern commons
The tragedy of the commons can be considered in relation to environmental issues such as sustainability. The commons dilemma stands as a model for a great variety of resource problems in society today, such as water, forests, fish, and non-renewable energy sources such as oil, gas, and coal.
Situations exemplifying the "tragedy of the commons" include the overfishing and destruction of the Grand Banks of Newfoundland, the destruction of salmon runs on rivers that have been dammed (most prominently in modern times on the Columbia River in the Northwest United States and historically in North Atlantic rivers), and the devastation of the sturgeon fishery (in modern Russia, but historically in the United States as well). In terms of water supply, another example is the limited water available in arid regions (e.g., the area of the Aral Sea and the Los Angeles water system supply, especially at Mono Lake and Owens Lake).
It has been argued that higher sickness and mortality rates from COVID-19 in individualistic cultures with less obligatory collectivism are another instance.
In economics, an externality is a cost or benefit that affects a party who did not choose to incur that cost or benefit. Negative externalities are a well-known feature of the "tragedy of the commons". For example, driving cars has many negative externalities; these include pollution, carbon emissions, and traffic accidents. Every time Person A gets in a car, it becomes more likely that Person Z (and millions of others) will suffer in each of those areas. Economists often urge the government to adopt policies that "internalize" an externality.
The tragedy of the commons can also refer to the idea of open data. Anonymised data are crucial for useful social research and therefore represent a public resource – or rather, a common good – which is liable to exhaustion. Some feel that the law should provide a safe haven for the dissemination of research data, since it can be argued that current data protection policies overburden valuable research without mitigating realistic risks.
An expansive application of the concept can also be seen in Vyse's analysis of differences between countries in their responses to the COVID-19 pandemic. Vyse argues that those who defy public health recommendations can be thought of as spoiling a set of common goods: "the economy, the healthcare system, and the very air we breathe, for all of us."
Tragedy of the digital commons
In the past two decades, scholars have attempted to apply the concept of the tragedy of the commons to the digital environment. However, scholars differ on some very basic notions inherent to the tragedy of the commons: the idea of finite resources and the extent of pollution. On the other hand, there seems to be some agreement on the role of the digital divide and on how to solve a potential tragedy of the digital commons.
Resources and pollution
In terms of resources, there is no coherent conception of whether digital resources are finite. Some scholars argue that digital resources are infinite because downloading a file does not destroy the file in the digital environment; digital resources are merely replicated and disseminated throughout the digital environment, and as such can be understood as infinite. Others argue that data, for example, is a finite resource because privacy laws and regulations put a significant strain on access to data.
Finite digital resources include databases that require persistent maintenance, an example being Wikipedia. As a non-profit, it survives on a network of people contributing to maintain a knowledge base without expectation of compensation. This digital resource will deplete, as Wikipedia may only survive if it is contributed to and used as a commons. The motivation for individuals to contribute reflects the theory because, if humans act in their own interest and no longer participate, the resource becomes misinformed or depleted. Arguments surrounding the regulation and mitigation requirements for digital resources may come to mirror those for natural resources.
This raises the question whether one can view access itself as a finite resource in the context of a digital environment. Some scholars argue this point, often pointing to a proxy for access that is more concrete and measurable. One such proxy is bandwidth, which can become congested when too many people try to access the digital environment. Alternatively, one can think of the network itself as a common resource which can be exhausted through overuse. Therefore, when talking about resources running out in a digital environment, it could be more useful to think in terms of the access to the digital environment being restricted in some way; this is called information entropy.
In terms of pollution, there are some scholars who look only at the pollution that occurs in the digital environment itself. They argue that unrestricted use of digital resources can cause an overproduction of redundant data which causes noise and corrupts communication channels within the digital environment. Others argue that the pollution caused by the overuse of digital resources also causes pollution in the physical environment. They argue that unrestricted use of digital resources causes misinformation, fake news, crime, and terrorism, as well as problems of a different nature such as confusion, manipulation, insecurity, and loss of confidence.
Digital divide and solutions
Scholars disagree on the particularities underlying the tragedy of the digital commons; however, there does seem to be some agreement on the cause and the solution. The cause of the tragedy of the commons occurring in the digital environment is attributed by some scholars to the digital divide. They argue that there is too large a focus on bridging this divide and providing unrestricted access to everyone. Such a focus on increasing access without the necessary restrictions causes the exploitation of digital resources for individual self-interest that underlies any tragedy of the commons.
In terms of the solution, scholars agree that cooperation rather than regulation is the best way to mitigate a tragedy of the digital commons. The digital world is not a closed system in which a central authority can regulate the users; as such, some scholars argue that voluntary cooperation must be fostered. This could perhaps be done through a digital governance structure that motivates multiple stakeholders to engage and collaborate in the decision-making process. Other scholars argue more in favor of formal or informal sets of rules, like a code of conduct, to promote ethical behavior in the digital environment and foster trust. As an alternative to managing relations between people, some scholars argue that it is access itself that needs to be properly managed, which includes expansion of network capacity.
Patents and technology
Patents are effectively a limited-time exploitation monopoly granted to inventors in exchange for disclosing a novel invention. Once the period has elapsed, the invention is in principle free to all, and many companies do indeed commercialize such products, now market-proven. However, around 50% of all patent applications do not reach successful commercialization at all, often due to immature levels of components or marketing failures by the innovators. Scholars have suggested that since investment is often connected to patentability, such inactive patents form a rapidly growing category of underprivileged technologies and ideas that, under current market conditions, are effectively unavailable for use. The case might be particularly relevant to technologies that are relatively more damaging to the environment or to human health, but also somewhat costlier than other alternatives developed contemporaneously.
Examples
More general examples (some alluded to by Hardin) of potential and actual tragedies include:
Physical resources
Uncontrolled human population growth leading to overpopulation.
Atmosphere: through the release of pollution that leads to ozone depletion, global warming, ocean acidification (by way of increased atmospheric CO2 being absorbed by the sea), and particulate pollution
Light pollution: with the loss of the night sky for research and cultural significance, affected human, flora and fauna health, nuisance, trespass and the loss of enjoyment or function of private property.
Water: Water pollution, water crisis of over-extraction of groundwater and wasting water due to overirrigation
Forests: Frontier logging of old growth forest and slash and burn
Energy resources and climate: Environmental residue of mining and drilling, burning of fossil fuels and consequential global warming
Animals: Habitat destruction and poaching leading to the Holocene mass extinction
Oceans: Overfishing
Space debris in Earth's surrounding space leading to limited locations for new satellites and the obstruction of universal observations.
Human health
In many African and Asian countries, patriarchal culture creates a preference for sons that causes some people to abort foetal girls. This results in an imbalanced sex ratio in these countries to the extent that they have significantly more males than females, even though the natural male to female ratio is about 1.04 to 1.
Antibiotics – Antibiotic resistance: Misuse of antibiotics anywhere in the world will eventually result in antibiotic resistance developing at an accelerated rate. The resulting antibiotic resistance has spread (and will likely continue to do so in the future) to other bacteria and other regions, hurting or destroying the antibiotic commons that is shared on a worldwide basis.
Vaccines – Herd immunity: Avoiding a vaccine shot and relying on the established herd immunity instead will avoid potential vaccine risks, but if everyone does this, it will diminish herd immunity and bring risk to people who cannot receive vaccines for medical reasons.
Other
Knowledge commons encompass immaterial and collectively owned goods in the information age, including, for example:
Source code and software documentation in software projects that can get "polluted" with messy code or inaccurate information.
Skills acquisition and training, when all parties involved pass the buck on implementing it.
Application to evolutionary biology
A parallel was drawn in 2006 between the tragedy of the commons and the competing behaviour of parasites that, through acting selfishly, eventually diminish or destroy their common host. The idea has also been applied to areas such as the evolution of virulence or sexual conflict, where males may fatally harm females when competing for matings.The idea of evolutionary suicide, where adaptation at the level of the individual causes the whole species or population to be driven extinct, can be seen as an extreme form of an evolutionary tragedy of the commons. From an evolutionary point of view, the creation of the tragedy of the commons in pathogenic microbes may provide us with advanced therapeutic methods.Microbial ecology studies have also addressed if resource availability modulates the cooperative or competitive behaviour in bacteria populations. When resources availability is high, bacterial populations become competitive and aggressive with each other, but when environmental resources are low, they tend to be cooperative and mutualistic.Ecological studies have hypothesised that competitive forces between animals are major in high carrying capacity zones (i.e., near the Equator), where biodiversity is higher, because of natural resources abundance. This abundance or excess of resources, causes animal populations to have "r" reproduction strategies (many offspring, short gestation, less parental care, and a short time until sexual maturity), so competition is affordable for populations. Also, competition could select populations to have "r" behaviour in a positive feedback regulation.Contrary, in low carrying capacity zones (i.e., far from the equator), where environmental conditions are harsh K strategies are common (longer life expectancy, produce relatively fewer offspring and tend to be altricial, requiring extensive care by parents when young) and populations tend to have cooperative or mutualistic behaviors. 
If populations behave competitively under hostile environmental conditions, they are mostly filtered out (die) by environmental selection; hence, populations in hostile conditions are selected to be cooperative.
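The carrying-capacity contrast above can be sketched with the standard logistic growth model. The parameters below (growth rate r, capacity K, starting population) are illustrative assumptions, not values from any study:

```python
# Discrete logistic growth: dN = r * N * (1 - N/K). Far below the
# carrying capacity K, growth is nearly exponential (conditions that
# favour "r" strategies); near K, competition for scarce resources
# dominates and growth stalls (conditions that favour "K" strategies).

def logistic_step(n: float, r: float = 0.5, k: float = 1000.0) -> float:
    """Advance the population by one time step."""
    return n + r * n * (1.0 - n / k)

n = 10.0
for _ in range(50):
    n = logistic_step(n)
print(round(n))  # the population has settled at the carrying capacity, 1000
```

The fixed point n = K is where density-dependent competition exactly cancels reproduction; the model is only a caricature of the ecological argument, but it makes the r-versus-K distinction concrete.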
Climate change
The effects of climate change have been cited as a large-scale example of the tragedy of the commons. On this view, the earth, being the commons, has suffered a depletion of natural resources without regard to the externalities: the impact on neighbouring and future populations. The collective actions of individuals, organisations, and governments continue to contribute to environmental degradation. Mitigating the long-term impacts and avoiding tipping points would require strict controls or other solutions, which may come at a loss to different industries. The sustainability of population and industry growth is the subject of climate change discussion. It has been theorised that the global commons of environmental resource consumption, as in the fossil fuel industry, is not realistically manageable, because irreversible thresholds of impact are crossed before the costs are entirely realised.
Commons dilemma
The commons dilemma is a specific class of social dilemma in which people's short-term selfish interests are at odds with long-term group interests and the common good. In academia, a range of related terminology has also been used as shorthand for the theory or aspects of it, including resource dilemma, take-some dilemma, and common pool resource.
Commons dilemma researchers have studied conditions under which groups and communities are likely to under- or over-harvest common resources in both the laboratory and field. Research programs have concentrated on a number of motivational, strategic, and structural factors that might be conducive to management of commons.
In game theory, which constructs mathematical models for individuals' behavior in strategic situations, the corresponding "game", developed by Hardin, is known as the Commonize Costs – Privatize Profits Game (CC–PP game).
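The incentive structure behind the CC–PP game can be sketched numerically. This is a minimal illustration with hypothetical numbers (the per-animal gain of 1 and the shared cost are assumptions, not Hardin's figures): each herder captures the full private gain from adding an animal while bearing only a fraction of the grazing cost.

```python
# Hypothetical payoffs for the herder example: adding one animal yields
# its owner a private gain of 1, while the grazing cost it imposes is
# shared equally among all n herders ("commonize costs, privatize profits").

def private_net_gain(n_herders: int, total_cost: float) -> float:
    """Net gain to the herder who adds one more animal."""
    return 1.0 - total_cost / n_herders

def collective_net_gain(total_cost: float) -> float:
    """Net gain to the group as a whole from that same animal."""
    return 1.0 - total_cost

n, c = 10, 5.0  # 10 herders; each extra animal costs the pasture 5 in total
print(private_net_gain(n, c))   # 0.5  -> each herder rationally adds animals
print(collective_net_gain(c))   # -4.0 -> yet every addition makes the group worse off
```

As long as the shared cost per herder (c/n) stays below the private gain, each individual's dominant strategy is to keep adding animals, even though the sum of payoffs falls with every addition.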
Psychological factors
Kopelman, Weber, & Messick (2002), in a review of the experimental research on cooperation in commons dilemmas, identify nine classes of independent variables that influence cooperation in commons dilemmas: social motives, gender, payoff structure, uncertainty, power and status, group size, communication, causes, and frames. They organize these classes and distinguish between psychological individual differences (stable personality traits) and situational factors (the environment). Situational factors include both the task (social and decision structure) and the perception of the task.
Empirical findings support the theoretical argument that the cultural group is a critical factor that needs to be studied in the context of situational variables. Rather than behaving in line with economic incentives, people are likely to approach the decision to cooperate with an appropriateness framework. An expanded, four-factor model of the logic of appropriateness suggests that cooperation is better explained by the question: "What does a person like me (identity) do (rules) in a situation like this (recognition) given this culture (group)?"
Strategic factors
Strategic factors also matter in commons dilemmas. One often-studied strategic factor is the order in which people take harvests from the resource. In simultaneous play, all people harvest at the same time, whereas in sequential play people harvest from the pool according to a predetermined sequence – first, second, third, etc. There is a clear order effect in the latter games: the harvests of those who come first – the leaders – are higher than the harvests of those coming later – the followers. The interpretation of this effect is that the first players feel entitled to take more. With sequential play, individuals adopt a first-come, first-served rule, whereas with simultaneous play people may adopt an equality rule. Another strategic factor is the ability to build up reputations. Research found that people take less from the common pool in public situations than in anonymous private situations. Moreover, those who harvest less gain greater prestige and influence within their group.
Structural factors
Hardin stated in his analysis of the tragedy of the commons that "Freedom in a commons brings ruin to all." One of the proposed solutions is to appoint a leader to regulate access to the common. Groups are more likely to endorse a leader when a common resource is being depleted and when managing a common resource is perceived as a difficult task. Groups prefer leaders who are elected, democratic, and prototypical of the group, and these leader types are more successful in enforcing cooperation. A general aversion to autocratic leadership exists, although it may be an effective solution, possibly because of the fear of power abuse and corruption.
The provision of rewards and punishments may also be effective in preserving common resources. Selective punishments for overuse can be effective in promoting domestic water and energy conservation – for example, through installing water and electricity meters in houses. Selective rewards work, provided that they are open to everyone. An experimental carpool lane in the Netherlands failed because car commuters did not feel they were able to organize a carpool. The rewards do not have to be tangible. In Canada, utilities considered putting "smiley faces" on electricity bills of customers below the average consumption of that customer's neighborhood.
Solutions
Articulating solutions to the tragedy of the commons is one of the main problems of political philosophy. In some situations, locals implement (often complex) social schemes that work well. When these fail, there are many possible governmental solutions such as privatization, internalizing the externalities, and regulation.
Non-governmental solution
Robert Axelrod contends that even self-interested individuals will often find ways to cooperate, because collective restraint serves both the collective and individual interests. Anthropologist G. N. Appell criticized those who cited Hardin to "impos[e] their own economic and environmental rationality on other social systems of which they have incomplete understanding and knowledge."
Political scientist Elinor Ostrom, who was awarded the 2009 Nobel Memorial Prize in Economic Sciences for her work on the issue, and others revisited Hardin's work in 1999. They found the tragedy of the commons not as prevalent or as difficult to solve as Hardin maintained, since locals have often come up with solutions to the commons problem themselves. For example, another group found that a commons in the Swiss Alps has been run by a collective of farmers there to their mutual and individual benefit since 1517, in spite of the farmers also having access to their own farmland. In general, it is in the interest of the users of a commons to keep them functioning, and so complex social schemes are often invented by the users for maintaining them at optimum efficiency.
Another prominent example is the deliberative process of granting legal personhood to a part of nature, for example a river, with the aim of preserving its water resources and preventing environmental degradation. This process entails that a river is regarded as its own legal entity that can sue against environmental damage done to it while being represented by an independently appointed guardian advisory group. This has happened as a bottom-up process in New Zealand, where debates initiated by the Whanganui iwi have resulted in legal personhood for the river. The river is considered a living whole, stretching from mountain to sea, and includes not only its physical but also its metaphysical elements.
Similarly, geographer Douglas L.
Johnson remarks that many nomadic pastoralist societies of Africa and the Middle East in fact "balanced local stocking ratios against seasonal rangeland conditions in ways that were ecologically sound", reflecting a desire for lower risk rather than higher profit; in spite of this, it was often the case that "the nomad was blamed for problems that were not of his own making and were a product of alien forces." Independently finding precedent in the opinions of previous scholars such as Ibn Khaldun, as well as common currency in antagonistic cultural attitudes towards non-sedentary peoples, governments and international organizations have made use of Hardin's work to help justify restrictions on land access and the eventual sedentarization of pastoral nomads, despite its weak empirical basis. Examining relations between historically nomadic Bedouin Arabs and the Syrian state in the 20th century, Dawn Chatty notes that "Hardin's argument was curiously accepted as the fundamental explanation for the degradation of the steppe land" in development schemes for the arid interior of the country, downplaying the larger role of agricultural overexploitation in desertification as it melded with prevailing nationalist ideology which viewed nomads as socially backward and economically harmful.
Elinor Ostrom and her colleagues looked at how real-world communities manage communal resources, such as fisheries, land-irrigation systems, and farmlands, and they identified a number of factors conducive to successful resource management. One factor is the resource itself; resources with definable boundaries (e.g., land) can be preserved much more easily. A second factor is resource dependence; there must be a perceptible threat of resource depletion, and it must be difficult to find substitutes. The third is the presence of a community; small and stable populations with a thick social network and social norms promoting conservation do better.
A final condition is that there be appropriate community-based rules and procedures in place, with built-in incentives for responsible use and punishments for overuse. When the commons is taken over by non-locals, those solutions can no longer be used.
Many of the economic and social structures recommended by Ostrom coincide with those recommended by anarchists, particularly green anarchism. The largest contemporary societies that use these organizational strategies are the Rebel Zapatista Autonomous Municipalities and the Autonomous Administration of North and East Syria, which have been heavily influenced by anarchism and other versions of libertarian and ecological socialism.
Individuals may act in a deliberate way to avoid consumption habits that deplete natural resources. This consciousness promotes the boycotting of products or brands and seeking alternative, more sustainable options.
Altruistic punishment
Various well-established theories, such as the theory of kin selection and direct reciprocity, have limitations in explaining patterns of cooperation emerging between unrelated individuals and in non-repeatable short-term interactions. Studies have shown that punishment is an efficacious motivator for cooperation among humans.
Altruistic punishment entails the presence of individuals who punish defectors from a cooperative agreement, although doing so is costly and provides no material gain. These punishments effectively resolve tragedy of the commons scenarios by addressing both first-order free rider problems (i.e., defectors free riding on cooperators) and second-order free rider problems (i.e., cooperators free riding on the work of punishers). Such results are observed only when punishment levels are high enough.
While defectors are motivated by self-interest and cooperators feel morally obliged to practice self-restraint, punishers act out of annoyance and anger at free riders.
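Both effects described above can be seen in a one-shot public-goods game with punishment; all parameters below (endowment, multiplier, fine, punishment cost, and group composition) are illustrative assumptions. A sufficiently high fine makes defection pay less than cooperation, while punishers still earn less than ordinary cooperators, which is the second-order free rider problem.

```python
# One-shot public-goods game with altruistic punishment (assumed numbers).
# Contributors pay in their endowment; the pot is multiplied by r and
# split among everyone. Each punisher then pays `cost` per defector to
# impose `fine` on every defector.

def payoffs(n_coop, n_punish, n_defect, endow=10.0, r=1.6, fine=4.0, cost=1.0):
    n = n_coop + n_punish + n_defect
    share = (n_coop + n_punish) * endow * r / n
    cooperator = share                          # gave up endowment, gets share
    punisher = share - cost * n_defect          # also pays to punish
    defector = endow + share - fine * n_punish  # keeps endowment, but is fined
    return cooperator, punisher, defector

c, p, d = payoffs(n_coop=4, n_punish=3, n_defect=3)
print(round(c, 1), round(p, 1), round(d, 1))  # 11.2 8.2 9.2
```

With three punishers, a defector's fines (3 × 4 = 12) outweigh the kept endowment of 10, so defecting pays less than cooperating; yet the punishers themselves end up below the cooperators, illustrating why punishment is "altruistic".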
Governmental solutions
Governmental solutions are used when the above conditions are not met (such as a community being larger than the cohesion of its social network). Examples of government regulation include privatization, regulation, and internalizing the externalities.
Privatization
One solution for some resources is to convert a common good into private property (Coase 1960), giving the new owner an incentive to enforce its sustainability. Libertarians and classical liberals cite the tragedy of the commons as an example of what happens when Lockean property rights to homestead resources are prohibited by a government. They argue that the solution to the tragedy of the commons is to allow individuals to take over the property rights of a resource, that is, to privatize it.
In England, this solution was attempted in the Inclosure Acts. According to Karl Marx in Das Kapital, this solution leads to increasing numbers of people being pushed into smaller and smaller pockets of common land that has yet to be privatised, thereby merely displacing and exacerbating the problem while putting an increasing number of people in precarious situations. Economic historian Bob Allen coined the term "Engels' pause" to describe the period from 1790 to 1840, when British working-class wages stagnated while per-capita gross domestic product expanded rapidly during a technological upheaval.
Regulation
In a typical example, governmental regulations can limit the amount of a common good that is available for use by any individual. Permit systems for extractive economic activities including mining, fishing, hunting, livestock raising, and timber extraction are examples of this approach. Similarly, limits to pollution are examples of governmental intervention on behalf of the commons. This idea is used by the United Nations Moon Treaty, Outer Space Treaty, and Law of the Sea Treaty, as well as the UNESCO World Heritage Convention, which involve the international law principle that designates some areas or resources the Common Heritage of Mankind.
In his essay, Hardin proposed that the solution to the problem of overpopulation must be based on "mutual coercion, mutually agreed upon" and result in "relinquishing the freedom to breed". Hardin discussed this topic further in a 1979 book, Managing the Commons, co-written with John A. Baden. He framed this prescription in terms of needing to restrict the "reproductive right" in order to safeguard all other rights. Several countries have a variety of population control laws in place.
German historian Joachim Radkau thought that Hardin advocated strict management of common goods via increased government involvement or international regulation bodies. An asserted impending "tragedy of the commons" is frequently warned of as a consequence of the adoption of policies which restrict private property and espouse expansion of public property.
Given the current system of rule of law, the solution of giving a legal right to nature at large (from object to subject) could be a game changer. The idea of giving land a legal personality is intended to enable the democratic system of the rule of law to allow for prosecution, sanction, and reparation for damage to the earth. This legal development is not new; it has already been put into practice in Ecuador in the form of a constitutional principle known as "Pacha Mama" (Mother Earth).
Internalizing externalities
Privatization works when the person who owns the property (or the rights of access to that property) pays the full price of its exploitation. As discussed above, negative externalities (negative results, such as air or water pollution, that do not proportionately affect the user of the resource) are often a feature driving the tragedy of the commons. Internalizing the externalities – in other words, ensuring that the users of a resource pay for all of the consequences of its use – can provide an alternate solution between privatization and regulation. One example is gasoline taxes, which are intended to include both the cost of road maintenance and of air pollution. This solution can provide the flexibility of privatization while minimizing the amount of government oversight and overhead that is needed.
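A stylized numerical sketch of how such a tax works (the linear demand curve and cost figures below are invented for illustration): consumers buy additional units while the marginal benefit exceeds the price they face, so adding the external damage to the price shifts consumption to the level that reflects the full social cost.

```python
# Hypothetical linear demand: the (q+1)-th unit of fuel is worth
# 10 - (q+1) to its buyer. Consumers keep buying while that marginal
# benefit still covers the price they face.

def units_consumed(price: float) -> int:
    q = 0
    while 10 - (q + 1) >= price:  # marginal benefit of the next unit
        q += 1
    return q

private_cost = 4.0     # producer's marginal cost per unit
external_damage = 3.0  # pollution / road-wear cost borne by others, per unit

untaxed = units_consumed(private_cost)                  # price ignores the damage
taxed = units_consumed(private_cost + external_damage)  # tax internalizes it
print(untaxed, taxed)  # 6 3: the tax cuts use to the socially efficient level
```

Without the tax, units 4 through 6 are bought even though their benefit (6, 5, 4) is below the full social cost of 7 per unit; the tax removes exactly those units while leaving the worthwhile ones.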
The mid-way solution
One significant potential solution is co-shared communities, with partial ownership held by the government and partial ownership held by the community. Ownership here refers to the planning, sharing, use, benefit, and supervision of the resources, ensuring that power is not concentrated in only one or two hands. Since the involvement of multiple stakeholders is necessary, responsibilities can be shared across them based on their abilities and capacities in terms of human resources, infrastructure-development ability, legal expertise, and so on.
Criticism
Hardin's work is criticised as historically inaccurate in failing to account for the demographic transition, and for failing to distinguish between common property and open-access resources. In a similar vein, Carl Dahlman argues that commons were effectively managed to prevent overgrazing. Likewise, Susan Jane Buck Cox argues that the common land example used to argue this economic concept is on very weak historical ground, and misrepresents what she terms the "triumph of the commons": the successful common usage of land for many centuries. She argues that social changes and agricultural innovation, not the behaviour of the commoners, led to the demise of the commons.
Radical environmentalist Derrick Jensen claims the tragedy of the commons is used as propaganda for private ownership. He says it has been used by the political right wing to hasten the final enclosure of the "common resources" of third world and indigenous people worldwide, as a part of the Washington Consensus. He argues that in real situations, those who abuse the commons would have been warned to desist, and if they failed would have had punitive sanctions taken against them. He says that rather than being called "The Tragedy of the Commons", it should be called "the Tragedy of the Failure of the Commons".
Marxist geographer David Harvey has a similar criticism, noting that "The dispossession of indigenous populations in North America by 'productive' colonists, for instance, was justified because indigenous populations did not produce value", and asks generally: "Why, for instance, do we not focus in Hardin's metaphor on the individual ownership of the cattle rather than on the pasture as a common?"
Some authors, like Yochai Benkler, say that with the rise of the Internet and digitalisation, an economic system based on commons becomes possible again.
He wrote in his book The Wealth of Networks in 2006 that cheap computing power plus networks enable people to produce valuable products through non-commercial processes of interaction: "as human beings and as social beings, rather than as market actors through the price system". He uses the term networked information economy to refer to a "system of production, distribution, and consumption of information goods characterized by decentralized individual action carried out through widely distributed, nonmarket means that do not depend on market strategies." He also coined the term commons-based peer production for collaborative efforts based on sharing information. Examples of commons-based peer production are Wikipedia, free and open source software, and open-source hardware.
The tragedy of the commons has also served as a pretext for powerful private companies and governments to impose regulatory agents or outsourcing on less powerful entities or governments in order to exploit their natural resources. Powerful companies and governments can easily corrupt and bribe less powerful institutions or governments to allow them to exploit or privatize their resources, which causes a further concentration of power and wealth in powerful entities. This phenomenon is known as the resource curse.
Comedy of the commons
In certain cases, exploiting a resource more may be a good thing. Carol M. Rose, in a 1986 article, discussed the concept of the "comedy of the commons", where the public property in question exhibits "increasing returns to scale" in usage (hence the phrase, "the more the merrier"): the more people use the resource, the higher the benefit to each one. Rose cites as examples commerce and group recreational activities. According to Rose, public resources with this "comedic" characteristic may suffer from under-investment rather than overuse.
A modern example, presented by Garrett Richards in environmental studies, is that the issue of excessive carbon emissions can be tackled effectively only through efforts that directly address it, combined with collective efforts from the world's economies. Additionally, the more that nations are willing to collaborate and contribute resources, the higher the chances are for successful technological developments.
See also
Related concepts
Enclosure – In England, appropriation of common land, depriving commoners of their ancient rights
References
Notes
Bibliography
Angus, I. (2008). "The myth of the tragedy of the commons", Climate & Capitalism (August 25).
Chatty, Dawn (2010). "The Bedouin in Contemporary Syria: The Persistence of Tribal Authority and Control". Middle East Journal. 64 (1): 29–69. doi:10.3751/64.1.12. S2CID 143487962.
Cox, Susan Jane Buck (1985). "No Tragedy on the Commons" (PDF). Environmental Ethics. 7 (1): 49–61. doi:10.5840/enviroethics1985716. hdl:10535/3113.
Dixit, Avinash K.; Nalebuff, Barry J. (1993). Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life. W. W. Norton & Company. ISBN 978-0-393-06979-2.
Gonner, E. C. K (1912). Common Land and Inclosure. London: Macmillan & Co.
Foddy, M., Smithson, M., Schneider, S., and Hogg, M. (1999). Resolving social dilemmas. Philadelphia, PA: Psychology Press.
Frischmann, Brett M.; Marciano, Alain; Ramello, Giovanni Battista (2019). "Retrospectives: Tragedy of the Commons after 50 Years". Journal of Economic Perspectives. 33 (4): 211–228. doi:10.1257/jep.33.4.211.
Hardin, Garrett (1968). "The Tragedy of the Commons". Science. 162 (3859): 1243–1248. Bibcode:1968Sci...162.1243H. doi:10.1126/science.162.3859.1243. PMID 5699198. S2CID 8757756.
Hardin, G. (1994). "The Tragedy of the Unmanaged Commons". Trends in Ecology & Evolution. 9 (5): 199. doi:10.1016/0169-5347(94)90097-3. ISBN 978-0-202-36597-8. PMID 21236819.
Hardin, Garrett (May 1, 1998). "Extensions of "The Tragedy of the Commons"". Science. 280 (5364): 682–683. doi:10.1126/science.280.5364.682. hdl:10535/3915. S2CID 153844385.
Hardin, Garrett (2008). "Tragedy of the Commons". In David R. Henderson (ed.). Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0-86597-665-8. OCLC 237794267. Retrieved 2016-03-13.
Johnson, Douglas L. (1993). "Nomadism and Desertification in Africa and the Middle East". GeoJournal. 31 (1): 51–66. doi:10.1007/bf00815903. S2CID 153445920.
Jones, Bryan; Rachlin, Howard (2006). "Social Discounting" (PDF). Psychological Science. 17 (4): 283–286. doi:10.1111/j.1467-9280.2006.01699.x. PMID 16623683. S2CID 6641888.
Kopelman, S.; Weber, M; Messick, D. (2002). "Factors Influencing Cooperation in Commons Dilemmas: A Review of Experimental Psychological Research". In Ostrom, E.; et al. (eds.). The Drama of the Commons. Washington, D.C.: National Academy Press. Ch. 4., 113–156. doi:10.17226/10287. ISBN 978-0-309-08250-1. S2CID 153794284.
Kopelman, S (2009). "The effect of culture and power on cooperation in commons dilemmas: Implications for global resource management" (PDF). Organizational Behavior and Human Decision Processes. 108 (1): 153–163. doi:10.1016/j.obhdp.2008.06.004. hdl:2027.42/50454.
Locher, Fabien (2013). "Cold War Pastures: Garrett Hardin and the 'Tragedy of the Commons'" (PDF). Revue d'Histoire Moderne et Contemporaine. 60 (1): 7–36. doi:10.3917/rhmc.601.0007.
Messick, D. M.; Wilke, H. A. M.; Brewer, M. B.; Kramer, R. M.; Zemke, P. E.; Lui, L. (1983). "Individual adaptations and structural change as solutions to social dilemmas". Journal of Personality and Social Psychology. 44 (294): 309. doi:10.1037/0022-3514.44.2.294.
Ostrom, Elinor (1990). Governing the commons: The evolution of institutions for collective action. Cambridge: Cambridge University Press. ISBN 0-521-40599-8.
Ostrom, Elinor (24 July 2009). "A General Framework for Analyzing Sustainability of Social-Ecological Systems". Science. 325 (5939): 419–422. Bibcode:2009Sci...325..419O. CiteSeerX 10.1.1.364.7681. doi:10.1126/science.1172133. PMID 19628857. S2CID 39710673.
Rachlin, H.; Green, L. (1972). "Commitment, choice, and self-control". Journal of the Experimental Analysis of Behavior. 17 (1): 15–22. doi:10.1901/jeab.1972.17-15. PMC 1333886. PMID 16811561.
Rachlin, Howard (1974). "Self-Control". Behaviorism. 2 (1): 94–107. JSTOR 27758811.
Van Vugt, M.; Van Lange, P. A. M.; Meertens, R. M.; Joireman, J. A. (1996). "How a Structural Solution to a Real-World Social Dilemma Failed: A Field Experiment on the First Carpool Lane in Europe" (PDF). Social Psychology Quarterly. 59 (4): 364–374. CiteSeerX 10.1.1.318.656. doi:10.2307/2787077. JSTOR 2787077. Archived from the original (PDF) on 2017-08-09.
Van Vugt, Mark (2001). "Community Identification Moderating the Impact of Financial Incentives in a Natural Social Dilemma: Water Conservation" (PDF). Personality and Social Psychology Bulletin. 27 (11): 1440–1449. doi:10.1177/01461672012711005. S2CID 220678593.
Van Vugt, Mark (2009). "Triumph of the commons" (PDF). New Scientist. 203 (2722): 40–43. doi:10.1016/S0262-4079(09)62221-1.
Weber, M.; Kopelman, S.; Messick, D. (2004). "A conceptual review of decision making in social dilemmas: applying the logic of appropriateness". Personality and Social Psychology Review. 8 (3): 281–307. doi:10.1207/s15327957pspr0803_4. PMID 15454350. S2CID 1525372.
External links
The Digital Library of the Commons
The Myth of the Tragedy of the Commons by Ian Angus
"Global Tragedy of the Commons" by John Hickman and Sarah Bartlett
"Tragedy of the Commons Explained with Smurfs" by Ryan Somma
Public vs. Private Goods & Tragedy of the Commons
On averting the Tragedy of the Commons
deforestation of the amazon rainforest | The Amazon rainforest, spanning an area of 3,000,000 km2 (1,200,000 sq mi), is the world's largest rainforest. It encompasses the largest and most biodiverse tropical rainforest on the planet, representing over half of all rainforests. The Amazon region includes the territories of nine nations, with Brazil containing the majority (60%), followed by Peru (13%), Colombia (10%), and smaller portions in Venezuela, Ecuador, Bolivia, Guyana, Suriname, and French Guiana.
Over one-third of the Amazon rainforest is designated as formally acknowledged indigenous territory, comprising more than 3,344 territories. Historically, indigenous Amazonian peoples have relied on the forest for various needs such as food, shelter, water, fiber, fuel, and medicines. The forest holds significant cultural and cosmological importance for them. Despite external pressures, deforestation rates are comparatively lower in indigenous territories.
By 2022, around 26% of the forest was considered deforested or highly degraded.
Cattle ranching in the Brazilian Amazon has been identified as the primary cause of deforestation, accounting for about 80% of all deforestation in the region. This makes it the world's largest single driver of deforestation, contributing to approximately 14% of global annual deforestation. Government tax revenue has subsidized much of the agricultural activity leading to deforestation. By 1995, 70% of previously forested land in the Amazon, and 91% of land deforested since 1970, had been converted for cattle ranching. The remaining deforestation primarily results from small-scale subsistence agriculture and mechanized cropland producing crops such as soy and palm.
Satellite data from 2018 revealed a decade-high rate of deforestation in the Amazon, with approximately 7,900 km2 (3,100 sq mi) destroyed between August 2017 and July 2018. The states of Mato Grosso and Pará experienced the highest levels of deforestation during this period. Illegal logging was cited as a cause by the Brazilian environment minister, while critics highlighted the expansion of agriculture as a factor encroaching on the rainforest. Researchers warn that the forest may reach a tipping point at which it can no longer generate sufficient rainfall to sustain itself. In the first nine months of 2023, the deforestation rate declined by 49.5%, attributed to the policies of Lula's government and international assistance.
History
In the pre-Columbian era, certain parts of the Amazon rainforest were densely populated and cultivated. However, European colonization in the 16th century, driven by the pursuit of gold and later by the rubber boom, depopulated the region through disease and slavery, leading to forest regrowth.
Until the 1970s, access to the largely roadless interior of the forest was challenging, and it remained mostly intact apart from partial clearing along the rivers. Deforestation escalated after the construction of highways penetrating deep into the forest, such as the Trans-Amazonian Highway in 1972.
Challenges arose in parts of the Amazon where poor soil conditions made plantation-based agriculture unprofitable. The crucial turning point in deforestation occurred when colonists began establishing farms within the forest during the 1960s. Their farming practices relied on crop cultivation and the slash-and-burn method. However, due to soil fertility loss and weed invasion, the colonists struggled to effectively manage their fields and crops.
Indigenous areas in the Peruvian Amazon, like the Urarina's Chambira River Basin, experience limited soil productivity, leading to the continual clearing of new lands by indigenous horticulturalists. Cattle raising dominated Amazonian colonization because it required less labor, generated acceptable profits, and involved land under state ownership. While promoted as a reforestation measure, the privatization of land was criticized for potentially encouraging further deforestation and disregarding the rights of Peru's indigenous people, who typically lack formal title to land. The associated law, known as Law 840, faced significant resistance and was eventually repealed as unconstitutional.
Illegal deforestation in the Amazon increased in 2015 after decades of decline, driven primarily by consumer demand for products like palm oil. Brazilian farmers clear land to accommodate the growing demand for crops such as palm oil and soy. Deforestation releases significant amounts of carbon, and if current levels continue, the remaining forests worldwide could disappear within 100 years. The Brazilian government implemented the REDD (reducing emissions from deforestation and forest degradation) program to combat deforestation, providing support to various African countries through education programs and financial contributions.
In January 2019, Brazil's president, Jair Bolsonaro, issued an executive order granting the agriculture ministry oversight over certain Amazon lands.
This decision has been supported by cattle ranchers and mining companies but criticized for endangering indigenous populations and increasing Brazil's contribution to global climate change.
Reports from the year 2021 indicated a 22% increase in deforestation from the previous year, reaching the highest level since 2006.
Causes of deforestation
The deforestation of the Amazon rainforest is influenced by various factors at local, national, and international levels. The rainforest is sought after for purposes such as cattle ranching, the extraction of valuable hardwoods, land for housing and farming (especially soybeans), the construction of roads (including highways and smaller roads), and the collection of medicinal resources. Deforestation in Brazil is also linked to an economic growth model focused on accumulating factors, primarily land, rather than enhancing overall productivity. It is important to note that illegal logging is a common practice in tree removal during deforestation.
Cattle ranching
According to a 2004 World Bank paper and a 2009 Greenpeace report, cattle ranching in the Brazilian Amazon, supported by the international beef and leather trades, has been identified as responsible for approximately 80% of deforestation in the region. This accounts for about 14% of the world's total annual deforestation, making it the largest driver of deforestation globally. The Food and Agriculture Organization of the United Nations reported in 2006 that 70% of previously forested land in the Amazon, as well as 91% of land deforested since 1970, is now used for livestock pasture.
The 2019 European Union-Mercosur Free Trade Agreement, which establishes one of the world's largest free trade areas, has faced criticism from environmental activists and advocates for indigenous rights, who argue that it will contribute to further deforestation of the Amazon rainforest by expanding market access for Brazilian beef.

During Jair Bolsonaro's government, certain environmental laws were weakened, accompanied by cuts to funding and personnel in key government agencies and the dismissal of heads of agencies and state bodies. Deforestation of the Amazon rainforest accelerated during the COVID-19 pandemic in Brazil. According to Brazil's National Institute for Space Research (INPE), deforestation in the Brazilian Amazon increased by more than 50% in the first three months of 2020 compared with the same period in 2019.
Soybeans
Deforestation in the Amazon has occurred as farmers cleared land for mechanized cropland. A 2006 study based on NASA satellite data revealed that clearing for mechanized cropland had become a significant factor in deforestation in the Brazilian Amazon, and that this change in land use had affected the region's climate. Researchers found that in 2004, a peak year for deforestation, over 20% of the forests in the state of Mato Grosso were converted to cropland. In 2005, when soybean prices fell by more than 25%, some areas of Mato Grosso showed a decline in large-scale deforestation events, suggesting that price fluctuations of other crops, beef, and timber could also have a notable influence on future land use in the region.

The cultivation of soybeans, primarily for export and for the production of biodiesel and animal feed, has been a significant driver of forest loss in the Amazon. As soybean prices rose, soy farmers expanded into forested areas of the Amazon. However, a private sector agreement known as the Soy Moratorium has played a crucial role in significantly reducing deforestation associated with soy production in the region. In 2006, several major commodity trading companies, including Cargill, pledged not to purchase soybeans produced in recently deforested areas of the Brazilian Amazon. Prior to the moratorium, 30% of soy field expansion was linked to deforestation, contributing to record-high deforestation rates. After eight years of the moratorium, a 2015 study found that although the soy production area had expanded by 1.3 million hectares, only about 1% of the new soy expansion had come at the expense of forests; in response to the moratorium, farmers opted to plant on already cleared land.

The perceived needs of soy farmers have been used to justify certain controversial transportation projects developed in the Amazon.
The Belém-Brasília highway (1958) and the Cuiabá-Porto Velho highway (1968) were the only federal highways in the Legal Amazon region that were paved and accessible year-round before the late 1990s. These two highways are considered central to the "arc of deforestation", presently the primary area of deforestation in the Brazilian Amazon. The Belém-Brasília highway attracted nearly two million settlers in its first twenty years, and its success in opening up the forest was repeated as additional paved roads were constructed, each followed by a significant influx of settlers who had a substantial impact on the forest.
Logging
Logging in the context of deforestation refers to cutting down trees for commercial purposes, primarily for the timber industry, which contributes to the overall deforestation of an area. Deforestation is the permanent removal of forests and vegetation cover from an area, often with ecological, social, and economic impacts.

The logging process typically involves the following steps:
Tree selection: Loggers identify and select specific trees for harvesting based on their species, size, and commercial value. Valuable tree species often targeted for logging include mahogany, teak, oak, and other hardwoods.
Access and infrastructure development: Loggers establish infrastructure such as roads and trails within the forest to reach the targeted trees. This infrastructure facilitates the transportation of heavy machinery, logging equipment, and harvested timber.
Clearing vegetation: Prior to logging, loggers often clear the understory vegetation and smaller trees surrounding the target trees to enhance access and maneuverability for machinery.
Tree felling: The selected trees are cut down using chainsaws, harvesters, or other mechanized equipment. The felled trees are then prepared for further processing.
Timber extraction: Once the trees are felled, loggers extract the timber from the forest by removing branches and cutting the tree trunks into logs of appropriate sizes for transport.
Log transportation: Extracted logs are transported from the logging site to processing facilities or storage areas using trucks, barges, or helicopters, depending on the accessibility of the area.
Processing and utilization: At processing facilities, the harvested logs are processed into lumber, plywood, or other wood products, which find applications in industries such as construction, furniture manufacturing, and paper production.

The impacts of logging on deforestation are significant and wide-ranging:
Loss of biodiversity: Logging often leads to the destruction of forest ecosystems, resulting in the loss of habitat for numerous plant and animal species. Deforestation disrupts the intricate web of biodiversity and can contribute to the extinction or endangerment of various species.
Carbon emissions and climate change: Trees play a crucial role in mitigating climate change by absorbing carbon dioxide through photosynthesis. When trees are logged, the stored carbon is released back into the atmosphere as carbon dioxide, contributing to greenhouse gas emissions and climate change.
Soil erosion and degradation: Forests provide a protective cover for the soil, preventing erosion by wind and water. The removal of trees makes the exposed soil more vulnerable to erosion, leading to the loss of fertile topsoil and the degradation of the land.
Disruption of water cycles: Forests act as natural water catchments, regulating water flow and maintaining water quality. Deforestation can disrupt the water cycle, resulting in reduced water availability, altered rainfall patterns, and an increased risk of droughts or floods.
Indigenous and local community impacts: Many indigenous peoples and local communities depend on forests for their livelihoods, cultural practices, and sustenance. Deforestation and logging can displace these communities, undermine their traditional way of life, and create social conflicts.
Economic considerations: While logging can provide economic benefits in terms of employment and revenue generation, unsustainable logging practices can deplete forest resources and undermine long-term economic sustainability. Overexploitation of forests can lead to the loss of potential future income and economic opportunities.

Efforts to address the impacts of logging on deforestation include implementing sustainable forest management practices, promoting reforestation and afforestation, establishing protected areas, enforcing regulations and policies, and supporting alternative livelihood options for local communities that depend on forests.

A 2013 paper found a correlation between rainforest logging in the Amazon and reduced precipitation in the area, resulting in lower yields per hectare. This suggests that, on a broader scale, there is no economic gain for Brazil in logging, selling trees, and using the cleared land for pastoral purposes.
Crude oil
According to a September 2016 report by Amazon Watch, the importation of crude oil by the US is linked to about 20,000 sq mi (~50,000 km2) of rainforest destruction in the Amazon and the emission of substantial greenhouse gases. These impacts are mostly focused in the western Amazon countries of Ecuador, Peru, and Colombia. The report also indicates that oil exploration is occurring in an additional ~100,000 sq mi (~250,000 km2) of rainforest.
Other
During August 2019, a prolonged forest fire occurred in the Amazon, contributing significantly to deforestation during that summer. Approximately 519 sq mi (1,340 km2) of the Amazon forest was lost.
It is worth noting that certain instances of deforestation in the Amazon have been attributed to farmers clearing land for small-scale subsistence agriculture.
Loss rates
During the early 2000s, deforestation in the Amazon rainforest showed an increasing trend, with an annual rate of 27,423 km2 (10,588 sq mi) of forest loss recorded in 2004. The annual rate of forest loss then generally slowed between 2004 and 2012, although there were spikes in deforestation rates in 2008, 2013, and 2015.

More recent data suggests that the loss of forest cover is once again accelerating. Between August 2017 and July 2018, approximately 7,900 km2 (3,100 sq mi) of forest were cleared in Brazil, a 13.7% increase over the previous year and the largest area cleared since 2008. Deforestation in the Brazilian Amazon surged in June 2019, rising more than 88% compared with the same month in 2018, and more than doubled in January 2020 compared with January 2019.

In August 2019, 30,901 individual forest fires were reported, a threefold increase over the previous year. The number of fires decreased by one-third in September, and by October 7 it had dropped to approximately 10,000; deforestation, however, is considered to have more severe consequences than burning. The National Institute for Space Research (INPE) in Brazil estimated that at least 7,747 km2 (2,991 sq mi) of the Brazilian Amazon rainforest were cleared during the first half of 2019. INPE subsequently reported that deforestation in the Brazilian Amazon reached a 12-year high between August 2019 and July 2020.

Deforestation figures for Brazil are provided annually by the Instituto Nacional de Pesquisas Espaciais (INPE), based on satellite images captured by the Landsat satellite during the Amazon dry season. These estimates may cover only the loss of the Amazon rainforest and not the loss of natural fields or savannah within the Amazon biome.
Impacts
Deforestation and biodiversity loss in the Amazon rainforest have created significant risks of irreversible change. Modeling studies suggest that deforestation may be approaching a critical "tipping point" beyond which large-scale "savannization" or desertification could occur, with catastrophic consequences for the global climate, triggering a self-perpetuating collapse of biodiversity and ecosystems in the region. Failing to prevent this tipping point could severely affect the economy, natural capital, and ecosystem services. A study published in Nature Climate Change in 2022 provided empirical evidence that more than three-quarters of the Amazon rainforest has lost resilience since the early 2000s, posing risks of dieback that would affect biodiversity, carbon storage, and climate change.

To maintain a high level of biodiversity, research suggests that at least 40% forest cover should be retained in the Amazon.
Impact on global warming
Deforestation, along with other forms of ecosystem destruction such as peatbog degradation, can have multiple effects. It can reduce the carbon sink capacity of the land and contribute to increased emissions through factors like wildfires, land-use change, and reduced ecosystem health. These impacts can disrupt the normal carbon-absorbing processes of ecosystems, leading to stress and imbalance.
Historically, the Amazon Basin has played a significant role as a carbon sink, absorbing approximately 25% of the carbon captured by terrestrial land. However, a scientific review article published in 2021 indicates that current evidence suggests the Amazon basin is now emitting more greenhouse gases than it absorbs overall. This shift is attributed to climate change impacts and human activities in the region, particularly wildfires, current land-use practices, and deforestation, which release forcing agents likely to produce a net warming effect. Warming temperatures and changing weather patterns also trigger physiological responses in the forest that further hinder the absorption of CO2.
Impacts on water supply
The deforestation of the Amazon rainforest has had a significant impact on Brazil's freshwater supply, particularly affecting the agricultural industry, which has been involved in clearing the forests. In 2005, certain regions of the Amazon basin experienced the most severe drought in over a century. This can be attributed to two key factors:
1. The rainforest plays a crucial role in contributing to rainfall across Brazil, even in distant areas. Deforestation exacerbated the effects of the droughts in 2005, 2010, and 2015–2016.
2. The rainforest contributes to rainfall and facilitates water storage, which in turn provides freshwater to the rivers that supply Brazil and other countries with water.
Impact on local temperature
In 2019, a group of scientists published research indicating that under a "business as usual" scenario, deforestation of the Amazon rainforest would lead to a temperature increase of 1.45 °C in Brazil. They stated that this temperature rise could have various consequences, including increased human mortality and electricity demand, reduced agricultural yields and water resources, and the potential collapse of biodiversity, especially in tropical regions. Local warming may also cause shifts in species distributions, including species involved in the transmission of infectious diseases. The authors assert that deforestation is already contributing to the observed temperature rise.
Impact on indigenous people
More than one-third of the Amazon forest is designated as Indigenous territory, encompassing over 4,466 formally recognized territories. Until 2015, approximately 8% of deforestation in the Amazon occurred within forests inhabited by indigenous peoples, while 88% occurred outside of indigenous territories and protected areas, even though those outside areas comprise less than 50% of the total Amazon region. Indigenous communities have historically relied on the forest for sustenance, shelter, water, materials, fuel, and medicinal resources, and the forest holds significant cultural and spiritual importance for them. Consequently, deforestation rates tend to be lower within Indigenous territories, even though pressures to clear land for other purposes persist.

During the deforestation of the Amazon, native tribes have often faced mistreatment and abuse. Encroachments by loggers onto indigenous lands have led to conflicts resulting in fatalities. Some uncontacted indigenous groups have emerged from the forests and interacted with mainstream society because of threats from outsiders. When uncontacted tribes come into contact with outsiders, they are vulnerable to diseases against which they have little immunity; entire tribes can be severely impacted by epidemics, with significant population declines within a few years.

A long-standing struggle has taken place over the control of indigenous territories in the Amazon, primarily involving the Brazilian government. The demand for these lands has stemmed, in part, from the aim of enhancing Brazil's economic standing. Various individuals, including ranchers and land speculators from the southeast, have sought to claim these lands for personal financial gain.
In early 2019, Brazil's newly elected president, Jair Bolsonaro, issued an executive order empowering the agriculture ministry to regulate the land occupied by indigenous tribes in the Amazon.

In the past, mining operations were permitted within the territory of an isolated indigenous group, the Yanomami. The conditions endured by these indigenous peoples resulted in many health issues, including tuberculosis. If their lands are opened to further development, numerous tribal communities will be forcibly displaced, potentially leading to the loss of lives. Beyond the mistreatment of indigenous peoples, the exploitation of the forest itself will deplete resources vital to their daily lives.
Efforts to stop and reverse deforestation
Norwegian Prime Minister Jens Stoltenberg announced on September 16, 2008, that the Norwegian government would donate US$1 billion to the newly established Amazon Fund, to be allocated to projects aimed at mitigating the deforestation of the Amazon rainforest.

In September 2015, Brazilian President Dilma Rousseff told the United Nations that Brazil had reduced the rate of deforestation in the Amazon by 82%. She also outlined Brazil's goals for the next 15 years: eliminating illegal deforestation, restoring and reforesting an area of 120,000 km2 (46,000 sq mi), and rehabilitating 150,000 km2 (58,000 sq mi) of degraded pastures.

In August 2017, Brazilian President Michel Temer revoked the protected status of an Amazonian nature reserve spanning an area the size of Denmark in the northern states of Pará and Amapá.

In April 2019, an Ecuadorian court ordered a halt to oil exploration activities in an area of 1,800 square kilometers (690 sq mi) within the Amazon rainforest.

In May 2019, eight former Brazilian environment ministers expressed concern about escalating deforestation in the Amazon during Jair Bolsonaro's first year as president. Carlos Nobre, an expert on the Amazon and climate change, warned in September 2019 that if deforestation continued at its current pace, the Amazon forest could reach a tipping point within 20 to 30 years, after which large portions of the forest could transform into a dry savanna, particularly in the southern and northern regions.

Bolsonaro rebuffed European politicians' attempts to intervene in the matter of Amazon deforestation, calling it Brazil's internal affair. He advocated opening more areas, including parts of the Amazon, to mining, and mentioned discussions with US President Donald Trump about a joint development program for the Brazilian Amazon region.
Brazilian Economy Minister Paulo Guedes has expressed the belief that other countries should compensate Brazil for the oxygen produced within its borders but used elsewhere.

In late August 2019, following an international outcry and warnings from experts about the escalating fires, the Brazilian government under Jair Bolsonaro implemented measures to combat them: a 60-day ban on forest clearance using fire, the deployment of 44,000 soldiers to fight the fires, the acceptance of four firefighting planes from Chile and a $12 million aid package from the UK government, and a softening of Bolsonaro's stance on aid from the G7. Bolsonaro also called for a Latin American conference on the preservation of the Amazon.

On November 2, 2021, during the COP26 climate summit, over 100 countries, representing approximately 85% of the world's forests, reached a significant agreement to end deforestation by 2030. This agreement improves on the 2014 New York Declaration on Forests, which aimed to halve deforestation by 2020 and end it by 2030, and now includes Brazil as a signatory. Notably, deforestation increased during the 2014–2020 period despite the earlier agreement.

In August 2023, Brazilian President Luiz Inacio Lula da Silva hosted a summit in Belem with eight South American countries to coordinate policies for the Amazon basin and develop a roadmap to save the world's largest rainforest; the meeting also served as a preparatory event for the COP30 UN climate talks in 2025.

In the first 8 months of 2023, the deforestation rate in the Amazon rainforest declined by 48%, preventing the release of 196 million tonnes of CO2 into the atmosphere. Financing from the Amazon Fund and cooperation among the Amazonian nations played a significant role in this decline. In the first 9 months of 2023, the deforestation rate declined by 49.5% despite the worst drought in 40 years.
Wildfires in September 2023 declined by 36% compared with September 2022. Switzerland and the United States contributed $8.4 million to the Amazon Fund to help prevent deforestation.
Cost of rainforest conservation
According to the Woods Hole Research Center (WHRC) in 2008, halting deforestation in the Brazilian rainforest would require an annual investment of US$100–600 million. A more recent study, from 2022, suggested that conserving approximately 80% of the Brazilian rainforest remains achievable, at an estimated annual cost of US$1.7–2.8 billion for an area of 3.5 million km2. Preventing deforestation in this way would avoid carbon emissions at a cost of US$1.33 per ton of CO2, far lower than the cost of reducing emissions through renewable fuel subsidies (US$100 per ton) or weatherization assistance programs such as building insulation (US$350 per ton).
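As a rough illustration of how these figures relate, one can divide the quoted annual cost range by the quoted per-ton price to estimate how much CO2 the 2022 conservation plan would keep out of the atmosphere each year. The derived tonnage below is our own back-of-the-envelope arithmetic, not a number stated in the study:

```python
# Back-of-the-envelope check of the conservation figures cited above.
# Inputs (annual cost range, cost per ton) are the quoted values; the
# implied tonnage is a derived illustration, not a figure from the study.

def implied_avoided_emissions(annual_cost_usd: float, cost_per_ton_usd: float) -> float:
    """Tons of CO2 whose emission is avoided per year at the given cost."""
    return annual_cost_usd / cost_per_ton_usd

COST_PER_TON = 1.33  # US$ per ton of CO2 avoided by preventing deforestation

low = implied_avoided_emissions(1.7e9, COST_PER_TON)   # US$1.7 billion/year
high = implied_avoided_emissions(2.8e9, COST_PER_TON)  # US$2.8 billion/year

# Roughly 1.3-2.1 billion tons of CO2 avoided per year
print(f"Implied avoided emissions: {low/1e9:.1f}-{high/1e9:.1f} billion tons CO2/year")
```

The same division also shows why the comparison in the text is so stark: at US$100 per ton, the same budget would avoid roughly 75 times less CO2.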
Future of the Amazon rainforest
Based on the deforestation rates observed in 2005, projections indicated that the Amazon rainforest would experience a 40% reduction within two decades. While the rate of deforestation has slowed since the early 2000s, the forest continues to shrink annually, and satellite data analysis reveals a significant increase in deforestation since 2018.
See also
2019 Brazil wildfires
Belo Monte Dam
Cattle ranching
Clearcutting
Construction of the Trans-Amazonian Highway
Deforestation
Deforestation in Brazil
Flying river
Livestock's Long Shadow
Logging
IBAMA
INCRA
Population and energy consumption in Brazilian Amazonia
Risks of using unsustainable agricultural practices in rainforests
Selective logging in the Amazon rainforest
Terra preta
Non-timber forest products
Orinoco Mining Arc
Fauna
Panthera onca onca
Peruvian jaguar
Southern jaguar
Bibliography
Bradford, Alina. "Deforestation: Facts, Causes & Effects." Live Science, March 4, 2015.
Monbiot, George (1991). Amazon Watershed: The New Environmental Investigation. London, UK: Michael Joseph. ISBN 978-0-7181-3428-0.
Scheer, Roddy, and Doug Moss. "Deforestation and Its Extreme Effect on Global Warming." Scientific American, 2017.
Schleifer, Philip (2023). Global Shifts: Business, Politics, and Deforestation in a Changing World Economy. Cambridge, MA: MIT Press (open access). ISBN 9780262374439.
Tabuchi, Hiroko, Claire Rigby, and Jeremy White. "Amazon Deforestation, Once Tamed, Comes Roaring Back." The New York Times, February 24, 2017.
References
External links
Media related to deforestation of the Amazon rainforest at Wikimedia Commons
Arc of Deforestation Expansion (PDF)
Camill, Phil. "The Deforestation of the Amazon". (1999). May 31, 2011.
"Amazon Deforestation Trend On The Increase". ScienceDaily LLC (2009). May 31, 2011.
Butler, Rhett. "Deforestation in the Amazon". Mongabay.com. July 9, 2014.
"Amazon Deforestation: Earth's Heart and Lungs Dismembered". LiveScience.com. January 9, 2009.
"The Roots of Deforestation in the Amazon". Effects-of-Deforestation.com. May 31, 2011.
"Amazon Deforestation Declines to Record Low". Nature.com. May 31, 2011.
"Brazil confirms rising deforestation in the Amazon". March 14, 2015.
Some people launder money. Other people launder cattle. Vox, October 19, 2022.
Political views of Osama bin Laden

Osama bin Laden took ideological guidance from prominent militant Islamist scholars and ideologues from the classical to contemporary eras, such as Ibn Taymiyya, Ibn al-Qayyim al-Jawziyyah, Sayyid Qutb, and Abdullah Azzam. During his middle and high school years, bin Laden was educated at the Al-Thager Model School, a public school in Jeddah run by Islamist exiles of the Muslim Brotherhood, where he was deeply influenced by pan-Islamist ideals and displayed strict religious commitment. As a teenager, bin Laden attended and led Muslim Brotherhood-run "Awakening" camps held on desert outskirts, which aimed to raise the youth in religious values, instil a martial spirit, and offer spiritual seclusion from "the corruptions" of modernity and the rapidly urbanising Saudi society of the 1970s.
While some reporters have speculated that bin Laden was an adherent of the Wahhabi movement, other researchers have disputed this notion; he subscribed to the Athari (literalist) school of Islamic theology. During his studies at King Abdulaziz University, bin Laden became immersed in the writings of the Egyptian militant Islamist scholar Sayyid Qutb, most notably Milestones and In the Shade of the Qur'an. Bin Laden adopted Qutb's anti-Westernism and his assertion that the Muslim world was steeped in a state of Jahiliyyah (pre-Islamic ignorance), and embraced his revolutionary call for overthrowing the Arab governments by means of an ideologically committed vanguard.

To effectuate his beliefs, Osama bin Laden founded al-Qaeda, a pan-Islamist militant organization, with the objective of recruiting Muslim youth to participate in armed jihad across various regions of the Islamic world such as Palestine, Kashmir, and Central Asia. In conjunction with several other Islamic leaders, he issued two fatwas—in 1996 and then again in 1998—declaring that Muslims should fight those who either support Israel or support Western military forces in Islamic countries, stating that those in that mindset are the enemy, including citizens of the United States and allied countries. His goal was for Western military forces to withdraw from the Middle East and for foreign aid to Israel to cease, as the aid is used to fund Israeli policy in the region.
Sharia
Following a form of Islamism, bin Laden believed that the restoration of God's law will set things right in the Muslim world. He stated, "When we used to follow Muhammad's revelation we were in great happiness and in great dignity, to Allah belongs the credit and praise." He believed "the only Islamic country" in the Muslim world was Afghanistan under the rule of Mullah Omar's Taliban before that regime was overthrown in late 2001.
Differences with Wahhabi ideology
Bin Laden's connection with contemporary Wahhabi Islam is disputed. Some believe his ideology differs in crucial ways. While modern Wahhabi doctrine states that only political leaders can call for jihad, bin Laden believed he could declare jihad himself. Modern Wahhabism forbids disobedience to a ruler unless the ruler has commanded his or her subjects to violate religious commandments.

A number of Islamists have asserted that bin Laden had no direct connections with Wahhabism, although he may have been inspired by the movement of Muhammad ibn 'Abd al-Wahhab and its ideals. Bin Laden's Yemeni origins also point to a non-Wahhabi background. Moreover, the traditional Wahhabi shaykhs are strongly opposed to war tactics such as the suicide bombings justified by al-Qaeda ideologues, which they regard as a perversion of Islamic teachings. Furthermore, bin Laden's basic goals differed from those of contemporary Wahhabists. Bin Laden was most interested in "resisting western domination and combating regimes that fail to rule according to Islamic law," while Wahhabism focuses on correct methods of worshiping God, removing idols, and ensuring adherence to Islamic law.
On the other hand, some believe bin Laden "adopted Wahhabi terminology" when he called America "the Hubal of the age", since Hubal was a stone idol and idolatry (shirk) was the primary Wahhabi sin. According to Jonathan Sozek: "Salafism can be understood as an umbrella term under which a movement like Wahhabism might be placed. Not everyone who identifies with Salafism is a Wahhabi... . Bin Laden himself has identified himself with Salafism (meaning little more, perhaps, than a Christian identifying themselves as an evangelical), but this says nothing as to his relationship to Wahhabism."
Jihad
Jihad, a common Arabic word meaning "strife" or "struggle", is used in the Qur'an to indicate that Muslims must be willing to exert effort in the cause of God, using their wealth and themselves. It also refers to the internal struggle to be a better Muslim, the struggle between good and evil.

In a January 2004 message, bin Laden called for the establishment of provisional underground ruling councils in Muslim countries, to be made up of "ulema, leaders who are obeyed among their people, dignitaries, nobles, and merchants." The councils would ensure "the people" had "easy access to arms, particularly light weapons; anti-armored rockets, such as RPGs; and anti-tank mines" to fight "raids" by "the Romans", i.e., the United States.

His interviews, video messages and other communications always mentioned, and almost always dwelt on, the need for jihad to right what he believed were injustices against Muslims by the United States and sometimes other non-Muslim states, the need to eliminate the state of Israel, and the need to force the withdrawal of the U.S. from the Middle East. Occasionally other issues arose; in an October 2002 letter, he called for Americans to "reject the immoral acts of fornication, homosexuality, intoxicants, gambling, and usury".
Former CIA officer and chief of the Bin Laden Issue Station, Michael Scheuer, writes: "In the context of the ideas Bin Laden shares with his brethren, the military actions of Al-Qaeda and its allies... are part of a defensive jihad sanctioned by the revealed word of God... bin Laden believes Islam is being attacked by America and its allies and is simply recognizing his responsibility to fight in a defensive jihad. Further, bin Laden is calling on other Muslims to similarly identify the threat and to do their duty to God and their brethren... Having defined the threat to Islam as the U.S.-led crusaders' attacks and prescribing a defensive jihad as the only appropriate response, bin Laden regards al Qaeda as having an important role to play—'the vanguard of a Muslim nation'."
Grievances against countries
East Timor
In his November 2001 statement, bin Laden criticized the United Nations and Australian "Crusader" forces for ensuring the independence of the mostly Catholic East Timor from the mostly Muslim state of Indonesia.
India
Bin Laden considered India to be a part of the 'Crusader-Zionist-Hindu' conspiracy against the Islamic world.
Saudi Arabia
Bin Laden was born in Saudi Arabia and had a close relationship with the Saudi royal family, but his opposition to the Saudi government stemmed from his radical ideology. The Saudi decision to allow the U.S. military into the country in 1990 to defend against a possible attack by Saddam Hussein upset bin Laden, although at the time he was not necessarily opposed to the royal family or to going to war with Iraq; he even offered to send his mujahedeen from Afghanistan to defend Saudi Arabia from an Iraqi attack, an offer rebuffed by King Fahd. From his point of view, "for the Muslim Saudi monarchy to invite non-Muslim American troops to fight against Muslim Iraqi soldiers was a serious violation of Islamic law".

In his 1996 fatwa entitled "Declaration of War against the Americans Occupying the Land of the Two Holy Places", bin Laden identified several grievances that he had about Saudi Arabia, the birthplace and holy land of Islam:
1. The intimidation and harassment suffered by the leaders of the society, the scholars, heads of tribes, merchants, academic teachers and other eminent individuals;
2. The situation of the law within the country and the arbitrary declaration of what is Halal and Haram (lawful and unlawful) regardless of the Shari'ah as instituted by Allah;
3. The state of the press and the media which became a tool of truth-hiding and misinformation; the media carried out the plan of the enemy of idolising cult of certain personalities and spreading scandals among the believers to repel the people away from their religion, as Allah, the Exalted said: {surely- as for- those who love that scandal should circulate between the believers, they shall have a grievous chastisement in this world and in the here after} (An-Noor, 24:19);
4. Abuse and confiscation of human rights;
5. The financial and the economic situation of the country and the frightening future in the view of the enormous amount of debts and interest owed by the government; this is at the time when the wealth of the Ummah being wasted to satisfy personal desires of certain individuals!! while imposing more custom duties and taxes on the nation. (the prophet said about the woman who committed adultery: "She repented in such a way sufficient to bring forgiveness to a custom collector!!");
6. The miserable situation of the social services and infra-structure especially the water service and supply, the basic requirement of life;
7. The state of the ill-trained and ill-prepared army and the impotence of its commander in chief despite the incredible amount of money that has been spent on the army. The Gulf War clearly exposed the situation;
8. Shari'a law was suspended and man-made law was used instead;
9. And as far as the foreign policy is concerned the report exposed not only how this policy has disregarded the Islamic issues and ignored the Muslims, but also how help and support were provided to the enemy against the Muslims; the cases of Gaza-Ariha and the communist in the south of Yemen are still fresh in the memory, and more can be said.
Bin Laden wanted to overthrow the Saudi monarchy (and the governments of Middle Eastern states) and establish an Islamic state according to Shari'a law (Islamic Holy Law), to "unite all Muslims and to establish a government which follows the rule of the Caliphs."
Soviet Union
In 1979, bin Laden opposed the Soviet Union's invasion of Afghanistan and soon heeded the call to arms from Afghan freedom fighters. He used his own independent wealth and resources to recruit fighters from Egypt, Lebanon, Kuwait and Turkey to join the Afghans in their battle against the Soviets. While bin Laden praised the U.S. intervention early on, being happy that the Afghans were receiving aid from all over the world to battle the Soviets, his view of the U.S. soon soured. He stated: "Personally neither I nor my brothers saw evidence of American help. When my mujahedin were victorious and the Russians were driven out, differences started..."
United Kingdom
Bin Laden believed that Israeli Jews controlled the British government, directing it to kill as many Muslims as it could. He cited British participation in 1998's Operation Desert Fox as proof of this allegation.
United States
Osama bin Laden condemned the United States as the head of a "Zionist-Crusader alliance" waging war against Muslims across the world. In particular, bin Laden was fiercely opposed to the stationing of U.S. troops in the Arabian Peninsula and urged Muslims to rise up in armed jihad and expel American forces from Muslim lands. He asserted that it is the religious duty of all Muslims to fight a defensive jihad against the United States and to resist the numerous acts of American aggression against the Muslim world.

The crimes of the United States listed by bin Laden included the sanctions against Iraq, which resulted in hundreds of thousands of deaths; the arms embargo on Bosnia, which led to the Bosnian genocide; and massacres committed by the U.S. military and its allies in Somalia, Tajikistan, Palestine, Kashmir, and the Philippines. In al-Qaeda's "Declaration of War Against the Americans Occupying the Land of the Two Holy Places", also known as the "Ladenese Epistle", bin Laden stated:
It is no secret to you, my brothers, that the people of Islam have been afflicted with oppression, hostility, and injustice by the Judeo-Christian alliance and its supporters. This shows our enemies' belief that Muslims' blood is the cheapest and that their property and wealth is merely loot. Your blood has been spilt in Palestine and Iraq, and the horrific image of the massacre in Qana in Lebanon are still fresh in people's minds. The massacres that have taken place in Tajikistan, Burma, Kashmir, Assam, the Philippines, Fatani, Ogaden, Somalia, Eritrea, Chechnya, and Bosnia-Herzegovina send shivers down our spines and stir up our passions. All this has happened before the eyes and ears of the world, but the blatant imperial arrogance of America, under the cover of the immoral United Nations, has prevented the dispossessed from arming themselves. So the people of Islam realized that they were the fundamental target of the hostility of the Judeo-Crusader alliance.
Bin Laden's stated motivations for the September 11 attacks included the support of Israel by the United States, the presence of the U.S. military in the sacred Islamic lands of the Arabian Peninsula, and the U.S. enforcement of sanctions against Iraq. He first called for jihad against the United States in 1996. This call focused solely on U.S. troops in Saudi Arabia; bin Laden loathed their presence and wanted them removed in a "rain of bullets". Denouncing Americans as "the worst thieves in the world today and the worst terrorists", bin Laden mocked American accusations of "terrorism" against al-Qaeda. Bin Laden's hatred and disdain for the U.S. were also manifested while he lived in Sudan. There he told al-Qaeda fighters-in-training:
America appeared so mighty ... but it was actually weak and cowardly. Look at Vietnam, look at Lebanon. Whenever soldiers start coming home in body bags, Americans panic and retreat. Such a country needs only to be confronted with two or three sharp blows, then it will flee in panic, as it always has. ... It cannot stand against warriors of faith who do not fear death.
In order to fight the U.S., apart from the military option, he also called for asceticism as well as an economic boycott, as in this August 1996 speech in the Hindu Kush mountains:
… in particular, we remind them of the following: the wealth you devote to the purchase price of American goods will be transformed into bullets shot into the breasts of our brothers in Palestine, in the Land of the Two Holy Sanctuaries, and elsewhere. In buying their goods we strengthen their economy while exacerbating our own poverty and weakness (...) we expect the women of the Land of the Two Holy Sanctuaries and elsewhere to carry out their role by practicing asceticism from the world, and by boycotting American goods. If economic boycotting is combined with the strugglers’ military operations, then the defeat of the enemy would be even nearer, by God’s permission. The opposite is also true: If Muslims do not cooperate with the struggling brothers, supplying them with assistance in curtailing economic collaboration with the American enemy, then they are supplying them with wealth that is the mainstay of war and the lifeblood of armies. In effect, they extend the period of war and abet the oppression of Muslims.
Grievances against the United States
In his 1998 fatwa entitled "Jihad Against Jews and Crusaders" bin Laden identified three grievances against the U.S.:
First, for over seven years the United States has been occupying the lands of Islam in the holiest of places, the Arabian Peninsula, plundering its riches, dictating to its rulers, humiliating its people, terrorizing its neighbors, and turning its bases in the Peninsula into a spearhead through which to fight the neighboring Muslim peoples.
If some people have in the past argued about the fact of the occupation, all the people of the Peninsula have now acknowledged it. The best proof of this is the Americans' continuing aggression against the Iraqi people using the Peninsula as a staging post, even though all its rulers are against their territories being used to that end, but they are helpless.
Second, despite the great devastation inflicted on the Iraqi people by the crusader-Zionist alliance, and despite the huge number of those killed, which has exceeded 1 million... despite all this, the Americans are once again trying to repeat the horrific massacres, as though they are not content with the protracted blockade imposed after the ferocious war or the fragmentation and devastation.
So here they come to annihilate what is left of this people and to humiliate their Muslim neighbors.
Third, if the Americans' aims behind these wars are religious and economic, the aim is also to serve the Jews' petty state and divert attention from its occupation of Jerusalem and murder of Muslims there. The best proof of this is their eagerness to destroy Iraq, the strongest neighboring Arab state, and their endeavor to fragment all the states of the region such as Iraq, Saudi Arabia, Egypt, and Sudan into paper statelets and through their disunion and weakness to guarantee Israel's survival and the continuation of the brutal crusade occupation of the Peninsula.
Bin Laden criticized the United States in a "letter to the American people" published in late 2002, and further outlined his grievances with the United States in a 2004 speech directed towards the American people.
Criticism of American media
Bin Laden had a negative opinion of American mass media, and he accused American news-outlets and journalists of attempting to incite the "innocent and good-hearted people in the West" to wage war against Muslims. He further asserted that voices of those Westerners who opposed US wars of aggression were suppressed by the American deep state and pro-Zionist lobbies.
Favorable opinion of two American authors
In 2011, in a review of a new book by former CIA officer Michael Scheuer, professor and writer Fouad Ajami wrote that "in 2007, [bin Laden] singled out two western authors whose knowledge he had high regard for: Noam Chomsky and Michael Scheuer."
John F. Kennedy conspiracy theory
Bin Laden supported the conspiracy theory that John F. Kennedy was killed by the "owners of the major corporations who were benefiting from its (Vietnam War) continuation":
In the Vietnam War, the leaders of the White House claimed at the time that it was a necessary and crucial war, and during it, Donald Rumsfeld and his aides murdered two million villagers. And when Kennedy took over the presidency and deviated from the general line of policy drawn up for the White House and wanted to stop this unjust war, that angered the owners of the major corporations who were benefiting from its continuation. And so Kennedy was killed, and al-Qaida wasn't present at that time, but rather, those corporations were the primary beneficiary from his killing. And the war continued after that for approximately one decade. But after it became clear to you that it was an unjust and unnecessary war, you made one of your greatest mistakes, in that you neither brought to account nor punished those who waged this war, not even the most violent of its murderers, Rumsfeld.
Military strategy
Targeting strategy
Osama bin Laden's military strategy supported the indiscriminate targeting of Americans as retaliation for the U.S. military's attacks on Muslim women and children. He asserted that U.S. policy was to perpetrate scorched-earth tactics against its enemies, killing civilians as well as combatants recklessly, and that balanced retaliatory measures targeting American civilians were therefore justified. Bin Laden denounced American accusations of terrorism against al-Qaeda as part of a psychological war against Muslims who wanted to support armed resistance and join the jihad against the "Israeli-American occupation of Islamic sacred lands".
Examples of American state terrorism condemned by bin Laden included the atomic bombings of Hiroshima and Nagasaki, U.S. support for Israeli massacres in Lebanon, the sanctions against Iraq which resulted in hundreds of thousands of deaths, and the arms embargo against Bosnia. Responding to American allegations of terrorism, bin Laden stated in a November 1996 interview published by the "Nida'ul Islam" magazine:

"As for their accusations [that we] terrorize the innocent, the children, and the women, these fall into the category of "accusing others of their own affliction in order to fool the masses." The evidence overwhelmingly shows America and Israel killing the weaker men, women, and children in the Muslim world and elsewhere. ... Despite the continuing American occupation of the country of the two sacred mosques, America continues to claim that it is upholding the banner of freedom and humanity, yet it perpetrated deeds which you would not find the most ravenous of animals debasing themselves to do."

In a March 1997 interview with CNN journalist Peter Arnett, bin Laden stated that U.S. soldiers were the primary targets of jihad against America and demanded that all American nationals leave Saudi Arabia. While maintaining that American civilians were not targeted under al-Qaeda's 1996 declaration of war against the U.S., bin Laden stated:

"... we have focused our declaration on striking at the soldiers in the country of The Two Holy Places. The country of the Two Holy Places has in our religion a peculiarity of its own over the other Muslim countries. In our religion, it is not permissible for any non-Muslim to stay in our country. Therefore, even though American civilians are not targeted in our plan, they must leave. We do not guarantee their safety, because we are in a society of more than a billion Muslims. A reaction might take place as a result of the US government's targeting of Muslim civilians and executing more than 600,000 Muslim children in Iraq by preventing food and medicine from reaching them. So, the US is responsible for any reaction, because it extended its war against troops to civilians."

In a 1998 interview, bin Laden stated that al-Qaeda fighters distinguished "between men and women, and between children and old people" during warfare, unlike hypocritical "infidels" who "preach one thing and do another." During the same interview, bin Laden denied direct involvement in launching the 1998 US embassy bombings in East Africa, while praising the attacks as "a popular response" from Muslim youth fighting against American imperialism. In an interview with Tayseer Allouni on 21 October 2001, bin Laden stated that those who say "killing a child is not valid" in Islam "speak without any knowledge of Islamic law", asserting that non-Muslim enemies can be targeted indiscriminately if they kill Muslim women and children. Thus, bin Laden advocated the targeting of all Americans in retaliation for the indiscriminate military attacks of the United States against Muslim populations. He further asserted that all Americans are guilty of their government's war against Muslims, arguing that the American people elected these governments and paid the taxes that funded the aggressive acts of the US military.

When he was asked about the Muslims killed in the September 11 attacks, bin Laden replied that "Islamic law says that Muslim should not stay long in the land of infidels". In addition to maintaining that the strikes were not directed against women and children, and asserting that the primary targets of the 9/11 attacks were the symbols of American "economic and military power", bin Laden stated:

"This is a significant issue in Islamic jurisprudence. According to my information, if the enemy occupies an Islamic land and uses its people as human shields, a person has the right to attack the enemy. In the same way, if some thieves broke into a house and took a child hostage to protect themselves, the father has the right to attack the thieves, even if the child gets hurt. The United States and their allies are killing us in Palestine, Chechnya, Kashmir and Iraq."
Other ideologies
In his messages, bin Laden opposed "pan-Arabism, socialism, communism, democracy and other doctrines," with the exception of Islam. He denounced democracy as "the religion of ignorance" and "legislative councils of representatives" as "councils of polytheism." In what one critic has called a contradiction, he also praised the principle of governmental "accountability," citing the Western democracy of Spain: "Spain is an infidel country, but its economy is stronger than ours because the ruler there is accountable."
Opposition to music
Bin Laden opposed music on religious grounds. Despite his love of horse racing and ownership of racing horses, the presence of a band and music at the Khartoum race track annoyed him so much that he stopped attending races in Sudan. "Music is the flute of the devil," he told his Sudanese stable-mate Issam Turabi. Despite his hatred for music, Bin Laden reportedly had a celebrity crush on American singer Whitney Houston, and, according to poet and activist Kola Boof, wanted to make her one of his wives.
Support for environmentalism
Bin Laden and his aides have, on more than one occasion, denounced the United States for damaging the environment.
You have destroyed nature with your industrial waste and gases more than any other nation in history. Despite this, you refuse to sign the Kyoto agreement so that you can secure the profit of your greedy companies and industries.
Ayman al-Zawahiri, bin Laden's aide, said global warming reflected how brutal and greedy the Western Crusader world is, with America at its top.
Bin Laden has also called for a boycott of American goods and the destruction of the American economy as a way of fighting global warming.
Technology
On the subject of technology, bin Laden was said to have ambivalent feelings – being interested in "earth-moving machinery and genetic engineering of plants, on the one hand," but rejecting "chilled water on the other." In Afghanistan, his sons' education reportedly eschewed the arts and technology and amounted to "little other than memorizing the Quran all day".
Masturbation
Bin Laden was reported to have asserted that masturbation was justifiable in "extreme" cases. In a letter (dated December 2010) to a North African commander of Al-Qaeda known as "Abu Muhammad Salah", Bin Laden wrote: "Another very special and top secret matter (eyes only you, my brother Abu Muhammad Salah and Samir): it pertains to the problem of the brothers who are with you in their unfortunate celibacy and lack of availability of wives for them in the conditions that have been imposed on them. We pray to God to release them. I wrote to Shaykh/Doctor (Ayman), and I consulted with Shaykh (Abu Yahya). Dr. Ayman has written us his opinion... As we see it, we have no objection to clarifying to the brothers that they may, in such conditions, masturbate, since this is an extreme case. The ancestors approved this for the community. They advised the young men at the time of the conquest to do so. It has also been prescribed by the legists when needed, and there is no doubt that the brothers are in a state of extreme need. However, for those who are not accustomed to such a thing and are ashamed... it may negatively affect his understanding."
Accusations of porn-storage by US officials
Following bin Laden's assassination, US officials, speaking on condition of anonymity, told Western media that pornographic files had been discovered on the computers of the raided Abbottabad compound. However, the officials did not offer comment or evidence as to whether bin Laden or his associates living inside the complex had themselves obtained the files or viewed their contents.
Jews, Christians, and Shia Muslims
Bin Laden delivered many warnings against alleged Jewish conspiracies: "These Jews are masters of usury and leaders in treachery. They will leave you nothing, either in this world or the next." Nevertheless, Bin Laden denounced the European persecutions against Jews and blamed the horrors of the Holocaust on the morality of Western culture: "the morality and culture of the holocaust is your culture, not our culture. In fact, burning living beings is forbidden in our religion, even if they be small like the ant, so what of man?! The holocaust of the Jews was carried out by your brethren in the middle of Europe, but had it been closer to our countries, most of the Jews would have been saved by taking refuge with us. And my proof for that is in what your brothers, the Spanish, did when they set up the horrible courts of the Inquisition to try Muslims and Jews, when the Jews only found safe shelter by taking refuge in our countries. ... They are alive with us and we have not incinerated them, but we are a people who don’t sleep under oppression and reject humiliation and disgrace..."
At the same time, bin Laden's organization worked with Shia militants: "Every Muslim, from the moment they realize the distinction in their hearts, hates Americans, hates Jews, and hates Israelis. This is a part of our belief and our religion." It was apparently inspired by the successes of Shia radicalism, such as the 1979 Iranian Revolution, the implementation of Sharia by Ayatollah Khomeini, and the human wave attacks carried out by radical Shia teenagers during the Iran–Iraq War in the 1980s. This point of view may have been influenced by the fact that bin Laden's mother belonged to the Shia sect. While in Sudan, "senior managers in al Qaeda maintained contacts with" Shia Iran and Hezbollah, its closely allied Shia "worldwide terrorist organization. ... Al Qaeda members received advice and training from Hezbollah," from which al-Qaeda is thought to have borrowed the techniques of suicide and simultaneous bombing. Because of the Shia-Sunni schism, this collaboration could only go so far. According to the US 9/11 Commission Report, Iran was rebuffed when it tried to strengthen relations with al Qaeda after the October 2000 attack on the USS Cole, "because Bin Laden did not want to alienate his supporters in Saudi Arabia."
See also
War on Terror
Islamism
Palestinian political violence
References
External links
In The Words of Osama Bin Laden - slideshow by Life magazine
United States rainfall climatology
The characteristics of United States rainfall climatology differ significantly across the United States and the territories under United States sovereignty. Summer and early fall bring brief but frequent thundershowers and tropical cyclones, which create a wet summer and drier winter in the eastern Gulf and lower East Coast. During the winter and spring, Pacific storm systems bring Hawaii and the western United States most of their precipitation. Low pressure systems moving up the East Coast and through the Great Lakes bring cold-season precipitation from the Midwest to New England, as well as to the Great Salt Lake region. The snow-to-liquid ratio across the contiguous United States averages 13:1, meaning 13 inches (330 mm) of snow melts down to 1 inch (25 mm) of water.

During the summer, the North American monsoon combined with Gulf of California and Gulf of Mexico moisture moving around the subtropical ridge in the Atlantic Ocean brings the promise of afternoon and evening air-mass thunderstorms to the southern tier of the country as well as the Great Plains. Equatorward of the subtropical ridge, tropical cyclones enhance precipitation across southern and eastern sections of the country, as well as Puerto Rico, the United States Virgin Islands, the Northern Mariana Islands, Guam, and American Samoa. Over the top of the ridge, the jet stream brings a summer precipitation maximum to the Great Plains and western Great Lakes. Large thunderstorm areas known as mesoscale convective complexes move through the Plains, Midwest, and Great Lakes during the warm season, contributing up to 10% of the annual precipitation to the region.
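The snow-to-liquid conversion mentioned above is simple division. A minimal sketch in Python (the function name and the helper are ours, for illustration only; the 13:1 figure is the contiguous-U.S. average cited above):

```python
def snow_to_liquid(snow_inches, ratio=13.0):
    """Convert snow depth to its liquid-water equivalent, in inches.

    Uses the 13:1 average snow-to-liquid ratio for the contiguous
    United States by default; actual ratios vary widely with
    temperature and snow type.
    """
    return snow_inches / ratio

# 13 inches of snow melts down to 1 inch of water
print(snow_to_liquid(13.0))  # 1.0
```

A heavier, wetter snow would use a smaller ratio, yielding more liquid water for the same depth.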
The El Niño–Southern Oscillation affects the precipitation distribution, by altering rainfall patterns across the West, Midwest, the Southeast, and throughout the tropics. There is also evidence that global warming is leading to increased precipitation to the eastern portions of North America, while droughts are becoming more frequent in the western portions. Furthermore, global La Niña meteorological events are generally associated with drier and hotter conditions and further exacerbation of droughts in California and the Southwestern and to some extent the Southeastern United States. Meteorological scientists have observed that La Niñas have become more frequent over time.
General
The eastern part of the contiguous United States east of the 98th meridian, the mountains of the Pacific Northwest, the Willamette Valley, and the Sierra Nevada range are the wetter portions of the nation, with average rainfall exceeding 30 inches (760 mm) per year. The drier areas are the Desert Southwest, Great Basin, valleys of northeast Arizona, eastern Utah, and central Wyoming. Increased warming within urban heat islands leads to an increase in rainfall downwind of cities.
Alaska
Juneau averages over 50 inches (1,270 mm) of precipitation a year, while other areas in southeast Alaska receive over 275 inches (6,980 mm). South central Alaska does not get nearly as much rain as the southeast of Alaska, though it does get more snow. On average, Anchorage receives 16 inches (406 mm) of precipitation a year, with around 75 inches (1,905 mm) of snow. The northern coast of the Gulf of Alaska receives up to 150 inches (3,800 mm) of precipitation annually. Across western sections of the state, the northern side of the Seward Peninsula is a desert with less than 10 inches (250 mm) of precipitation annually, while some locations between Dillingham and Bethel average around 100 inches (2,540 mm) of precipitation. Inland, often less than 10 inches (250 mm) falls a year, but what precipitation falls during the winter tends to stay throughout the season. La Niña events lead to drier than normal conditions, while El Niño events do not have a correlation towards dry or wet conditions. Precipitation increases by 10 to 40 percent when the Pacific decadal oscillation is positive.
West
From September through May, extratropical cyclones from the Pacific Ocean move inland into the region due to a southward migration of the jet stream during the cold season. This shift in the jet stream brings much of the annual precipitation to the region, and also brings the potential for heavy rain events. The West Coast occasionally experiences ocean-effect showers, usually in the form of rain at lower elevations south of the mouth of the Columbia River. These occur whenever an Arctic air mass from western Canada is drawn westward out over the Pacific Ocean, typically by way of the Fraser Valley, returning shoreward around a center of low pressure. Strong onshore flow is brought into the mountain ranges of the west, focusing significant precipitation into the Rocky Mountains, with rain shadows occurring in the Harney Basin, Great Basin, the central valley of California, and the lower Colorado River valley. In general, rainfall amounts are lower on the southern portions of the West coast. The biggest recipients of the precipitation are the coastal ranges such as the Olympic Mountains, the Cascades, and the Sierra Nevada range. Lesser amounts fall upon the Continental Divide. Cold-season precipitation into this region is the main supply of water to area rivers, such as the Colorado River and Rio Grande, and also acts as the main source of water to people living in this portion of the United States. During El Niño events, increased precipitation is expected in California due to a more southerly, zonal, storm track. California also enters a wet pattern when thunderstorm activity within the tropics associated with the Madden–Julian oscillation nears 150E longitude. During La Niña, increased precipitation is diverted into the Pacific Northwest due to a more northerly storm track.
Lake-effect snow off Great Salt Lake
The southern and southeastern sides of the Great Salt Lake receive significant lake-effect snow. Since the Great Salt Lake never freezes, the lake-effect can affect the weather along the Wasatch Front year round. The lake-effect largely contributes to the 55 inches (140 cm) to 80 inches (200 cm) annual snowfall amounts recorded south and east of the lake, with average snowfall amounts exceeding 600 inches (1,500 cm) in the Wasatch Mountains. The snow, which is often very light and dry due to the desert climate, is referred to as "The Greatest Snow on Earth" in the mountains. Lake-effect snow contributes to approximately 6-8 snowfalls per year in Salt Lake City, with approximately 10% of the city's precipitation being contributed by the phenomenon.
North American Monsoon
The North American Monsoon (NAM) occurs from late June or early July into September, originating over Mexico and spreading into the southwest United States by mid-July. This allows the wet season to start in the Southwest during the summer rather than early fall as seen across the remainder of the West. Within the United States, it affects Arizona, New Mexico, Nevada, Utah, Colorado, West Texas, and California. The North American monsoon is known to many as the Summer, Southwest, Mexican or Arizona monsoon. It is also sometimes called the Desert Monsoon as a large part of the affected area is desert.
When precipitable water values near 1.32 inches (34 mm), brief but often torrential thunderstorms with hurricane-force winds and hail can occur, especially over mountainous terrain. This activity is occasionally enhanced by the passage of retrograding (westward-moving) upper cyclones moving under the subtropical ridge and by the entrainment of the remnants of tropical storms. Tropical cyclones from the eastern Pacific contribute to the moisture within the monsoon system and bring up to 20 percent of the average annual rainfall to southern California. Flash flooding is a serious danger during the monsoon season: dry washes can become raging rivers in an instant, even when no storms are visible, as a storm can cause a flash flood tens of miles away. Lightning strikes are also a significant danger. Because it is dangerous to be caught in the open when these storms suddenly appear, many golf courses in Arizona have thunderstorm warning systems.
As much as 45% of the annual rainfall across New Mexico occurs during the summer monsoon. Many desert plants are adapted to take advantage of this brief wet season. Because of the monsoons, the Sonoran and Mojave are considered relatively "wet" when ranked among other deserts such as the Sahara. Monsoons play a vital role in managing wildfire threat by providing moisture at higher elevations and feeding desert streams. Heavy monsoon rain can lead to excess winter plant growth, which in turn becomes a summer wildfire risk; a lack of monsoon rain can hamper summer seeding, reducing excess winter plant growth but worsening drought.
Great Plains
Downslope winds off the Rocky Mountains can aid in forming the dry line. Major drought episodes in the midwestern United States are associated with an amplification of the upper tropospheric subtropical (or monsoon) ridge across the West and Plains, along with a weakening of the western edge of the "Bermuda high". During the summer, a southerly low-level jet draws moisture from the Gulf of Mexico. Additional moisture comes from more local sources, especially transpiring vegetation. Maximum precipitation generally occurs in late spring and early summer, with minimum precipitation in winter. During La Niña events, the storm track shifts far enough northward to bring wetter than normal conditions (in the form of increased snowfall) to the Midwestern states, as well as hot and dry summers.

The convective season for the Plains ranges between May and September. Organized systems of thunderstorms known as mesoscale convective systems develop over the region during this period, with the bulk of the activity occurring between midnight and 6 a.m. local time. The time of maximum precipitation during the day gradually varies from late afternoon near the slopes of the Rockies to early morning near the Ohio River valley, in part reflecting the west-to-east propagation of mesoscale convective systems. Mesoscale convective systems bring 30 to 70 percent of the annual warm-season rainfall to the Plains. An especially long-lived and well-organized type of mesoscale convective system called a mesoscale convective complex produces on average 8% to 18% of the annual warm-season rainfall across the Plains and Midwest. Squall lines account for 30% of the large thunderstorm complexes which move through the region.
Gulf Coast and lower Atlantic Coast south of New England
In general, northern and western portions of this region have a winter/spring maximum in precipitation with late summer/early fall being drier, while southern and eastern portions have a summer maximum and winter/early spring minimum in precipitation.
Most locations on the East Coast from Boston northward show a slight winter maximum as winter storms drop heavy precipitation. South of Boston, convective storms are common in the hot summer months and seasonal rainfall shows a slight summer maximum (though not at all stations). As one moves from Virginia Beach southward, summer becomes the wettest season, as convective thunderstorms created in the hot and moist tropical air mass drop brief but intense precipitation. In winter these areas still see precipitation, as low pressure systems moving across the southern United States often tap moisture from the Gulf of Mexico and drop cold season precipitation from eastern Texas to the New York area.
On the Florida peninsula, a strong monsoon becomes dominant, with dry winters and heavy summer rainfall. In winter the strong subtropical ridge creates stable air over Florida, with little convection and few fronts. Along the Gulf Coast and the south Atlantic states, decaying tropical systems add to the summer peak in rainfall.
Cold season
The subtropical jet stream brings in upper level moisture from the Pacific Ocean during the cold season. Ahead of storm systems, significant moisture becomes drawn in from the Gulf of Mexico, which increases moisture within the atmospheric column and leads to precipitation ahead of extratropical cyclones. During the El Niño portion of ENSO, increased precipitation falls along the Gulf coast and Southeast due to a stronger than normal, and more southerly, polar jet stream. In the area around Memphis, Tennessee and across the state of Mississippi, there are two rainfall maxima in the winter and spring. Across Georgia and South Carolina, the first of the annual precipitation maxima occurs in late winter, during February or March. Alabama has an annual rainfall maximum in winter or spring and a dry summer.
Warm season
During the summer, the subtropical ridge in the Atlantic Ocean strengthens, bringing in increasingly humid air from the warm Atlantic, Caribbean, and Gulf of Mexico. Once precipitable water values exceed 1.25 inches (32 mm), afternoon and evening thunderstorms break out at the western periphery of the subtropical ridge across the Southeast on a daily basis. Summer is the time of the second rainfall maximum during the year across Georgia, and the time of the main rainfall maximum in Florida. During the late summer and fall, tropical cyclones move into the region from the Atlantic and Gulf of Mexico, supplying portions of the area with one-quarter of their annual rainfall, on average. Fall is the time of the rainfall minimum across Louisiana. Sometimes, Gulf moisture sneaks up the Front Range of the Rockies as far north as the northern High Plains, bringing higher dewpoint air into states such as Wyoming and Montana.
Great Lakes
Overall, late spring and early summer is the wettest time of year for the western portion of the region, with a winter minimum in precipitation. This is because warm, moist, and unstable air moves along the jet stream, which is centered over the region in summer and brings precipitation with the westerlies. In contrast, eastern portions of the region have two precipitation maxima, one during spring and another in November, while July and August are the driest months. The eastern area lies farther from the unstable air of the central U.S. and has more moderating influences on its climate: because storms and winds generally move west to east, winds blowing off the Great Lakes during the summer keep the area more stable, making thunderstorms less common.
Cold season
Extratropical cyclones can bring moderate to heavy snowfall during the cold season. On the backside of these systems, particularly those moving through the eastern United States, lake effect snowfall is possible. Low-level cold air sweeping in from Canada during winter combines with the relatively warmer, unfrozen lakes to produce dramatic lake-effect snow on the eastern and southern shores of the Great Lakes. Lake-effect precipitation produces significant differences in snowfall around the Great Lakes, sometimes over small distances. Lake effect snowfall accounts for 30 to 60 percent of the annual snowfall near the coasts of the Great Lakes. Lake Erie has the distinction of being the only Great Lake capable of completely freezing over during the winter, due to its relative shallowness. Once frozen, the resulting ice cover alleviates lake-effect snow downwind of the lake. The influence of the Great Lakes allows the region to lie within a humid continental climate regime, although some scientists have argued that the eastern third of the region more closely resembles an oceanic climate.
Warm season
Weather systems in the westerlies that cause precipitation move along the jet stream, which migrates north into the region by summer. This also increases the likelihood for severe weather to develop due to stronger upper-level divergence in its vicinity. Mesoscale convective complexes move into the region from the Plains from late April through mid-July, with June the peak month for the western portions of the Great Lakes. These systems contribute about 2% of the annual precipitation for the region. Also, remnants of tropical cyclones occasionally move northward into the region, though their overall contribution to precipitation across the region is minimal. From the spring through the summer, areas near the shores of the relatively cooler Great Lakes develop lake breezes, which lower rainfall amounts and increase sunshine near the immediate coastline. The eastern Great Lakes are significantly drier during the summer.
Northeast
Average precipitation across the region shows maxima along the coastal plain and along the mountains of the Appalachians. Between 28 inches (710 mm) and 62 inches (1,600 mm) of precipitation falls annually across the area. Seasonally, there are slight changes to precipitation distribution through the year. For example, Burlington, Vermont has a summer maximum and a winter minimum. In contrast, Portland, Maine has a fall and winter maximum, with a summer minimum in precipitation. Temporally, a maximum in precipitation is seen around three peak times: 3 a.m., 10 a.m., and 6 p.m. During the summer, the 6 p.m. peak is most pronounced.
Cold season
Coastal extratropical cyclones, known as nor'easters, bring the bulk of the wintry precipitation to the region during the cold season as they track parallel to the coastline, forming along the natural temperature gradient of the Gulf Stream before moving up the coastline. The Appalachian Mountains largely shield New York City and Philadelphia from picking up any lake-effect snow, though ocean-effect snows are possible near Cape Cod. The Finger Lakes of New York are long enough for lake-effect precipitation. Lake-effect snow from the Finger Lakes occurs in upstate New York until those lakes freeze over. Bay-effect snows fall downwind of Delaware Bay, Chesapeake Bay, and Massachusetts Bay when the basic criteria are met. Ocean-effect snows are possible downwind of the Gulf Stream across the Southeast.
Warm season
During the summer and early fall, mesoscale convective systems can move into the area from Canada and the Great Lakes. Tropical cyclones and their remnants occasionally move into the region from the south and southwest. Recently, the region has experienced a couple of heavy rainfall events that exceeded the 50-year return period, during October 1996 and October 1998, which suggests an increase in heavy rainfall along the coast.
Pacific islands
Hawaii
Snow, although not usually associated with the tropics, falls at higher elevations on the Big Island, on Mauna Loa as well as Mauna Kea, which reaches an altitude of 13,796 feet (4,205 m), in some winter months. Snow only rarely falls on Maui's Haleakala. Mount Waiʻaleʻale (Waiʻaleʻale), on the island of Kauai, is notable for its extreme rainfall, as it has the second highest average annual rainfall on Earth, with 460 inches (12,000 mm). Storm systems affect the state with heavy rains between October and March. Showers are common across the island chain, but thunderstorms are relatively rare. Local climates vary considerably on each island due to their topography, divisible into windward (Koʻolau) and leeward (Kona) regions based upon location relative to the higher mountains. The Kona coast is the only area in Hawaii with a summer precipitation maximum. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover. In the late winter and spring during El Niño events, drier than average conditions can be expected in Hawaii.
Northern Marianas
The islands have a tropical marine climate moderated by seasonal northeast trade winds. There is a dry season which stretches from December to June, and a rainy season from July to November. Saipan's average annual precipitation is 82.36 inches (2,092 mm), with 67 percent falling during the rainy season. Typhoons frequent the island chain, which can lead to excessive rainfall.
Guam
Guam's climate is moderated by east to northeast trade winds through the year. The average annual rainfall for the island is 86 inches (2,200 mm). There is a distinct dry season from January to June, and a rainy season from July to December. Typhoons frequent the island, which can lead to excessive rainfall. During El Niño years, dry season precipitation averages below normal. However, the threat of a tropical cyclone is over triple what is normal during El Niño years, so extreme shorter duration rainfall events are possible.
American Samoa
American Samoa's climate regime is dominated by southeast trade winds. The island dependency is wet, with annual rainfall averaging near 120 inches (3,000 mm) at the airport, with amounts closer to 200 inches (5,100 mm) in other areas. There is a distinct rainy season when tropical cyclones occasionally visit between November and April. The dry season lasts from May to October. During El Niño events, precipitation averages about 10 percent above normal, while La Niña events lead to precipitation amounts which average close to 10 percent below normal. Pago Pago harbor in American Samoa has the highest annual rainfall of any harbor in the world. This is due to the nearby Rainmaker Mountain.
Atlantic islands
Puerto Rico
There is a pronounced rainy season from April to November across the commonwealth, encompassing the annual hurricane season. Due to the Commonwealth's topography, rainfall varies greatly across the archipelago. Pico del Este averages 171.09 inches (4,346 mm) of rainfall yearly while Magueyes Island averages only 29.32 inches (745 mm) a year. Despite known changes in tropical cyclone activity due to changes in the El Niño/Southern Oscillation (ENSO), there is no known relationship between rainfall in Puerto Rico and the ENSO cycle. However, when values of the North Atlantic oscillation are high during the winter, precipitation is lower than average for Puerto Rico. There have not been any documented cases of snow falling within Puerto Rico, though occasionally it is brought in from elsewhere as a publicity stunt.
United States Virgin Islands
The climate of the United States Virgin Islands has sustained easterly trade winds through the year. There is a rainy season which lasts from September to November, when hurricanes are more prone to visit the island chain. The average rainfall through the island chain ranges from 51.55 inches (1,309 mm) at Annaly to 37.79 inches (960 mm) at East Hill.
Changes due to global warming
Increasing temperatures tend to increase evaporation, which leads to more precipitation. As average global temperatures have risen, average global precipitation has also increased. Precipitation has generally increased over land north of 30°N from 1900 to 2005, but declined over the tropics since the 1970s. Eastern portions of North America have become wetter. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts—especially in the tropics and subtropics. Over the contiguous United States, total annual precipitation increased at an average rate of 6.1 percent per century since 1900, with the greatest increases within the East North Central climate region (11.6 percent per century) and the South (11.1 percent). Hawaii was the only region to show a decrease (−9.25 percent). Because of this excess precipitation, crop losses are expected to increase by US$3 billion (2002 dollars) annually over the next 30 years.
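As a rough illustration of how a trend figure like "6.1 percent per century" can be derived, the sketch below fits an ordinary least-squares line to a hypothetical series of annual precipitation totals (the values are invented for illustration, not observed NOAA data) and expresses the slope as a percent of the period mean per century:

```python
# Hypothetical decadal snapshots of annual precipitation (inches).
# Invented values for illustration only -- not observed data.
years = list(range(1900, 2001, 10))
precip = [29.0, 29.2, 29.1, 29.5, 29.4, 29.8, 29.7, 30.1, 30.3, 30.4, 30.7]

n = len(years)
mean_y = sum(years) / n
mean_p = sum(precip) / n

# Ordinary least-squares slope: change in precipitation (inches) per year.
slope = sum((y - mean_y) * (p - mean_p) for y, p in zip(years, precip)) \
        / sum((y - mean_y) ** 2 for y in years)

# Express the trend as a percent of the period mean per century,
# the same units used for the national trend figures above.
percent_per_century = (slope * 100) / mean_p * 100
print(f"trend: {percent_per_century:.1f} percent per century")
```

The same normalization (slope scaled by the long-term mean) is what allows trends in wet and dry regions to be compared on a single percentage scale.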
See also
Climate of the United States
Drought in the United States
Dust Bowl
Floods in the United States
List of wettest tropical cyclones in the United States
Susan van den Heever, atmospheric scientist and professor
References
External links
Last 24 hours of rainfall over the lower 48 - National Weather Service rainfall network
Rainfall forecasts for the lower 48
Current map of forecast precipitation over the United States during the next three hours. |
north pole | The North Pole, also known as the Geographic North Pole, Terrestrial North Pole or 90th Parallel North, is the point in the Northern Hemisphere where the Earth's axis of rotation meets its surface. It is called the True North Pole to distinguish from the Magnetic North Pole.
The North Pole is by definition the northernmost point on the Earth, lying antipodally to the South Pole. It defines geodetic latitude 90° North, as well as the direction of true north. At the North Pole all directions point south; all lines of longitude converge there, so its longitude can be defined as any degree value. No time zone has been assigned to the North Pole, so any time can be used as the local time. Along tight latitude circles, counterclockwise is east and clockwise is west. The North Pole is at the center of the Northern Hemisphere. The nearest land is usually said to be Kaffeklubben Island, off the northern coast of Greenland about 700 km (430 mi) away, though some perhaps semi-permanent gravel banks lie slightly closer. The nearest permanently inhabited place is Alert on Ellesmere Island, Canada, which is located 817 km (508 mi) from the Pole.
While the South Pole lies on a continental land mass, the North Pole is located in the middle of the Arctic Ocean amid waters that are almost permanently covered with constantly shifting sea ice. The sea depth at the North Pole has been measured at 4,261 m (13,980 ft) by the Russian Mir submersible in 2007 and at 4,087 m (13,409 ft) by USS Nautilus in 1958. This makes it impractical to construct a permanent station at the North Pole (unlike the South Pole). However, the Soviet Union, and later Russia, constructed a number of manned drifting stations on a generally annual basis since 1937, some of which have passed over or very close to the Pole. Since 2002, a group of Russians have also annually established a private base, Barneo, close to the Pole. This operates for a few weeks during early spring. Studies in the 2000s predicted that the North Pole may become seasonally ice-free because of Arctic ice shrinkage, with timescales varying from 2016 to the late 21st century or later.
Attempts to reach the North Pole began in the late 19th century, with the record for "Farthest North" being surpassed on numerous occasions. The first undisputed expedition to reach the North Pole was that of the airship Norge, which overflew the area in 1926 with 16 men on board, including expedition leader Roald Amundsen. Three prior expeditions – led by Frederick Cook (1908, land), Robert Peary (1909, land) and Richard E. Byrd (1926, aerial) – were once also accepted as having reached the Pole. However, in each case later analysis of expedition data has cast doubt upon the accuracy of their claims. The first confirmed overland expedition to reach the North Pole was in 1968 by Ralph Plaisted, Walt Pederson, Gerry Pitzl and Jean-Luc Bombardier, using snowmobiles and with air support.
Precise definition
The Earth's axis of rotation – and hence the position of the North Pole – was commonly believed to be fixed (relative to the surface of the Earth) until, in the 18th century, the mathematician Leonhard Euler predicted that the axis might "wobble" slightly. Around the beginning of the 20th century astronomers noticed a small apparent "variation of latitude", as determined for a fixed point on Earth from the observation of stars. Part of this variation could be attributed to a wandering of the Pole across the Earth's surface, by a range of a few metres. The wandering has several periodic components and an irregular component. The component with a period of about 435 days is identified with the free wobble predicted by Euler and is now called the Chandler wobble after its discoverer. The exact point of intersection of the Earth's axis and the Earth's surface, at any given moment, is called the "instantaneous pole", but because of the "wobble" this cannot be used as a definition of a fixed North Pole (or South Pole) when metre-scale precision is required.
It is desirable to tie the system of Earth coordinates (latitude, longitude, and elevations or orography) to fixed landforms. However, given plate tectonics and isostasy, there is no system in which all geographic features are fixed. Yet the International Earth Rotation and Reference Systems Service and the International Astronomical Union have defined a framework called the International Terrestrial Reference System.
Exploration
Pre-1900
As early as the 16th century, many prominent people correctly believed that the North Pole was in a sea, which in the 19th century was called the Polynya or Open Polar Sea. It was therefore hoped that passage could be found through ice floes at favorable times of the year. Several expeditions set out to find the way, generally with whaling ships, already commonly used in the cold northern latitudes.
One of the earliest expeditions to set out with the explicit intention of reaching the North Pole was that of British naval officer William Edward Parry, who in 1827 reached latitude 82°45′ North. In 1871, the Polaris expedition, a US attempt on the Pole led by Charles Francis Hall, ended in disaster. Another British Royal Navy attempt to get to the pole, part of the British Arctic Expedition, by Commander Albert H. Markham reached a then-record 83°20'26" North in May 1876 before turning back. An 1879–1881 expedition commanded by US naval officer George W. De Long ended tragically when their ship, the USS Jeannette, was crushed by ice. Over half the crew, including De Long, were lost.
In April 1895, the Norwegian explorers Fridtjof Nansen and Hjalmar Johansen struck out for the Pole on skis after leaving Nansen's icebound ship Fram. The pair reached latitude 86°14′ North before they abandoned the attempt and turned southwards, eventually reaching Franz Josef Land.
In 1897, Swedish engineer Salomon August Andrée and two companions tried to reach the North Pole in the hydrogen balloon Örnen ("Eagle"), but came down 300 km (190 mi) north of Kvitøya, the northeasternmost part of the Svalbard archipelago. They trekked to Kvitøya but died there three months after their crash. In 1930 the remains of this expedition were found by the Norwegian Bratvaag Expedition.
The Italian explorer Luigi Amedeo, Duke of the Abruzzi and Captain Umberto Cagni of the Italian Royal Navy (Regia Marina) sailed the converted whaler Stella Polare ("Pole Star") from Norway in 1899. On 11 March 1900, Cagni led a party over the ice and reached latitude 86°34′ on 25 April, setting a new record by beating Nansen's result of 1895 by 35 to 40 km (22 to 25 mi). Cagni barely managed to return to the camp, remaining there until 23 June. On 16 August, the Stella Polare left Rudolf Island heading south and the expedition returned to Norway.
1900–1940
The US explorer Frederick Cook claimed to have reached the North Pole on 21 April 1908 with two Inuit men, Ahwelah and Etukishook, but he was unable to produce convincing proof and his claim is not widely accepted. The conquest of the North Pole was for many years credited to US Navy engineer Robert Peary, who claimed to have reached the Pole on 6 April 1909, accompanied by Matthew Henson and four Inuit men, Ootah, Seeglo, Egingwah, and Ooqueah. However, Peary's claim remains highly disputed and controversial. Those who accompanied Peary on the final stage of the journey were not trained in navigation, and thus could not independently confirm his navigational work, which some claim to have been particularly sloppy as he approached the Pole.
The distances and speeds that Peary claimed to have achieved once the last support party turned back seem incredible to many people, almost three times that which he had accomplished up to that point. Peary's account of a journey to the Pole and back while traveling along the direct line – the only strategy that is consistent with the time constraints that he was facing – is contradicted by Henson's account of tortuous detours to avoid pressure ridges and open leads.
The British explorer Wally Herbert, initially a supporter of Peary, researched Peary's records in 1989 and found that there were significant discrepancies in the explorer's navigational records. He concluded that Peary had not reached the Pole. Support for Peary came again in 2005, however, when British explorer Tom Avery and four companions recreated the outward portion of Peary's journey with replica wooden sleds and Canadian Eskimo Dog teams, reaching the North Pole in 36 days, 22 hours – nearly five hours faster than Peary. However, Avery's fastest 5-day march was 90 nautical miles (170 km), significantly short of the 135 nautical miles (250 km) claimed by Peary. Avery writes on his website that "The admiration and respect which I hold for Robert Peary, Matthew Henson and the four Inuit men who ventured North in 1909, has grown enormously since we set out from Cape Columbia. Having now seen for myself how he travelled across the pack ice, I am more convinced than ever that Peary did indeed discover the North Pole." The first claimed flight over the Pole was made on 9 May 1926 by US naval officer Richard E. Byrd and pilot Floyd Bennett in a Fokker tri-motor aircraft. Although verified at the time by a committee of the National Geographic Society, this claim has since been undermined by the 1996 revelation that Byrd's long-hidden diary's solar sextant data (which the NGS never checked) consistently contradict his June 1926 report's parallel data by over 100 mi (160 km).
The secret report's alleged en-route solar sextant data were so impossibly overprecise that Byrd excised all of these alleged raw solar observations from the version of the report finally sent to geographical societies five months later (while the original version was hidden for 70 years), a realization first published in 2000 by the University of Cambridge after scrupulous refereeing. The first consistent, verified, and scientifically convincing attainment of the Pole was on 12 May 1926, by Norwegian explorer Roald Amundsen and his US sponsor Lincoln Ellsworth from the airship Norge. Norge, though Norwegian-owned, was designed and piloted by the Italian Umberto Nobile. The flight started from Svalbard in Norway, and crossed the Arctic Ocean to Alaska. Nobile, with several scientists and crew from the Norge, overflew the Pole a second time on 24 May 1928, in the airship Italia. The Italia crashed on its return from the Pole, with the loss of half the crew.
Another transpolar flight was accomplished in a Tupolev ANT-25 airplane with a crew of Valery Chkalov, Georgy Baydukov and Alexander Belyakov, who flew over the North Pole on 19 June 1937, during their direct flight from the Soviet Union to the USA without any stopover.
Ice station
In May 1937 the world's first North Pole ice station, North Pole-1, was established by Soviet scientists 20 kilometres (13 mi) from the North Pole, after the first-ever landing of four heavy aircraft and one light aircraft on the ice at the North Pole. The expedition members — oceanographer Pyotr Shirshov, meteorologist Yevgeny Fyodorov, radio operator Ernst Krenkel, and the leader Ivan Papanin — conducted scientific research at the station for the next nine months. By 19 February 1938, when the group was picked up by the icebreakers Taimyr and Murman, their station had drifted 2,850 km (1,770 mi) to the eastern coast of Greenland.
1940–2000
In May 1945 an RAF Lancaster of the Aries expedition became the first Commonwealth aircraft to overfly the North Geographic and North Magnetic Poles. The plane was piloted by David Cecil McKinley of the Royal Air Force. It carried an 11-man crew, with Kenneth C. Maclure of the Royal Canadian Air Force in charge of all scientific observations. In 2006, Maclure was honoured with a spot in Canada's Aviation Hall of Fame. Discounting Peary's disputed claim, the first men to set foot at the North Pole were a Soviet party including geophysicists Mikhail Ostrekin and Pavel Senko, oceanographers Mikhail Somov and Pavel Gordienko, and other scientists and flight crew (24 people in total) of Aleksandr Kuznetsov's Sever-2 expedition (March–May 1948). It was organized by the Chief Directorate of the Northern Sea Route. The party flew on three planes (pilots Ivan Cherevichnyy, Vitaly Maslennikov and Ilya Kotov) from Kotelny Island to the North Pole and landed there at 4:44pm (Moscow Time, UTC+04:00) on 23 April 1948. They established a temporary camp and for the next two days conducted scientific observations. On 26 April the expedition flew back to the continent.
The next year, on 9 May 1949, two other Soviet scientists (Vitali Volovich and Andrei Medvedev) became the first people to parachute onto the North Pole. They jumped from a Douglas C-47 Skytrain, registered CCCP H-369. On 3 May 1952, U.S. Air Force Lieutenant Colonel Joseph O. Fletcher and Lieutenant William Pershing Benedict, along with scientist Albert P. Crary, landed a modified Douglas C-47 Skytrain at the North Pole. Some Western sources considered this to be the first landing at the Pole until the Soviet landings became widely known.
The United States Navy submarine USS Nautilus (SSN-571) crossed the North Pole on 3 August 1958. On 17 March 1959 USS Skate (SSN-578) surfaced at the Pole, breaking through the ice above it, becoming the first naval vessel to do so. The first confirmed surface conquest of the North Pole was accomplished by Ralph Plaisted, Walt Pederson, Gerry Pitzl and Jean-Luc Bombardier, who traveled over the ice by snowmobile and arrived on 19 April 1968. The United States Air Force independently confirmed their position.
On 6 April 1969 Wally Herbert and companions Allan Gill, Roy Koerner and Kenneth Hedges of the British Trans-Arctic Expedition became the first men to reach the North Pole on foot (albeit with the aid of dog teams and airdrops). They continued on to complete the first surface crossing of the Arctic Ocean – and by its longest axis, Barrow, Alaska, to Svalbard – a feat that has never been repeated. Because of suggestions (later proven false) of Plaisted's use of air transport, some sources classify Herbert's expedition as the first confirmed to reach the North Pole over the ice surface by any means. In the 1980s Plaisted's pilots Weldy Phipps and Ken Lee signed affidavits asserting that no such airlift was provided. It is also said that Herbert was the first person to reach the pole of inaccessibility.
On 17 August 1977 the Soviet nuclear-powered icebreaker Arktika completed the first surface vessel journey to the North Pole.
In 1982 Ranulph Fiennes and Charles R. Burton became the first people to cross the Arctic Ocean in a single season. They departed from Cape Crozier, Ellesmere Island, on 17 February 1982 and arrived at the geographic North Pole on 10 April 1982. They travelled on foot and snowmobile. From the Pole, they travelled towards Svalbard but, due to the unstable nature of the ice, ended their crossing at the ice edge after drifting south on an ice floe for 99 days. They were eventually able to walk to their expedition ship MV Benjamin Bowring and boarded it on 4 August 1982 at position 80°31′N 00°59′W. As a result of this journey, which formed a section of the three-year Transglobe Expedition 1979–1982, Fiennes and Burton became the first people to complete a circumnavigation of the world via both North and South Poles, by surface travel alone. This achievement remains unchallenged to this day. The expedition crew included a Jack Russell Terrier named Bothie who became the first dog to visit both poles. In 1985 Sir Edmund Hillary (the first man to stand on the summit of Mount Everest) and Neil Armstrong (the first man to stand on the moon) landed at the North Pole in a small twin-engined ski plane. Hillary thus became the first man to stand at both poles and on the summit of Everest.
In 1986 Will Steger, with seven teammates, became the first to be confirmed as reaching the Pole by dogsled and without resupply.
USS Gurnard (SSN-662) operated in the Arctic Ocean under the polar ice cap from September to November 1984 in company with one of her sister ships, the attack submarine USS Pintado (SSN-672). On 12 November 1984 Gurnard and Pintado became the third pair of submarines to surface together at the North Pole. In March 1990, Gurnard deployed to the Arctic region during exercise Ice Ex '90 and completed only the fourth winter submerged transit of the Bering and Chukchi Seas. Gurnard surfaced at the North Pole on 18 April, in the company of the USS Seahorse (SSN-669). On 6 May 1986 USS Archerfish (SSN 678), USS Ray (SSN 653) and USS Hawkbill (SSN-666) surfaced at the North Pole, the first tri-submarine surfacing at the North Pole.
On 21 April 1987 Shinji Kazama of Japan became the first person to reach the North Pole on a motorcycle. On 18 May 1987 USS Billfish (SSN 676), USS Sea Devil (SSN 664) and HMS Superb (S 109) surfaced at the North Pole, the first international surfacing at the North Pole.
In 1988 a team of 13 (9 Soviets, 4 Canadians) skied across the Arctic from Siberia to northern Canada. One of the Canadians, Richard Weber, became the first person to reach the Pole from both sides of the Arctic Ocean.
On April 16, 1990, a German-Swiss expedition led by a team of the University of Giessen reached the Geographic North Pole for studies on pollution of pack ice, snow and air. Samples taken were analyzed in cooperation with the Geological Survey of Canada and the Alfred Wegener Institute for Polar and Marine Research. Further stops for sample collections were made on multi-year sea ice at 86°N, at Cape Columbia and at Ward Hunt Island. On 4 May 1990 Børge Ousland and Erling Kagge became the first explorers ever to reach the North Pole unsupported, after a 58-day ski trek from Ellesmere Island in Canada, a distance of 800 km. On 7 September 1991 the German research vessel Polarstern and the Swedish icebreaker Oden reached the North Pole as the first conventionally powered vessels to do so. Both scientific parties and crew took oceanographic and geological samples and held a common tug of war and a football game on an ice floe. Polarstern again reached the pole exactly 10 years later, with the Healy.
In 1998, 1999, and 2000, Lada Niva Marshs (special very large wheeled versions made by BRONTO, Lada/Vaz's experimental product division) were driven to the North Pole. The 1998 expedition was dropped by parachute and completed the track to the North Pole. The 2000 expedition departed from a Russian research base around 114 km from the Pole and claimed an average speed of 15–20 km/h in an average temperature of −30 °C.
21st century
Commercial airliner flights on the polar routes may pass within viewing distance of the North Pole. For example, a flight from Chicago to Beijing may come as close as latitude 89° N, though because of prevailing winds return journeys go over the Bering Strait. In recent years journeys to the North Pole by air (landing by helicopter or on a runway prepared on the ice) or by icebreaker have become relatively routine, and are even available to small groups of tourists through adventure holiday companies. Parachute jumps have frequently been made onto the North Pole in recent years. The temporary seasonal Russian camp of Barneo has been established by air a short distance from the Pole annually since 2002, and caters for scientific researchers as well as tourist parties. Trips from the camp to the Pole itself may be arranged overland or by helicopter.
The first attempt at underwater exploration of the North Pole was made on 22 April 1998 by Russian firefighter and diver Andrei Rozhkov with the support of the Diving Club of Moscow State University, but ended in his death. The next dive at the North Pole was organized the following year by the same diving club, and succeeded on 24 April 1999. The divers were Michael Wolff (Austria), Brett Cormick (UK), and Bob Wass (USA).
In 2005 the United States Navy submarine USS Charlotte (SSN-766) surfaced through 155 cm (61 in) of ice at the North Pole and spent 18 hours there.
In July 2007 British endurance swimmer Lewis Gordon Pugh completed a 1 km (0.62 mi) swim at the North Pole. His feat, undertaken to highlight the effects of global warming, took place in clear water that had opened up between the ice floes. His later attempt to paddle a kayak to the North Pole in late 2008, following the erroneous prediction of clear water to the Pole, was stymied when his expedition found itself stuck in thick ice after only three days. The expedition was then abandoned.
By September 2007 the North Pole had been visited 66 times by different surface ships: 54 times by Soviet and Russian icebreakers, 4 times by Swedish Oden, 3 times by German Polarstern, 3 times by USCGC Healy and USCGC Polar Sea, and once by CCGS Louis S. St-Laurent and by Swedish Vidar Viking.
2007 descent to the North Pole seabed
On 2 August 2007 a Russian scientific expedition, Arktika 2007, made the first ever manned descent to the ocean floor at the North Pole, to a depth of 4.3 km (2.7 mi), as part of the research programme in support of Russia's 2001 extended continental shelf claim to a large swathe of the Arctic Ocean floor. The descent took place in two MIR submersibles and was led by Soviet and Russian polar explorer Artur Chilingarov. In a symbolic act of visitation, the Russian flag was placed on the ocean floor exactly at the Pole.
The expedition was the latest in a series of efforts intended to give Russia a dominant influence in the Arctic, according to The New York Times. The warming Arctic climate and summer shrinkage of the iced area have attracted the attention of many countries, such as China and the United States, toward the top of the world, where resources and shipping routes may soon be exploitable.
MLAE 2009 Expedition
In 2009 the Russian Marine Live-Ice Automobile Expedition (MLAE-2009) with Vasily Elagin as a leader and a team of Afanasy Makovnev, Vladimir Obikhod, Alexey Shkrabkin, Sergey Larin, Alexey Ushakov and Nikolay Nikulshin reached the North Pole on two custom-built 6 x 6 low-pressure-tire ATVs. The vehicles, Yemelya-1 and Yemelya-2, were designed by Vasily Elagin, a Russian mountain climber, explorer and engineer. They reached the North Pole on 26 April 2009, 17:30 (Moscow time). The expedition was partly supported by Russian State Aviation. The Russian Book of Records recognized it as the first successful vehicle trip from land to the Geographical North Pole.
MLAE 2013 Expedition
On 1 March 2013 the Russian Marine Live-Ice Automobile Expedition (MLAE 2013), with Vasily Elagin as leader and a team of Afanasy Makovnev, Vladimir Obikhod, Alexey Shkrabkin, Andrey Vankov, Sergey Isayev and Nikolay Kozlov on two custom-built 6 x 6 low-pressure-tire ATVs—Yemelya-3 and Yemelya-4—started from Golomyanny Island (the Severnaya Zemlya Archipelago) to the North Pole across drifting ice of the Arctic Ocean. The vehicles reached the Pole on 6 April and then continued to the Canadian coast. The coast was reached on 30 April 2013 (83°08N, 075°59W Ward Hunt Island), and on 5 May 2013 the expedition finished in Resolute Bay, NU. The journey between the Russian borderland (Machtovyi Island of the Severnaya Zemlya Archipelago, 80°15N, 097°27E) and the Canadian coast (Ward Hunt Island, 83°08N, 075°59W) took 55 days; it was ~2300 km across drifting ice and about 4000 km in total. The expedition was entirely self-supported and used no external supplies. The expedition was supported by the Russian Geographical Society.
Day and night
The sun at the North Pole is continuously above the horizon during the summer and continuously below the horizon during the winter. Sunrise is just before the March equinox (around 20 March); the Sun then takes three months to reach its highest point of about 23.4° elevation at the summer solstice (around 21 June), after which it begins to sink, reaching sunset just after the September equinox (around 23 September). When the Sun is visible in the polar sky, it appears to move in a horizontal circle above the horizon. This circle gradually rises from near the horizon just after the vernal equinox to its maximum elevation (in degrees) above the horizon at the summer solstice, then sinks back toward the horizon before dropping below it at the autumnal equinox. Hence the North and South Poles experience the slowest rates of sunrise and sunset on Earth.
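The geometry above has a compact consequence: at the pole, the Sun's elevation equals the solar declination. A minimal sketch of this, using a simple sinusoidal approximation of the declination (the function name and the day-80 equinox convention are illustrative assumptions, not from the source):

```python
import math

def solar_declination_deg(day_of_year):
    """Rough sinusoidal approximation of solar declination in degrees.

    Day 80 (~20 March) is taken as the vernal equinox, where the
    declination crosses zero; the amplitude is Earth's axial tilt.
    """
    return 23.44 * math.sin(math.radians(360.0 * (day_of_year - 80) / 365.25))

# At the North Pole the Sun's elevation equals the declination, so the
# summer-solstice value (~day 172) approximates the ~23.4 deg maximum
# elevation described above, and the equinox value is ~0 (sunrise/sunset).
print(round(solar_declination_deg(172), 1))  # close to 23.4
print(round(solar_declination_deg(80), 1))   # 0.0
```

A more precise ephemeris would add corrections for orbital eccentricity, but this crude model already reproduces the annual elevation cycle the text describes.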
The twilight period that occurs before sunrise and after sunset has three different definitions:
a civil twilight period of about two weeks;
a nautical twilight period of about five weeks; and
an astronomical twilight period of about seven weeks.
These effects are caused by a combination of the Earth's axial tilt and its revolution around the Sun. The direction of the Earth's axial tilt, as well as its angle relative to the plane of the Earth's orbit around the Sun, remains very nearly constant over the course of a year (both change very slowly over long time periods). At northern midsummer the North Pole is facing towards the Sun to its maximum extent. As the year progresses and the Earth moves around the Sun, the North Pole gradually turns away from the Sun until at midwinter it is facing away from the Sun to its maximum extent. A similar sequence is observed at the South Pole, with a six-month time difference.
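Those twilight lengths follow from the same geometry: twilight at the pole lasts as long as the Sun's depression below the horizon (equal to minus the declination) stays within 6°, 12°, or 18°, the standard civil, nautical, and astronomical limits. A rough check, assuming the same sinusoidal declination model as above (day numbering is an illustrative convention):

```python
import math

def declination_deg(day_of_year):
    # Simple sinusoidal approximation; day 80 ~ vernal equinox.
    return 23.44 * math.sin(math.radians(360.0 * (day_of_year - 80) / 365.25))

def polar_twilight_days(depression_limit_deg, sunset_day=266):
    """Days after the autumn sunset (~23 September, day 266) until the Sun,
    whose elevation at the pole equals its declination, sinks below the
    given depression limit."""
    day = sunset_day
    while declination_deg(day) > -depression_limit_deg:
        day += 1
    return day - sunset_day

for name, limit in (("civil", 6), ("nautical", 12), ("astronomical", 18)):
    print(name, polar_twilight_days(limit), "days")
```

This yields roughly 12, 28, and 48 days, broadly consistent with the approximately two-, five-, and seven-week periods listed above, given the crudeness of the model.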
Time
In most places on Earth, local time is determined by longitude, such that the time of day is more or less synchronised to the position of the Sun in the sky (for example, at midday, the Sun is roughly at its highest). This line of reasoning fails at the North Pole, where the Sun is experienced as rising and setting only once per year, and all lines of longitude, and hence all time zones, converge. There is no permanent human presence at the North Pole and no particular time zone has been assigned. Polar expeditions may use any time zone that is convenient, such as Greenwich Mean Time, or the time zone of the country from which they departed.
Climate, sea ice at North Pole
The North Pole is substantially warmer than the South Pole because it lies at sea level in the middle of an ocean (which acts as a reservoir of heat), rather than at altitude on a continental land mass. Although Greenland is largely covered by an ice cap, its northernmost weather station has a tundra climate (Köppen ET), as July and August mean temperatures peak just above freezing.
Winter temperatures at the northernmost weather station in Greenland can range from about −50 to −13 °C (−58 to 9 °F), averaging around −31 °C (−24 °F), with the North Pole being slightly colder. However, a freak storm caused the temperature to reach 0.7 °C (33.3 °F) for a time at a World Meteorological Organization buoy, located at 87.45°N, on 30 December 2015. It was estimated that the temperature at the North Pole was between −1 and 2 °C (30 and 35 °F) during the storm. Summer temperatures (June, July, and August) average around the freezing point (0 °C (32 °F)). The highest temperature yet recorded is 13 °C (55 °F), much warmer than the South Pole's record high of only −12.3 °C (9.9 °F). A similar spike in temperatures occurred on 15 November 2016, when temperatures hit freezing. In February 2018, a storm so powerful struck that temperatures at Cape Morris Jesup, the world's northernmost weather station, in Greenland, reached 6.1 °C (43.0 °F) and stayed above freezing for 24 straight hours. Meanwhile, the Pole itself was estimated to reach a high of 1.6 °C (34.9 °F), the same temperature recorded at the Hollywood Burbank Airport in Los Angeles at the very same time.
The sea ice at the North Pole is typically around 2 to 3 m (6 ft 7 in to 9 ft 10 in) thick, although ice thickness, its spatial extent, and the fraction of open water within the ice pack can vary rapidly and profoundly in response to weather and climate. Studies have shown that the average ice thickness has decreased in recent years.
It is likely that global warming has contributed to this, but it is not possible to attribute the recent abrupt decrease in thickness entirely to the observed warming in the Arctic. Reports have also predicted that within a few decades the Arctic Ocean will be entirely free of ice in the summer. This may have significant commercial implications; see "Territorial claims", below.
The retreat of the Arctic sea ice will accelerate global warming, as less ice cover reflects less solar radiation, and may have serious climate implications by contributing to Arctic cyclone generation.
Flora and fauna
Polar bears are believed to travel rarely beyond about 82° North, owing to the scarcity of food, though tracks have been seen in the vicinity of the North Pole, and a 2006 expedition reported sighting a polar bear just 1 mi (1.6 km) from the Pole. The ringed seal has also been seen at the Pole, and Arctic foxes have been observed less than 60 km (37 mi) away at 89°40′ N.
Birds seen at or very near the Pole include the snow bunting, northern fulmar and black-legged kittiwake, though some bird sightings may be distorted by the tendency of birds to follow ships and expeditions.
Fish have been seen in the waters at the North Pole, but these are probably few in number. A member of the Russian team that descended to the North Pole seabed in August 2007 reported seeing no sea creatures living there. However, it was later reported that a sea anemone had been scooped up from the seabed mud by the Russian team and that video footage from the dive showed unidentified shrimps and amphipods.
Territorial claims to the North Pole and Arctic regions
Currently, under international law, no country owns the North Pole or the region of the Arctic Ocean surrounding it. The five surrounding Arctic countries, Russia, Canada, Norway, Denmark (via Greenland), and the United States (via Alaska), are limited to a 200-nautical-mile (370 km; 230 mi) exclusive economic zone off their coasts, and the area beyond that is administered by the International Seabed Authority.
Upon ratification of the United Nations Convention on the Law of the Sea, a country has 10 years to make claims to an extended continental shelf beyond its 200-mile exclusive economic zone. If validated, such a claim gives the claimant state rights to what may be on or beneath the sea bottom within the claimed zone. Norway (which ratified the convention in 1996), Russia (1997), Canada (2003) and Denmark (2004) have all launched projects to substantiate claims that certain areas of the Arctic continental shelves should be subject to their sole sovereign exploitation.
In 1907 Canada invoked a "sector principle" to claim sovereignty over a sector stretching from its coasts to the North Pole. This claim has not been relinquished, but was not consistently pressed until 2013.
Cultural associations
In some children's Christmas legends and Western folklore, the geographic North Pole is described as the location of Santa Claus' legendary workshop and residence, although depictions have been inconsistent between the geographic and magnetic North Pole. Canada Post has assigned postal code H0H 0H0 to the North Pole (referring to Santa's traditional exclamation of "Ho ho ho!").
This association reflects an age-old esoteric mythology of Hyperborea that posits the North Pole, the otherworldly world-axis, as the abode of God and superhuman beings.
As Henry Corbin has documented, the North Pole plays a key part in the cultural worldview of Sufism and Iranian mysticism. "The Orient sought by the mystic, the Orient that cannot be located on our maps, is in the direction of the north, beyond the north."
In Mandaean cosmology, the North Pole and Polaris are considered auspicious, since they are associated with the World of Light. Mandaeans face north when praying, and temples are also oriented towards the north. Conversely, the south is associated with the World of Darkness.
Owing to its remoteness, the Pole is sometimes identified with a mysterious mountain of ancient Iranian tradition called Mount Qaf (Jabal Qaf), the "farthest point of the earth". According to certain authors, the Jabal Qaf of Muslim cosmology is a version of Rupes Nigra, a mountain whose ascent, like Dante's climbing of the Mountain of Purgatory, represents the pilgrim's progress through spiritual states. In Iranian theosophy, the heavenly Pole, the focal point of the spiritual ascent, acts as a magnet drawing beings to its "palaces ablaze with immaterial matter".
See also
Notes
References
Further reading
External links
Arctic Council
The Northern Forum
North Pole Web Cam
FAQ on the Arctic and the North Pole
Daylight, Darkness and Changing of the Seasons at the North Pole
Video of the Nuclear Icebreaker Yamal visiting the North Pole in 2001
Polar Discovery: North Pole Observatory Expedition
Caspian Sea | The Caspian Sea is the world's largest inland body of water, often described as the world's largest lake or a full-fledged sea. An endorheic basin, it lies between Europe and Asia: east of the Caucasus, west of the broad steppe of Central Asia, south of the fertile plains of Southern Russia in Eastern Europe, and north of the mountainous Iranian Plateau of West Asia. It covers a surface area of 371,000 km2 (143,000 sq mi) (excluding the highly saline lagoon of Garabogazköl to its east), an area approximately equal to that of Japan, with a volume of 78,200 km3 (19,000 cu mi). It has a salinity of approximately 1.2% (12 g/L), about a third of the salinity of average seawater. It is bounded by Kazakhstan to the northeast, Russia to the northwest, Azerbaijan to the southwest, Iran to the south, and Turkmenistan to the southeast.
The sea stretches 1,200 km (750 mi) from north to south, with an average width of 320 km (200 mi). Its gross coverage is 386,400 km2 (149,200 sq mi) and the surface is about 27 m (89 ft) below sea level. Its main freshwater inflow, Europe's longest river, the Volga, enters at the shallow north end. Two deep basins form its central and southern zones. These lead to horizontal differences in temperature, salinity, and ecology. The seabed in the south reaches 1,023 m (3,356 ft) below sea level, the second-lowest natural non-oceanic depression on Earth after Lake Baikal (−1,180 m or −3,870 ft). The ancient inhabitants of its coast perceived the Caspian Sea as an ocean, probably because of its salinity and large size. With a surface area of 371,000 square kilometres (143,000 sq mi), the Caspian Sea is nearly five times as big as Lake Superior (82,000 square kilometres (32,000 sq mi)). The Caspian Sea is home to a wide range of species and is famous for its caviar and oil industries. Pollution from the oil industry and dams on rivers draining into it have harmed its ecology. It is predicted that during the 21st century the depth of the sea will decrease by 9–18 m (30–60 ft) due to global warming and the process of desertification, causing an ecocide.
Etymology
The sea's name stems from Caspi, an ancient people who lived to the southwest of the sea in Transcaucasia. Strabo (died circa AD 24) wrote that "to the country of the Albanians (Caucasus Albania, not to be confused with the country of Albania) belongs also the territory called Caspiane, which was named after the Caspian tribe, as was also the sea; but the tribe has now disappeared". Moreover, the Caspian Gates, part of Iran's Tehran province, may be evidence that such a people migrated to the south. The Iranian city of Qazvin shares the root of its name with this common name for the sea. The traditional and medieval Arabic name for the sea was Bahr ('sea') Khazar, but in recent centuries the common and standard name in Arabic has become بحر قزوين Baḥr Qazvin, the Arabized form of Caspian. In modern Russian, it is known as Russian: Каспи́йское мо́ре, Kaspiyskoye more.
Some Turkic ethnic groups refer to it with the Caspi(an) descriptor: in Kazakh it is called Каспий теңізі, Kaspiy teñizi; Kyrgyz: Каспий деңизи, romanized: Kaspiy deñizi; Uzbek: Kaspiy dengizi. Others refer to it as the Khazar sea: Turkmen: Hazar deňizi; Azerbaijani: Xəzər dənizi; Turkish: Hazar denizi. In all of these the first word refers to the historical Khazar Khaganate, a large empire based to the north of the Caspian Sea between the 7th and 10th centuries.
In Iran, the lake is referred to as the Mazandaran Sea (Persian: دریای مازندران), after the historic Mazandaran Province on its southern shores.
Old Russian sources use the Khvalyn or Khvalis Sea (Хвалынское море / Хвалисское море) after the name of Khwarezmia. Among Greeks and Persians in classical antiquity it was the Hyrcanian ocean.
Renaissance European maps labelled it as the Abbacuch Sea (Oronce Fine's 1531 world map), Mar de Bachu (Ortelius' 1570 map), or Mar de Sala (Mercator's 1569 world map).
It was also sometimes called the Kumyk Sea and Tarki Sea (derived from the name of the Kumyks and their historical capital Tarki).
Basin countries
Border countries
Kazakhstan
Iran
Azerbaijan
Russia
Turkmenistan
Non-border countries
Armenia (all)
Georgia (its east part)
Turkey (extreme north-eastern parts)
Uzbekistan (extreme western parts)
Physical characteristics
Formation
Like the Black Sea, the Caspian Sea in its South Caspian Basin is a remnant of the ancient Paratethys Sea. Its seafloor there is, therefore, standard oceanic basalt and not a continental granite body. It is estimated to be about 30 million years old, and became landlocked in the Late Miocene, about 5.5 million years ago, due to tectonic uplift and a fall in sea level. The Caspian Sea was a comparatively small endorheic lake during the Pliocene, but its surface area increased fivefold around the time of the Pliocene-Pleistocene transition. During warm and dry climatic periods, the landlocked sea almost dried up, depositing evaporitic sediments like halite that were covered by wind-blown deposits and were sealed off as an evaporite sink when cool, wet climates refilled the basin. (Comparable evaporite beds underlie the Mediterranean.) Due to the current inflow of fresh water in the north, the Caspian Sea water is almost fresh in its northern portions, getting more brackish toward the south. It is most saline on the Iranian shore, where the catchment basin contributes little flow. Currently, the mean salinity of the Caspian is one third that of Earth's oceans. The Garabogazköl embayment, which dried up when water flow from the main body of the Caspian was blocked in the 1980s but has since been restored, routinely exceeds oceanic salinity by a factor of 10.
Geography
The Caspian Sea is the largest inland body of water in the world by area and accounts for 40–44% of the total lake waters of the world, and covers an area larger than Germany. The coastlines of the Caspian are shared by Azerbaijan, Iran, Kazakhstan, Russia, and Turkmenistan. The Caspian is divided into three distinct physical regions: the Northern, Middle, and Southern Caspian. The Northern–Middle boundary is the Mangyshlak Threshold, which runs through Chechen Island and Cape Tiub-Karagan. The Middle–Southern boundary is the Apsheron Threshold, a sill of tectonic origin between the Eurasian continent and an oceanic remnant, that runs through Zhiloi Island and Cape Kuuli. The Garabogazköl Bay is the saline eastern inlet of the Caspian, which is part of Turkmenistan and at times has been a lake in its own right due to the isthmus that cuts it off from the Caspian.
Differences between the three regions are dramatic. The Northern Caspian only includes the Caspian shelf, and is very shallow; it accounts for less than 1% of the total water volume, with an average depth of only 5–6 m (16–20 ft). The seabed drops off noticeably towards the Middle Caspian, where the average depth is 190 m (620 ft). The Southern Caspian is the deepest, with oceanic depths of over 1,000 m (3,300 ft), greatly exceeding the depth of other regional seas, such as the Persian Gulf. The Middle and Southern Caspian account for 33% and 66% of the total water volume, respectively. The northern portion of the Caspian Sea typically freezes in the winter, and in the coldest winters ice forms in the south as well.
Over 130 rivers provide inflow to the Caspian, the Volga River being the largest. A second affluent, the Ural River, flows in from the north, and the Kura River from the west. In the past, the Amu Darya (Oxus) of Central Asia in the east often changed course to empty into the Caspian through a now-desiccated riverbed called the Uzboy River, as did the Syr Darya farther north. The Caspian has several small islands, primarily located in the north, with a collective land area of roughly 2,000 km2 (770 sq mi). Adjacent to the North Caspian is the Caspian Depression, a low-lying region 27 m (89 ft) below sea level. The Central Asian steppes stretch across the northeast coast, while the Caucasus mountains hug the western shore. The biomes to both the north and east are characterized by cold, continental deserts. Conversely, the climate to the southwest and south is generally warm, with uneven elevation due to a mix of highlands and mountain ranges; the drastic changes in climate along the Caspian have led to a great deal of biodiversity in the region.
The Caspian Sea has numerous islands near the coasts, but none in the deeper parts of the sea. Ogurja Ada is the largest island. The island is 37 km (23 mi) long, with gazelles roaming freely on it. In the North Caspian, the majority of the islands are small and uninhabited, like the Tyuleniy Archipelago, an Important Bird Area (IBA).
Climate
The climate of the Caspian Sea is variable, with the cold desert climate (BWk), cold semi-arid climate (BSk), and humid continental climate (Dsa, Dfa) being present in the northern portions of the Caspian Sea, while the Mediterranean climate (Csa) and humid subtropical climate (Cfa) are present in the southern portions of the Caspian Sea.
Hydrology
The Caspian has characteristics common to both seas and lakes. It is often listed as the world's largest lake, although it is not freshwater: its 1.2% salinity places it among brackish water bodies.
It contains about 3.5 times as much water, by volume, as all five of North America's Great Lakes combined. The Volga River (about 80% of the inflow) and the Ural River discharge into the Caspian Sea, but it has no natural outflow other than by evaporation. Thus the Caspian ecosystem is a closed basin, with its own sea level history that is independent of the eustatic level of the world's oceans.
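The volume comparison above can be sanity-checked with simple arithmetic. The combined Great Lakes volume used below (~22,671 km³) is a commonly cited figure and an assumption here; the Caspian volume is the one given earlier in this article:

```python
# Caspian volume is from this article; the Great Lakes combined volume
# (~22,671 km^3) is a commonly cited figure and an assumption here.
caspian_volume_km3 = 78_200
great_lakes_volume_km3 = 22_671

ratio = caspian_volume_km3 / great_lakes_volume_km3
print(round(ratio, 2))  # about 3.45, i.e. "about 3.5 times"
```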
The sea level of the Caspian has fallen and risen, often rapidly, many times over the centuries. Some Russian historians claim that a medieval rising of the Caspian, perhaps caused by the Amu Darya changing its inflow to the Caspian from the 13th century to the 16th century, caused the coastal towns of Khazaria, such as Atil, to flood. In 2004, the water level was 28 m (92 ft) below sea level.
Over the centuries, Caspian Sea levels have changed in synchrony with the estimated discharge of the Volga, which in turn depends on rainfall levels in its vast catchment basin. Precipitation is related to variations in the amount of North Atlantic depressions that reach the interior, and they in turn are affected by cycles of the North Atlantic oscillation. Thus levels in the Caspian Sea relate to atmospheric conditions in the North Atlantic, thousands of kilometres to the northwest.
The last short-term sea-level cycle started with a sea-level fall of 3 m (10 ft) from 1929 to 1977, followed by a rise of 3 m (10 ft) from 1977 until 1995. Since then smaller oscillations have taken place.
A study by the Azerbaijan Academy of Sciences estimated that the level of the sea was dropping by more than six centimetres per year due to increased evaporation caused by rising temperatures from climate change.
Environmental degradation
The Volga River, the longest river in Europe, drains 20% of the European land area and is the source of 80% of the Caspian's inflow. Heavy development in its lower reaches has caused numerous unregulated releases of chemical and biological pollutants. The UN Environment Programme warns that the Caspian "suffers from an enormous burden of pollution from oil extraction and refining, offshore oil fields, radioactive wastes from nuclear power plants and huge volumes of untreated sewage and industrial waste introduced mainly by the Volga River".
The magnitude of fossil fuel extraction and transport activity in the Caspian also poses a risk to the environment. The island of Vulf off Baku, for example, has suffered ecological damage as a result of the petrochemical industry; this has significantly decreased the number of species of marine birds in the area. Existing and planned oil and gas pipelines under the sea further increase the potential threat to the environment.
The high concentration of mud volcanoes under the Caspian Sea was thought to be the cause of a fire that broke out 75 kilometres from Baku on July 5, 2021. The state oil company of Azerbaijan, SOCAR, said preliminary information indicated it was a mud volcano which spewed both mud and flammable gas.
It is calculated that during the 21st century the water level of the Caspian Sea will decrease by 9–18 m (30–60 ft) due to accelerated evaporation caused by global warming and the process of desertification, causing an ecocide.
On October 23, 2021, Kazakhstan President Kassym-Jomart Tokayev signed the Protocol for the Protection of the Caspian Sea against Pollution from Land-based Sources in order to ensure better protection for the biodiversity of the Caspian Sea.
Flora and fauna
Flora
The rising level of the Caspian Sea between 1995 and 1996 reduced the number of habitats for rare species of aquatic vegetation. This has been attributed to a general lack of seeding material in newly formed coastal lagoons and water bodies.
Many rare and endemic plant species of Russia are associated with the tidal areas of the Volga delta and riparian forests of the Samur River delta. The shoreline is also a unique refuge for plants adapted to the loose sands of the Central Asian deserts. The principal factors limiting successful establishment of plant species are hydrological imbalances within the surrounding deltas, water pollution, and various land reclamation activities. Changes in the Caspian's water level are an indirect reason plants may fail to establish.
These affect aquatic plants of the Volga Delta, such as Aldrovanda vesiculosa and the native Nelumbo caspica. About 11 plant species are found in the Samur River delta, including the unique liana forests that date back to the Tertiary period.
Since 2019 UNESCO has inscribed the lush Hyrcanian forests of Mazandaran, Iran, as a World Heritage Site under category (ix).
Fauna
The Caspian turtle (Mauremys caspica), although found in neighboring areas, is a wholly freshwater species. The zebra mussel is native to the Caspian and Black Sea basins, but has become an invasive species elsewhere, when introduced. The area has given its name to several species, including the Caspian gull and the Caspian tern. The Caspian seal (Pusa caspica) is the only aquatic mammal endemic to the Caspian Sea, being one of very few seal species that live in inland waters, but it differs from those inhabiting freshwaters because of the hydrological environment of the sea. A century ago the Caspian was home to more than one million seals. Today, fewer than 10% remain.
Archeological studies of Gobustan Rock Art have identified what may be dolphins and porpoises, likely present in the Caspian Sea at least until the Quaternary, or in much more recent periods such as the last glacial period or antiquity. Although the rock art on Kichikdash Mountain is assumed to depict a dolphin, it might instead represent the famous beluga sturgeon, given its size (430 cm in length); fossil records, however, suggest that certain ancestors of modern dolphins and whales, such as Macrokentriodon morani (bottlenose dolphins) and Balaenoptera sibbaldina (blue whales), were presumably larger than their present descendants. From the same artworks, auks such as Brunnich's guillemot could also have been present in the sea, and these petroglyphs suggest a marine inflow between the current Caspian Sea and the Arctic Ocean, North Sea, or Black Sea. This is supported by the existence of extant oceanic species such as lagoon cockles, which were genetically identified as originating in the Caspian/Black Sea region.
The sea's basin (including associated waters such as rivers) has 160 native species and subspecies of fish in more than 60 genera. About 62% of the species and subspecies are endemic, as are 4–6 genera (depending on taxonomic treatment).
The lake proper has 115 natives, including 73 endemics (63.5%). Among the more than 50 genera in the lake proper, 3–4 are endemic: Anatirostrum, Caspiomyzon, Chasar (often included in Ponticola) and Hyrcanogobius. By far the most numerous families in the lake proper are gobies (35 species and subspecies), cyprinids (32) and clupeids (22). Two particularly rich genera are Alosa with 18 endemic species/subspecies and Benthophilus with 16 endemic species. Other examples of endemics are four species of Clupeonella, Gobio volgensis, two Rutilus, three Sabanejewia, Stenodus leucichthys, two Salmo, two Mesogobius and three Neogobius. Most non-endemic natives are either shared with the Black Sea basin or widespread Palearctic species such as crucian carp, Prussian carp, common carp, common bream, common bleak, asp, white bream, sunbleak, common dace, common roach, common rudd, European chub, sichel, tench, European weatherfish, wels catfish, northern pike, burbot, European perch and zander. Almost 30 non-indigenous, introduced fish species have been reported from the Caspian Sea, but only a few have become established.Six sturgeon species, the Russian, bastard, Persian, sterlet, starry and beluga, are native to the Caspian Sea. The last of these is arguably the largest freshwater fish in the world. The sturgeon yield roe (eggs) that are processed into caviar. Overfishing has depleted a number of the historic fisheries. In recent years, overfishing has threatened the sturgeon population to the point that environmentalists advocate banning sturgeon fishing completely until the population recovers. The high price of sturgeon caviar – more than 1,500 Azerbaijani manats (US$880 as of April 2019) per kilo – allows fishermen to afford bribes to ensure the authorities look the other way, making regulations in many locations ineffective. Caviar harvesting further endangers the fish stocks, since it targets reproductive females.
Reptiles native to the region include the spur-thighed tortoise (Testudo graeca buxtoni) and Horsfield's tortoise.
The Asiatic cheetah used to occur in the Trans-Caucasus and Central Asia, but is today restricted to Iran.
The Asiatic lion used to occur in the Trans-Caucasus, Iran, and possibly the southern part of Turkestan.
The Caspian tiger used to occur in northern Iran, the Caucasus and Central Asia.
The endangered Persian leopard is found in Iran, the Caucasus and Central Asia.
History
Geology
The main geologic history locally had two stages. The first is the Miocene, determined by tectonic events that correlate with the closing of the Tethys Sea. The second is the Pleistocene, noted for its glaciation cycles and the full run of the present Volga. During the first stage, the Tethys Sea evolved into the Sarmatian Lake, which comprised the modern Black Sea and south Caspian and was created when the collision of the Arabian peninsula with West Asia pushed up the Kopet Dag and Caucasus Mountains, setting lasting southern and western limits to the basin. This orogenic movement was continuous, while the Caspian was regularly disconnected from the Black Sea. In the late Pontian, a mountain arch rose across the south basin and divided it into the Khachmaz and Lankaran Lakes (or early Balaxani). The period of restriction to the south basin was reversed during the Akchagylian – the lake became more than three times its present size and took again the first of a series of contacts with the Black Sea and with Lake Aral. A recession of Lake Akchagyl completed stage one.
Early settlement nearby
The earliest hominid remains found around the Caspian Sea are from Dmanisi, dating back to around 1.8 Ma, and yielded a number of skeletal remains of Homo erectus or Homo ergaster. Later evidence for human occupation of the region came from a number of caves in Georgia and Azerbaijan, such as the Kudaro and Azykh Caves. There is evidence for Lower Palaeolithic human occupation south of the Caspian from the western Alburz, at the Ganj Par and Darband Cave sites.
Neanderthal remains have also been discovered at a cave in Georgia. Discoveries in the Hotu cave and the adjacent Kamarband cave, near the town of Behshahr, Mazandaran, south of the Caspian in Iran, suggest human habitation of the area as early as 11,000 years ago. Ancient Greeks focused on the civilization on the south shore, calling the sea the (H)yr(c/k)anian Sea (Ancient Greek: Υρκανία θάλαττα; the latter word was then evolving into today's thalassa, late Ancient Greek θάλασσα).
Chinese maximal limit
Later, in the Tang dynasty (618–907), the sea was the western limit of the Chinese Empire.
Fossil fuel
The area is rich in fossil fuels. Oil wells were being dug in the region as early as the 10th century to reach oil "for use in everyday life, both for medicinal purposes and for heating and lighting in homes". By the 16th century, Europeans were aware of the rich oil and gas deposits locally. English traders Thomas Bannister and Jeffrey Duckett described the area around Baku as "a strange thing to behold, for there issueth out of the ground a marvelous quantity of oil, which serveth all the country to burn in their houses. This oil is black and is called nefte. There is also by the town of Baku, another kind of oil which is white and very precious [i.e., petroleum]."

Today, oil and gas platforms abound along the edges of the sea.
Geography, geology and navigation studies
During the rule of Peter I the Great, Fedor I. Soimonov was a pioneering explorer of the sea. He was a hydrographer who charted and greatly expanded knowledge of the sea. He drew a set of four maps and wrote Pilot of the Caspian Sea, the first lengthy report and modern maps. These were published in 1720 by the Russian Academy of Sciences.
Cities
Ancient
Hyrcania, ancient state in the north of Iran
Sari, Mazandaran Province of Iran
Anzali, Gilan Province of Iran
Astara, Gilan Province of Iran
Astarabad, Golestan Province of Iran
Tamisheh, Golestan Province of Iran
Atil, Khazaria
Khazaran
Baku, Azerbaijan
Derbent, Dagestan, Russia
Xacitarxan, modern-day Astrakhan
Modern
Economy
Countries in the Caspian region, particularly Azerbaijan, Kazakhstan and Turkmenistan, have high-value natural-resource-based economies, in which oil and gas compose more than 10 percent of GDP and 40 percent of exports. All the Caspian region economies are highly dependent on this type of mineral wealth. Azerbaijan and Kazakhstan became strategically crucial in world energy markets, attracting the largest share of foreign direct investment (FDI).
All of the countries are rich in solar energy potential. Rainfall is highest in the mountains of the west, though still much less than in the mountains of central Europe; those western mountains are also rich in hydroelectric resources.
Iran has high fossil fuel energy potential. It has reserves of 137.5 billion barrels of crude oil, the fourth largest in the world, producing around four million barrels a day. Iran has an estimated 988.4 trillion cubic feet of natural gas, around 16 percent of world reserves, and is thus key to current paradigms in global energy security.

Russia's economy ranks as the twelfth largest by nominal GDP and sixth largest by purchasing power parity in 2015. Russia's extensive mineral and energy resources are the largest such reserves in the world, making it the second leading producer of oil and natural gas globally.

Caspian littoral states join efforts to develop infrastructure, tourism and trade in the region. The first Caspian Economic Forum was convened on August 12, 2019, in Turkmenistan and brought together representatives of Kazakhstan, Russia, Azerbaijan, Iran and the host country. It hosted several meetings of their ministers of economy and transport.

The Caspian countries develop robust cooperation in the tech and digital field as part of the Caspian Digital Hub. The project helps expand data transmission capabilities in Kazakhstan as well as data transit capabilities between Asia and Europe. The project has generated interest from investors from all over the world, including the UK.
Oil and gas
The Caspian Sea region is presently a significant, but not major, supplier of crude oil to world markets, based upon estimates by BP Amoco and the U.S. Energy Information Administration of the U.S. Department of Energy. The region produced about 1.4–1.5 million barrels per day plus natural gas liquids in 2001, 1.9% of total world output. More than a dozen countries each produce more than this upper figure. Caspian region production has been higher, but waned during and after the collapse of the Soviet Union. Kazakhstan accounts for 55% and Azerbaijan for about 20% of the littoral states' oil output.

The world's first offshore wells and machine-drilled wells were made in Bibi-Heybat Bay, near Baku, Azerbaijan. In 1873, exploration and development of oil began in some of the largest fields known to exist in the world at that time, on the Absheron Peninsula near the villages of Balakhanli, Sabunchi, Ramana, and Bibi Heybat. Total recoverable reserves were more than 500 million tons. By 1900, Baku had more than 3,000 oil wells, 2,000 of which were producing at industrial levels. By the end of the 19th century, Baku had become known as the "black gold capital", and many skilled workers and specialists flocked to the city.
By the beginning of the 20th century, Baku was the center of the international oil industry. In 1920, when the Bolsheviks captured Azerbaijan, all private property, including oil wells and factories, was confiscated. The republic's oil industry rapidly came under the control of the Soviet Union. By 1941, Azerbaijan was producing a record 23.5 million tons of oil per year – its Baku region output was nearly 72 percent of the Soviet Union's oil.

In 1994, the "Contract of the Century" was signed, heralding extra-regional development of the Baku oil fields. The large Baku–Tbilisi–Ceyhan pipeline, which conveys Azeri oil to the Turkish Mediterranean port of Ceyhan, opened in 2006.
The Vladimir Filanovsky oil field in the Russian section of the body of water was discovered in 2005. It is reportedly the largest found in 25 years. It was announced in October 2016 that Lukoil would start production from it.
Transport
Baku has the main moorings of all large vessels, such as oil tankers, in Azerbaijan, and is the largest port of the Caspian Sea. The port (and its tankers) have access to the oceans via the Caspian Sea, the Volga–Don Canal and the Don to the Sea of Azov; a northern alternative is the Volga–Baltic Waterway (the Baltic is a sea with a connection to the North Sea of the Atlantic, as the White Sea has via the White Sea–Baltic Canal). Baku Sea Trade Port and the Caspian Shipping Company CJSC play a large role in the sea transportation of Azerbaijan. The Caspian Sea Shipping Company CJSC has two fleets plus shipyards. Its transport fleet has 51 vessels: 20 tankers, 13 ferries, 15 universal dry cargo vessels, 2 Ro-Ro vessels, as well as 1 technical vessel and 1 floating workshop. Its specialized fleet has 210 vessels: 20 cranes, 25 towing and supplying vehicles, 26 passenger, two pipe-laying, six fire-fighting, seven engineering-geological, two diving and 88 auxiliary vessels.

The Caspian Sea Shipping Company of Azerbaijan, which acts as a liaison in the Transport Corridor Europe-Caucasus-Asia (TRACECA), simultaneously with the transportation of cargo and passengers in the Trans-Caspian direction, also performs work to fully ensure the processes of oil and gas production at sea. In the 19th century, the sharp increase in oil production in Baku gave a huge impetus to the development of shipping in the Caspian Sea, and as a result there was a need to create fundamentally new floating facilities for the transportation of oil and oil products.
Political issues
Many of the islands along the Azerbaijani coast retain great geopolitical and economic importance because of the demarcation-line oil fields that rely on their national status. Bulla Island, Pirallahı Island, and Nargin, a former Soviet base and the largest island in the Baku bay, hold oil reserves.
The collapse of the Soviet Union allowed the market opening of the region. This led to intense investment and development by international oil companies. In 1998, Dick Cheney commented that "I can't think of a time when we've had a region emerge as suddenly to become as strategically significant as the Caspian."

A key problem for further local development is arriving at precise, agreed demarcation lines among the five littoral states. The current disputes along Azerbaijan's maritime borders with Turkmenistan and Iran could impinge on future development.
Much controversy currently exists over the proposed Trans-Caspian oil and gas pipelines. These projects would allow Western markets easier access to Kazakh oil and, potentially, Uzbek and Turkmen gas as well. Russia officially opposes the project on environmental grounds. However, analysts note that the pipelines would bypass Russia completely, thereby denying the country valuable transit fees, as well as destroying its current monopoly on westward-bound hydrocarbon exports from the region. Recently, both Kazakhstan and Turkmenistan have expressed their support for the Trans-Caspian Pipeline.

Leaked U.S. diplomatic cables revealed that BP covered up a gas leak and blowout incident in September 2008 at an operating gas field in the Azeri-Chirag-Guneshi area of the Azerbaijani sector of the Caspian Sea.
Territorial status
Coastline
Five states are located along about 4,800 km (3,000 mi) of Caspian coastline. Their coastline lengths are:
Kazakhstan - 1,422 km (884 mi)
Turkmenistan - 1,035 km (643 mi)
Azerbaijan - 813 km (505 mi)
Russia - 747 km (464 mi)
Iran - 728 km (452 mi)
Negotiations
In 2000, negotiations as to the demarcation of the sea had been going on for nearly a decade among all the states bordering it. Whether it was by law a sea, a lake, or an agreed hybrid, the decision would set the demarcation rules and was heavily debated. Access to mineral resources (oil and natural gas), access for fishing, and access to international waters (through Russia's Volga river and the canals connecting it to the Black Sea and Baltic Sea) all rest on the negotiations' outcome. Access to the Volga is key for market efficiency and economic diversity of the landlocked states of Azerbaijan, Kazakhstan, and Turkmenistan. This concerns Russia as more traffic seeks to use – and at some points congest – its inland waterways. If the body of water is, by law, a sea, many precedents and international treaties oblige free access to foreign vessels. If it is a lake there are no such obligations.
Resolving and improving some environmental issues properly rests on the status and borders issue.
All five Caspian littoral states maintain naval forces on the sea.

According to a treaty signed between Iran and the Soviet Union, the sea is technically a lake and was divided into two sectors (Iranian and Soviet), but the resources (then mainly fish) were commonly shared. The line between the two sectors was considered an international border in a common lake, like Lake Albert. The Soviet sector was sub-divided into the four littoral republics' administrative sectors.
Russia, Kazakhstan, and Azerbaijan have bilateral agreements with each other based on median lines. Because of their use by the three nations, median lines seem to be the most likely method of delineating territory in future agreements. However, Iran insists on a single, multilateral agreement between the five nations (aiming for a one-fifth share). Azerbaijan is at odds with Iran over some of the sea's oil fields. Occasionally, Iranian patrol boats have fired at vessels sent by Azerbaijan for exploration into the disputed region. There are similar tensions between Azerbaijan and Turkmenistan (the latter claims that the former has pumped more oil than agreed from a field, recognized by both parties as shared).
The Caspian littoral states' meeting in 2007 signed an accord that only allows ships flying littoral-state flags to enter the sea.

Negotiations among the five states ebbed and flowed from about 1990 to 2018. Progress was notable in the fourth Caspian Summit held in Astrakhan in 2014.
Caspian Summit
The Caspian Summit is a head-of-state-level meeting of the five littoral states. The fifth Caspian Summit took place on August 12, 2018, in the Kazakh port city of Aktau, where the five leaders signed the ‘Convention on the Legal Status of the Caspian Sea’.

Representatives of the Caspian littoral states held a meeting in the capital of Kazakhstan on September 28, 2018, as a follow-up to the Aktau Summit. The conference was hosted by the Kazakh Ministry of Investment and Development. The participants agreed to host an investment forum for the Caspian region every two years.
Convention on the legal status of the Caspian Sea
The five littoral states build consensus on legally binding governance of the Caspian Sea through the Special Working Groups of a Convention on the Legal Status of the Caspian Sea. In advance of a Caspian Summit, the 51st Special Working Group took place in Astana in May 2018 and found consensus on multiple agreements: on cooperation in the field of transport; trade and economic cooperation; prevention of incidents on the sea; combating terrorism; fighting organized crime; and border security cooperation.

The convention grants jurisdiction over 24 km (15 mi) of territorial waters to each neighboring country, plus an additional 16 km (10 mi) of exclusive fishing rights on the surface, while the rest is international waters. The seabed, on the other hand, remains undefined, subject to bilateral agreements between countries. Thus, the Caspian Sea is legally neither fully a sea nor a lake.

While the convention addresses caviar production, oil and gas extraction, and military uses, it does not touch on environmental issues.
Crossborder inflow
UNECE recognizes several rivers that cross international borders which flow into the Caspian Sea.
These are:
Transportation
Although the Caspian Sea is endorheic, its main tributary, the Volga, is connected by important shipping canals with the Don River (and thus the Black Sea) and with the Baltic Sea, with branch canals to Northern Dvina and to the White Sea.
Another Caspian tributary, the Kuma River, is connected by an irrigation canal with the Don basin as well.
Scheduled ferry services (including train ferries) across the sea chiefly are between:
Türkmenbaşy (formerly Krasnovodsk) in Turkmenistan, and Baku.
Aktau, Kazakhstan and Baku.
Cities in Iran and Russia (chiefly for cargo).
Canals
As an endorheic basin, the Caspian Sea basin has no natural connection with the ocean. Since the medieval period, traders reached the Caspian via a number of portages that connected the Volga and its tributaries with the Don River (which flows into the Sea of Azov) and various rivers that flow into the Baltic Sea. Primitive canals connecting the Volga Basin with the Baltic were constructed as early as the early 18th century. Since then, a number of canal projects have been completed.
The two modern canal systems that connect the Volga Basin, and hence the Caspian Sea, with the ocean are the Volga–Baltic Waterway and the Volga–Don Canal.
The proposed Pechora–Kama Canal was a project that was widely discussed between the 1930s and 1980s. Shipping was a secondary consideration. Its main goal was to redirect some of the water of the Pechora River (which flows into the Arctic Ocean) via the Kama River into the Volga. The goals were both irrigation and the stabilization of the water level in the Caspian, which was thought to be falling dangerously fast at the time. During 1971, some peaceful nuclear construction experiments were carried out in the region by the U.S.S.R.
In June 2007, in order to boost his oil-rich country's access to markets, Kazakhstan's President Nursultan Nazarbayev proposed a 700 km (435 mi) link between the Caspian Sea and the Black Sea. It is hoped that the "Eurasia Canal" (Manych Ship Canal) would transform landlocked Kazakhstan and other Central Asian countries into maritime states, enabling them to significantly increase trade volume. Although the canal would traverse Russian territory, it would benefit Kazakhstan through its Caspian Sea ports. The most likely route for the canal, the officials at the Committee on Water Resources at Kazakhstan's Agriculture Ministry say, would follow the Kuma–Manych Depression, where currently a chain of rivers and lakes is already connected by an irrigation canal (the Kuma–Manych Canal). Upgrading the Volga–Don Canal would be another option.
See also
References
External links
Kropotkin, Peter Alexeivitch; Bealby, John Thomas (1911). "Caspian Sea" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 5 (11th ed.). Cambridge University Press. pp. 452–455.
Names of the Caspian Sea
Caspian Sea Region
Dating Caspian sea level changes
Solar irradiance

Solar irradiance is the power per unit area (surface power density) received from the Sun in the form of electromagnetic radiation in the wavelength range of the measuring instrument.
Solar irradiance is measured in watts per square metre (W/m2) in SI units.
Solar irradiance is often integrated over a given time period in order to report the radiant energy emitted into the surrounding environment (joule per square metre, J/m2) during that time period. This integrated solar irradiance is called solar irradiation, solar exposure, solar insolation, or insolation.
Irradiance may be measured in space or at the Earth's surface after atmospheric absorption and scattering. Irradiance in space is a function of distance from the Sun, the solar cycle, and cross-cycle changes.
Irradiance on the Earth's surface additionally depends on the tilt of the measuring surface, the height of the Sun above the horizon, and atmospheric conditions.
Solar irradiance affects plant metabolism and animal behavior.

The study and measurement of solar irradiance have several important applications, including the prediction of energy generation from solar power plants, the heating and cooling loads of buildings, climate modeling and weather forecasting, passive daytime radiative cooling applications, and space travel.
Types
There are several measured types of solar irradiance.
Total solar irradiance (TSI) is a measure of the solar power over all wavelengths per unit area incident on the Earth's upper atmosphere. It is measured perpendicular to the incoming sunlight. The solar constant is a conventional measure of mean TSI at a distance of one astronomical unit (AU).
Direct normal irradiance (DNI), or beam radiation, is measured at the surface of the Earth at a given location with a surface element perpendicular to the Sun direction. It excludes diffuse solar radiation (radiation that is scattered or reflected by atmospheric components). Direct irradiance is equal to the extraterrestrial irradiance above the atmosphere minus the atmospheric losses due to absorption and scattering. Losses depend on time of day (length of light's path through the atmosphere depending on the solar elevation angle), cloud cover, moisture content and other contents. The irradiance above the atmosphere also varies with time of year (because the distance to the Sun varies), although this effect is generally less significant compared to the effect of losses on DNI.
Diffuse horizontal irradiance (DHI), or diffuse sky radiation, is the radiation at the Earth's surface from light scattered by the atmosphere. It is measured on a horizontal surface with radiation coming from all points in the sky, excluding circumsolar radiation (radiation coming from the sun disk). There would be almost no DHI in the absence of an atmosphere.
Global horizontal irradiance (GHI) is the total irradiance from the Sun on a horizontal surface on Earth. It is the sum of direct irradiance (after accounting for the solar zenith angle of the Sun z) and diffuse horizontal irradiance:
GHI = DHI + DNI × cos(z)
Global tilted irradiance (GTI) is the total radiation received on a surface with defined tilt and azimuth, fixed or Sun-tracking. GTI can be measured or modeled from GHI, DNI and DHI. It is often used as a reference for photovoltaic power plants, whose modules are mounted on fixed or tracking structures.
Global normal irradiance (GNI) is the total irradiance from the Sun at the surface of Earth at a given location with a surface element perpendicular to the Sun.
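The GHI relation above is a one-line computation; a minimal Python sketch (the component values below are illustrative, not measured data) shows how the direct beam is projected onto the horizontal before being added to the diffuse part:

```python
import math

def global_horizontal_irradiance(dni, dhi, zenith_deg):
    """Combine direct and diffuse components: GHI = DHI + DNI * cos(z).

    dni, dhi are in W/m2; zenith_deg is the solar zenith angle z in degrees.
    """
    z = math.radians(zenith_deg)
    return dhi + dni * math.cos(z)

# Sun 30 degrees from the zenith, with illustrative clear-sky components:
ghi = global_horizontal_irradiance(dni=800.0, dhi=100.0, zenith_deg=30.0)
print(round(ghi, 1))  # W/m2
```

Note that as z approaches 90° the direct term vanishes and GHI reduces to the diffuse component alone, matching the definitions above.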
Units
The SI unit of irradiance is watts per square metre (W/m2 = Wm−2). The unit of insolation often used in the solar power industry is kilowatt hours per square metre (kWh/m2).

The Langley is an alternative unit of insolation. One Langley is one thermochemical calorie per square centimetre, or 41,840 J/m2.
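The unit relationships just given (1 kWh = 3.6 MJ and 1 Langley = 41,840 J/m2) can be sketched as a short conversion; the 6 kWh/m2 figure below is only an illustrative daily insolation value:

```python
# Unit conversions for insolation, from the definitions in the text:
# 1 kWh/m2 = 3.6e6 J/m2, and 1 Langley = 41,840 J/m2.
J_PER_KWH = 3.6e6
J_PER_LANGLEY = 41_840.0

def kwh_per_m2_to_langley(kwh_per_m2):
    """Convert insolation from kWh/m2 to Langleys via J/m2."""
    return kwh_per_m2 * J_PER_KWH / J_PER_LANGLEY

# An illustrative sunny-day insolation of 6 kWh/m2:
print(round(kwh_per_m2_to_langley(6.0), 1))  # Langleys
```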
Irradiation at the top of the atmosphere
The average annual solar radiation arriving at the top of the Earth's atmosphere is about 1361 W/m2. This represents the power per unit area of solar irradiance across the spherical surface surrounding the Sun with a radius equal to the distance to the Earth (1 AU). This means that the approximately circular disc of the Earth, as viewed from the Sun, receives a roughly stable 1361 W/m2 at all times. The area of this circular disc is πr2, in which r is the radius of the Earth. Because the Earth is approximately spherical, it has total area
4πr2, meaning that the solar radiation arriving at the top of the atmosphere, averaged over the entire surface of the Earth, is simply divided by four to get 340 W/m2. In other words, averaged over the year and the day, the Earth's atmosphere receives 340 W/m2 from the Sun. This figure is important in radiative forcing.
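The disc-to-sphere averaging can be checked with one line of arithmetic: the Earth intercepts TSI × πr2 of power but spreads it over a surface of 4πr2, so the radius cancels and the mean is TSI/4:

```python
import math

def mean_toa_irradiance(tsi):
    """Average top-of-atmosphere irradiance over the full sphere.

    Intercepted power is tsi * pi * r**2; surface area is 4 * pi * r**2,
    so r cancels and the whole-sphere average is tsi / 4.
    """
    return tsi * math.pi / (4 * math.pi)

# With TSI of about 1361 W/m2 at 1 AU:
print(round(mean_toa_irradiance(1361.0)))  # ≈ 340 W/m2
```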
Derivation
The distribution of solar radiation at the top of the atmosphere is determined by Earth's sphericity and orbital parameters.
This applies to any unidirectional beam incident to a rotating sphere.
Insolation is essential for numerical weather prediction and understanding seasons and climatic change. Application to ice ages is known as Milankovitch cycles.
Distribution is based on a fundamental identity from spherical trigonometry, the spherical law of cosines:

cos(c) = cos(a) cos(b) + sin(a) sin(b) cos(C)

where a, b and c are arc lengths, in radians, of the sides of a spherical triangle. C is the angle in the vertex opposite the side which has arc length c. Applied to the calculation of solar zenith angle Θ, with latitude φ, declination δ and hour angle h, the substitutions C = h, c = Θ, a = ½π − φ and b = ½π − δ give:

cos(Θ) = sin(φ) sin(δ) + cos(φ) cos(δ) cos(h)
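Assuming the standard substitutions for latitude φ, declination δ and hour angle h, the zenith-angle form of the spherical law of cosines can be evaluated directly. This sketch checks that at local solar noon (h = 0) the zenith angle reduces to |φ − δ|:

```python
import math

def cos_solar_zenith(lat_deg, decl_deg, hour_angle_deg):
    """cos(Theta) = sin(phi)sin(delta) + cos(phi)cos(delta)cos(h)."""
    phi = math.radians(lat_deg)
    delta = math.radians(decl_deg)
    h = math.radians(hour_angle_deg)
    return (math.sin(phi) * math.sin(delta)
            + math.cos(phi) * math.cos(delta) * math.cos(h))

# Local solar noon (h = 0) at 40° N with declination +23.44° (June solstice);
# the zenith angle should equal 40 − 23.44 = 16.56 degrees:
zenith = math.degrees(math.acos(cos_solar_zenith(40.0, 23.44, 0.0)))
print(round(zenith, 2))
```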
This equation can also be derived from a more general formula:
where β is an angle from the horizontal and γ is an azimuth angle.
The separation of Earth from the Sun can be denoted RE and the mean distance can be denoted R0, approximately 1 astronomical unit (AU). The solar constant is denoted S0. The solar flux density (insolation) onto a plane tangent to the sphere of the Earth, but above the bulk of the atmosphere (elevation 100 km or greater), is:

Q = S0 (R0/RE)2 cos(Θ) when cos(Θ) > 0, and Q = 0 otherwise.
The average of Q over a day is the average of Q over one rotation, or the hour angle progressing from h = π to h = −π:

Q̄day = (1/2π) ∫ Q dh, integrated from h = −π to h = π.
Let h0 be the hour angle when Q becomes positive. This could occur at sunrise, when Θ = ½π, or for h0 as a solution of

sin(φ) sin(δ) + cos(φ) cos(δ) cos(h0) = 0

or

cos(h0) = −tan(φ) tan(δ)
If tan(φ) tan(δ) > 1, then the sun does not set and the sun is already risen at h = π, so h0 = π. If tan(φ) tan(δ) < −1, the sun does not rise and Q̄day = 0.
The ratio R02/RE2 is nearly constant over the course of a day, and can be taken outside the integral. Therefore:

Q̄day = (S0/π) (R0/RE)2 (h0 sin(φ) sin(δ) + cos(φ) cos(δ) sin(h0))
Let θ be the conventional polar angle describing a planetary orbit. Let θ = 0 at the vernal equinox. The declination δ as a function of orbital position is approximately

δ = ε sin(θ)
where ε is the obliquity. (Note: The correct formula, valid for any axial tilt, is sin(δ) = sin(ε) sin(θ).) The conventional longitude of perihelion ϖ is defined relative to the vernal equinox, so for the elliptical orbit:

RE = R0 / (1 + e cos(θ − ϖ))

or

R0/RE = 1 + e cos(θ − ϖ)
With knowledge of ϖ, ε and e from astrodynamical calculations and S0 from a consensus of observations or theory, Q̄day can be calculated for any latitude φ and θ. Because of the elliptical orbit, and as a consequence of Kepler's second law, θ does not progress uniformly with time. Nevertheless, θ = 0° is exactly the time of the vernal equinox, θ = 90° is exactly the time of the summer solstice, θ = 180° is exactly the time of the autumnal equinox and θ = 270° is exactly the time of the winter solstice.
A simplified equation for irradiance on a given day is:

Q ≈ S0 (1 + 0.034 cos(2π n / 365.25))

where n is the number of the day of the year.
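The daily-average insolation derived above can be collected into one short function. This is a sketch of the derivation's result, using cos(h0) = −tan(φ) tan(δ) for the sunrise hour angle and handling the polar-day and polar-night cases; the distance ratio (R0/RE)2 is passed as a parameter, defaulting to 1:

```python
import math

S0 = 1361.0  # W/m2, solar constant

def mean_daily_insolation(lat_deg, decl_deg, dist_ratio_sq=1.0):
    """Daily-mean top-of-atmosphere insolation Q_day.

    Q_day = (S0/pi) * (R0/RE)**2
            * (h0*sin(phi)*sin(delta) + cos(phi)*cos(delta)*sin(h0)),
    where h0 is the sunrise hour angle, cos(h0) = -tan(phi)*tan(delta),
    clipped to h0 = pi for polar day and Q = 0 for polar night.
    """
    phi = math.radians(lat_deg)
    delta = math.radians(decl_deg)
    x = -math.tan(phi) * math.tan(delta)
    if x <= -1.0:        # sun never sets (polar day)
        h0 = math.pi
    elif x >= 1.0:       # sun never rises (polar night)
        return 0.0
    else:
        h0 = math.acos(x)
    return (S0 / math.pi) * dist_ratio_sq * (
        h0 * math.sin(phi) * math.sin(delta)
        + math.cos(phi) * math.cos(delta) * math.sin(h0))

# Equator at equinox (delta = 0): h0 = pi/2, so Q_day = S0/pi:
print(round(mean_daily_insolation(0.0, 0.0)))  # ≈ 433 W/m2
```

At the North Pole at the June solstice the polar-day branch gives Q̄day = S0 sin(ε), the well-known result that the summer pole receives a higher daily average than the equator.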
Variation
Total solar irradiance (TSI) changes slowly on decadal and longer timescales. The variation during solar cycle 21 was about 0.1% (peak-to-peak). In contrast to older reconstructions, most recent TSI reconstructions point to an increase of only about 0.05% to 0.1% between the 17th century Maunder Minimum and the present.
Ultraviolet irradiance (EUV) varies by approximately 1.5 percent from solar maxima to minima, for 200 to 300 nm wavelengths. However, a proxy study estimated that UV has increased by 3.0% since the Maunder Minimum.
Some variations in insolation are not due to solar changes but rather due to the Earth moving between its perihelion and aphelion, or changes in the latitudinal distribution of radiation. These orbital changes or Milankovitch cycles have caused radiance variations of as much as 25% (locally; global average changes are much smaller) over long periods. The most recent significant event was an axial tilt of 24° during boreal summer near the Holocene climatic optimum.
Obtaining a time series for Q̄day for a particular time of year, and particular latitude, is a useful application in the theory of Milankovitch cycles. For example, at the summer solstice, the declination δ is equal to the obliquity ε. The distance from the Sun is

R0/RE = 1 + e cos(θ − ϖ) = 1 + e sin(ϖ)
For this summer solstice calculation, the role of the elliptical orbit is entirely contained within the important product e sin(ϖ), the precession index, whose variation dominates the variations in insolation at 65° N when eccentricity is large. For the next 100,000 years, with variations in eccentricity being relatively small, variations in obliquity dominate.
Measurement
The space-based TSI record comprises measurements from more than ten radiometers and spans three solar cycles.
All modern TSI satellite instruments employ active cavity electrical substitution radiometry. This technique measures the electrical heating needed to maintain an absorptive blackened cavity in thermal equilibrium with the incident sunlight which passes through a precision aperture of calibrated area. The aperture is modulated via a shutter. Accuracy uncertainties of < 0.01% are required to detect long term solar irradiance variations, because expected changes are in the range 0.05–0.15 W/m2 per century.
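The stated stability requirement can be checked with simple arithmetic: 0.01% of a ~1361 W/m2 signal is about 0.14 W/m2, the same order as the expected 0.05–0.15 W/m2 change per century:

```python
# Why < 0.01% uncertainty is needed: compare the measurement uncertainty
# to the expected secular change in TSI (0.05-0.15 W/m2 per century).
TSI = 1361.0              # W/m2, approximate total solar irradiance
uncertainty = 1e-4 * TSI  # 0.01% of the signal

print(round(uncertainty, 3))  # W/m2, comparable to the expected drift
```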
Intertemporal calibration
In orbit, radiometric calibrations drift for reasons including solar degradation of the cavity, electronic degradation of the heater, surface degradation of the precision aperture and varying surface emissions and temperatures that alter thermal backgrounds. These calibrations require compensation to preserve consistent measurements.

For various reasons, the sources do not always agree. The Solar Radiation and Climate Experiment/Total Irradiance Measurement (SORCE/TIM) TSI values are lower than prior measurements by the Earth Radiometer Budget Experiment (ERBE) on the Earth Radiation Budget Satellite (ERBS), VIRGO on the Solar Heliospheric Observatory (SoHO) and the ACRIM instruments on the Solar Maximum Mission (SMM), Upper Atmosphere Research Satellite (UARS) and ACRIMSAT. Pre-launch ground calibrations relied on component rather than system-level measurements, since irradiance standards at the time lacked sufficient absolute accuracies.

Measurement stability involves exposing different radiometer cavities to different accumulations of solar radiation to quantify exposure-dependent degradation effects. These effects are then compensated for in the final data. Overlap between observations permits corrections for both absolute offsets and validation of instrumental drifts.

Uncertainties of individual observations exceed irradiance variability (~0.1%). Thus, instrument stability and measurement continuity are relied upon to compute real variations.
Long-term radiometer drifts can potentially be mistaken for irradiance variations which can be misinterpreted as affecting climate. Examples include the issue of the irradiance increase between cycle minima in 1986 and 1996, evident only in the ACRIM composite (and not the model) and the low irradiance levels in the PMOD composite during the 2008 minimum.
Despite the fact that ACRIM I, ACRIM II, ACRIM III, VIRGO and TIM all track degradation with redundant cavities, notable and unexplained differences remain in irradiance and the modeled influences of sunspots and faculae.
Persistent inconsistencies
Disagreement among overlapping observations indicates unresolved drifts that suggest the TSI record is not sufficiently stable to discern solar changes on decadal time scales. Only the ACRIM composite shows irradiance increasing by ~1 W/m2 between 1986 and 1996; this change is also absent in the model.

Recommendations to resolve the instrument discrepancies include validating optical measurement accuracy by comparing ground-based instruments to laboratory references, such as those at the National Institute of Standards and Technology (NIST); NIST validation of aperture area calibrations using spares from each instrument; and applying diffraction corrections from the view-limiting aperture.

For ACRIM, NIST determined that diffraction from the view-limiting aperture contributes a 0.13% signal not accounted for in the three ACRIM instruments. This correction lowers the reported ACRIM values, bringing ACRIM closer to TIM. In ACRIM and all other instruments but TIM, the aperture is deep inside the instrument, with a larger view-limiting aperture at the front. Depending on edge imperfections, this can directly scatter light into the cavity. This design admits into the front part of the instrument two to three times the amount of light intended to be measured; if not completely absorbed or scattered, this additional light produces erroneously high signals. In contrast, TIM's design places the precision aperture at the front so that only desired light enters.

Variations from other sources likely include an annual systematic signal in the ACRIM III data that is nearly in phase with the Sun-Earth distance, and 90-day spikes in the VIRGO data coincident with SoHO spacecraft maneuvers that were most apparent during the 2008 solar minimum.
TSI Radiometer Facility
TIM's high absolute accuracy creates new opportunities for measuring climate variables. The TSI Radiometer Facility (TRF) is a cryogenic radiometer that operates in a vacuum with controlled light sources. The system, completed in 2008, was designed and built by L-1 Standards and Technology. It was calibrated for optical power against the NIST Primary Optical Watt Radiometer, a cryogenic radiometer that maintains the NIST radiant-power scale to an uncertainty of 0.02% (1σ). As of 2011, TRF was the only facility that approached the desired <0.01% uncertainty for pre-launch validation of solar radiometers measuring irradiance (rather than merely optical power) at solar power levels and under vacuum conditions.

TRF encloses both the reference radiometer and the instrument under test in a common vacuum system that contains a stationary, spatially uniform illuminating beam. A precision aperture with an area calibrated to 0.0031% (1σ) determines the measured portion of the beam. The test instrument's precision aperture is positioned in the same location, without optically altering the beam, for direct comparison to the reference. Variable beam power provides linearity diagnostics, and variable beam diameter diagnoses scattering from different instrument components.

The Glory/TIM and PICARD/PREMOS flight instrument absolute scales are now traceable to the TRF in both optical power and irradiance. The resulting high accuracy reduces the consequences of any future gap in the solar irradiance record.
2011 reassessment
The most probable value of TSI representative of solar minimum is 1360.9±0.5 W/m2, lower than the earlier accepted value of 1365.4±1.3 W/m2 established in the 1990s. The new value comes from SORCE/TIM and radiometric laboratory tests. Scattered light is a primary cause of the higher irradiance values measured by earlier satellites, in which the precision aperture is located behind a larger view-limiting aperture. TIM uses a view-limiting aperture that is smaller than the precision aperture, which precludes this spurious signal. The new estimate reflects better measurement rather than a change in solar output.

A regression-model-based split of the relative proportions of sunspot and facular influences from SORCE/TIM data accounts for 92% of observed variance and tracks the observed trends to within TIM's stability band. This agreement provides further evidence that TSI variations are primarily due to solar surface magnetic activity.

Instrument inaccuracies add significant uncertainty to determinations of Earth's energy balance. The energy imbalance has been variously measured (during the deep solar minimum of 2005–2010) as +0.58±0.15 W/m2, +0.60±0.17 W/m2 and +0.85 W/m2. Estimates from space-based measurements range from +3 to +7 W/m2. SORCE/TIM's lower TSI value reduces this discrepancy by 1 W/m2. The difference between the new lower TIM value and earlier TSI measurements corresponds to a climate forcing of −0.8 W/m2, which is comparable to the energy imbalance.
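As a sanity check, the −0.8 W/m2 figure can be roughly reproduced from the two TSI values using the standard factor-of-4 geometric averaging and an assumed planetary albedo of about 0.3 (a sketch, not part of the cited analysis):

```python
# Difference between the 1990s-era and SORCE/TIM solar-minimum TSI values
delta_tsi = 1365.4 - 1360.9           # W/m^2 at top of atmosphere

albedo = 0.3                          # assumed planetary Bond albedo (illustrative)

# A change in TSI is spread over the whole sphere (factor 1/4) and
# reduced by the fraction reflected back to space (1 - albedo).
delta_forcing = delta_tsi * (1 - albedo) / 4   # ~0.79 W/m^2, i.e. the ~0.8 W/m^2 above
```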
2014 reassessment
In 2014 a new ACRIM composite was developed using the updated ACRIM3 record. It added corrections for scattering and diffraction revealed during recent testing at TRF, together with two algorithm updates. The algorithm updates more accurately account for instrument thermal behavior and for parsing of shutter-cycle data; they corrected a component of the quasi-annual spurious signal and increased the signal-to-noise ratio, respectively. The net effect of these corrections decreased the average ACRIM3 TSI value without affecting the trending in the ACRIM composite TSI.

Differences between the ACRIM and PMOD TSI composites are evident, but the most significant is in the solar minimum-to-minimum trends during solar cycles 21–23. ACRIM found an increase of +0.037%/decade from 1980 to 2000 and a decrease thereafter, whereas PMOD presents a steady decrease since 1978. Significant differences are also seen during the peaks of solar cycles 21 and 22. These arise because ACRIM uses the original TSI results published by the satellite experiment teams, while PMOD significantly modifies some results to conform them to specific TSI proxy models. An implication of increasing TSI during the global warming of the last two decades of the 20th century is that solar forcing may be a marginally larger factor in climate change than represented in the CMIP5 general circulation climate models.
Irradiance on Earth's surface
Average annual solar radiation arriving at the top of the Earth's atmosphere is roughly 1361 W/m2. The Sun's rays are attenuated as they pass through the atmosphere, leaving a maximum normal surface irradiance of approximately 1000 W/m2 at sea level on a clear day. With 1361 W/m2 arriving above the atmosphere and the Sun at the zenith in a cloudless sky, direct irradiance at ground level is about 1050 W/m2, and global radiation on a horizontal surface is about 1120 W/m2.
The latter figure includes radiation scattered or reemitted by the atmosphere and surroundings. The actual figure varies with the Sun's angle and atmospheric circumstances. Ignoring clouds, the daily average insolation for the Earth is approximately 6 kWh/m2 = 21.6 MJ/m2.
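These averages follow from simple geometry: the Earth intercepts sunlight over its cross-section (πr²) but has a surface area of 4πr², so the planet-wide average at the top of the atmosphere is TSI/4. A quick illustrative check, including the kWh-to-MJ conversion of the daily figure:

```python
TSI = 1361.0                      # W/m^2, solar irradiance at top of atmosphere

# The sphere intercepts pi*r^2 of beam but has 4*pi*r^2 of surface area,
# so the planet-wide average at the top of the atmosphere is TSI / 4.
toa_average = TSI / 4             # ~340 W/m^2

# Converting the quoted daily surface insolation of 6 kWh/m^2 to MJ/m^2:
daily_mj = 6.0 * 3.6              # 1 kWh = 3.6 MJ  ->  21.6 MJ/m^2
```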
The output of, for example, a photovoltaic panel partly depends on the angle of the sun relative to the panel. One sun (1 kW/m2) is a unit of power flux, not a standard value for actual insolation. Sometimes this unit is referred to as a Sol, not to be confused with a sol, meaning one solar day.
Absorption and reflection
Part of the radiation reaching an object is absorbed and the remainder reflected. Usually, the absorbed radiation is converted to thermal energy, increasing the object's temperature. Manmade or natural systems, however, can convert part of the absorbed radiation into another form such as electricity or chemical bonds, as in the case of photovoltaic cells or plants. The proportion of reflected radiation is the object's reflectivity or albedo.
Projection effect
Insolation onto a surface is largest when the surface directly faces (is normal to) the sun. As the angle between the surface and the Sun moves from normal, the insolation is reduced in proportion to the angle's cosine; see effect of Sun angle on climate.
In the figure, the angle shown is between the ground and the sunbeam rather than between the vertical direction and the sunbeam; hence the sine rather than the cosine is appropriate. A sunbeam one mile wide arrives from directly overhead, and another at a 30° angle to the horizontal. The sine of a 30° angle is 1/2, whereas the sine of a 90° angle is 1. Therefore, the angled sunbeam spreads the light over twice the area. Consequently, half as much light falls on each square mile.
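The beam-spreading argument above amounts to scaling direct irradiance by the sine of the solar elevation angle (equivalently, the cosine of the zenith angle). A minimal sketch:

```python
import math

def projected_irradiance(direct_normal, elevation_deg):
    """Irradiance on a horizontal surface from a beam with the given
    direct-normal irradiance arriving at the given elevation angle."""
    return direct_normal * math.sin(math.radians(elevation_deg))

overhead = projected_irradiance(1000.0, 90.0)   # full 1000 W/m^2
slanted = projected_irradiance(1000.0, 30.0)    # half: sin(30 deg) = 1/2
```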
This projection effect is the main reason why Earth's polar regions are much colder than equatorial regions. On an annual average, the poles receive less insolation than does the equator, because the poles are always angled more away from the Sun than the tropics, and moreover receive no insolation at all for the six months of their respective winters.
Absorption effect
At a lower angle, the light must also travel through more atmosphere. This attenuates it (by absorption and scattering), further reducing insolation at the surface.
Attenuation is governed by the Beer-Lambert law: the transmittance, or fraction of insolation reaching the surface, decreases exponentially with the optical depth or absorbance (the two notions differing only by a constant factor of ln(10) ≈ 2.303) of the path of insolation through the atmosphere. For any given short length of the path, the optical depth is proportional to the number of absorbers and scatterers along that length, typically increasing with decreasing altitude. The optical depth of the whole path is then the integral (sum) of those optical depths along the path.
When the density of absorbers is layered, that is, depends much more on vertical than horizontal position in the atmosphere, to a good approximation the optical depth is inversely proportional to the projection effect, that is, to the cosine of the zenith angle. Since transmittance decreases exponentially with increasing optical depth, as the sun approaches the horizon there comes a point when absorption dominates projection for the rest of the day. With a relatively high level of absorbers this can be a considerable portion of the late afternoon, and likewise of the early morning. Conversely, in the (hypothetical) total absence of absorption, the optical depth remains zero at all altitudes of the sun, that is, transmittance remains 1, and so only the projection effect applies.
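Combining the two effects, direct surface insolation for a layered atmosphere can be sketched as S = S0 · cos(z) · exp(−τ0 / cos(z)), where τ0 is the vertical optical depth and z the solar zenith angle. This is an illustrative plane-parallel model, not a full radiative-transfer treatment:

```python
import math

def surface_insolation(s0, tau_vertical, zenith_deg):
    """Direct insolation on a horizontal surface under the plane-parallel
    (1/cos z) air-mass approximation described above."""
    if zenith_deg >= 90.0:
        return 0.0                                    # sun at or below the horizon
    cos_z = math.cos(math.radians(zenith_deg))
    transmittance = math.exp(-tau_vertical / cos_z)   # Beer-Lambert law
    return s0 * transmittance * cos_z                 # projection effect

# With no absorbers (tau = 0), only the projection effect remains:
no_atmosphere = surface_insolation(1361.0, 0.0, 60.0)   # = 1361 * cos 60 deg

# With absorption, a low sun is dimmed disproportionately:
high_sun = surface_insolation(1361.0, 0.3, 0.0)
low_sun = surface_insolation(1361.0, 0.3, 80.0)
```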
Solar potential maps
Assessment and mapping of solar potential at the global, regional and country levels have been the subject of significant academic and commercial interest. One of the earliest attempts to carry out comprehensive mapping of solar potential for individual countries was the Solar & Wind Resource Assessment (SWERA) project, funded by the United Nations Environment Program and carried out by the US National Renewable Energy Laboratory. Other examples include global mapping by the National Aeronautics and Space Administration and other similar institutes, many of which are available on the Global Atlas for Renewable Energy provided by the International Renewable Energy Agency. A number of commercial firms now exist to provide solar resource data to solar power developers, including 3E, Clean Power Research, SoDa Solar Radiation Data, Solargis, Vaisala (previously 3Tier), and Vortex, and these firms have often provided solar potential maps for free. In January 2017 the Global Solar Atlas was launched by the World Bank, using data provided by Solargis, to provide a single source for high-quality solar data, maps, and GIS layers covering all countries.
Maps of GHI potential by region and country (Note: colors are not consistent across maps)
Solar radiation maps are built using databases derived from satellite imagery, for example visible images from the Meteosat Prime satellite. A satellite-to-irradiance model is applied to the images to estimate solar radiation; one well-validated example is the SUNY model. Solar irradiance maps produced this way are generally accurate, especially for global horizontal irradiance.
Applications
Solar power
Solar irradiation figures are used to plan the deployment of solar power systems.
In many countries, the figures can be obtained from an insolation map or from insolation tables that reflect data over the prior 30–50 years.
Different solar power technologies are able to use different components of the total irradiation. While solar photovoltaic panels can convert both direct and diffuse irradiation to electricity, concentrated solar power operates efficiently only with direct irradiation, making those systems suitable only in locations with relatively low cloud cover.
Because solar collector panels are almost always mounted at an angle towards the Sun, insolation figures must be adjusted to find the amount of sunlight falling on the panel. This prevents estimates that are inaccurately low for winter and inaccurately high for summer.
This also means that the amount of sunlight falling on a solar panel at high latitude is not as low compared to one at the equator as would appear from just considering insolation on a horizontal surface.
Annual yields from horizontally mounted panels range from 800–950 kWh per installed kWp in Norway to up to 2,900 kWh per kWp in Australia. A properly tilted panel at 50° latitude receives 1,860 kWh/m2/y, compared to 2,370 kWh/m2/y at the equator. In fact, under clear skies a solar panel placed horizontally at the north or south pole at midsummer receives more sunlight over 24 hours (cosine of the angle of incidence equal to sin(23.5°), or about 0.40) than a horizontal panel at the equator at the equinox (average cosine equal to 1/π, or about 0.32).
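The pole-versus-equator comparison can be checked numerically: at midsummer the pole sees the Sun circling at a constant elevation of 23.5°, while at the equinox equator the cosine of the incidence angle, averaged over the full 24 hours, works out to 1/π. A sketch under those idealized clear-sky assumptions:

```python
import math

# Pole at midsummer: the Sun circles the sky at constant elevation 23.5 deg,
# so the cosine of the incidence angle on a horizontal panel is constant.
pole_avg_cos = math.sin(math.radians(23.5))          # ~0.399

# Equator at equinox: cosine of incidence is cos(hour angle) during the
# 12 daylight hours and 0 at night; average it over the full day.
n = 100_000
equator_avg_cos = sum(
    max(math.cos(2 * math.pi * i / n), 0.0) for i in range(n)
) / n                                                # -> 1/pi, ~0.318
```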
Photovoltaic panels are rated under standard conditions to determine the Wp (peak watts) rating, which can then be used with insolation, adjusted by factors such as tilt, tracking and shading, to determine the expected output.
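That calculation can be sketched as follows; the performance ratio bundling tilt, tracking, shading and other losses is an assumed illustrative value, not a standard constant:

```python
def expected_annual_output_kwh(rated_kwp, insolation_kwh_m2_day, performance_ratio=0.8):
    """Rough annual energy estimate for a PV system.

    Insolation in kWh/m^2/day on the panel plane is numerically equal to
    'peak sun hours' per day at the 1 kW/m^2 rating condition, so
    energy = rated power x peak sun hours x days x loss factor.
    """
    return rated_kwp * insolation_kwh_m2_day * 365 * performance_ratio

# A hypothetical 5 kWp array at a site with 5 peak sun hours per day:
estimate = expected_annual_output_kwh(5.0, 5.0)   # ~7300 kWh/year
```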
Buildings
In construction, insolation is an important consideration when designing a building for a particular site.

The projection effect can be used to design buildings that are cool in summer and warm in winter, by providing vertical windows on the equator-facing side of the building (the south face in the northern hemisphere, or the north face in the southern hemisphere): this maximizes insolation in the winter months when the Sun is low in the sky and minimizes it in the summer when the Sun is high. (The Sun's north–south path through the sky spans 47° over the year.)
Civil engineering
In civil engineering and hydrology, numerical models of snowmelt runoff use observations of insolation.
This permits estimation of the rate at which water is released from a melting snowpack.
Field measurement is accomplished using a pyranometer.
Climate research
Irradiance plays a part in climate modeling and weather forecasting. A non-zero average global net radiation at the top of the atmosphere is indicative of Earth's thermal disequilibrium as imposed by climate forcing.
The impact of the lower 2014 TSI value on climate models is unknown. A few tenths of a percent change in the absolute TSI level is typically considered to be of minimal consequence for climate simulations. The new measurements require climate model parameter adjustments.
Experiments with GISS Model 3 investigated the sensitivity of model performance to the TSI absolute value during the present and pre-industrial epochs, and describe, for example, how the irradiance reduction is partitioned between the atmosphere and surface and the effects on outgoing radiation.

Assessing the impact of long-term irradiance changes on climate requires greater instrument stability combined with reliable global surface temperature observations to quantify climate response processes to radiative forcing on decadal time scales. The observed 0.1% irradiance increase imparts 0.22 W/m2 of climate forcing, which suggests a transient climate response of 0.6 °C per W/m2. This response is larger by a factor of 2 or more than in the models assessed by the IPCC in 2008, a discrepancy possibly reflecting those models' heat uptake by the ocean.
Global cooling
Measuring a surface's capacity to reflect solar irradiance is essential to passive daytime radiative cooling, which has been proposed as a method of reversing local and global temperature increases associated with global warming. To measure the cooling power of a passive radiative cooling surface, the absorbed power of both atmospheric and solar radiation must be quantified. On a clear day, solar irradiance can reach 1000 W/m2, with a diffuse component between 50 and 100 W/m2. On average, the cooling power of a passive daytime radiative cooling surface has been estimated at ~100–150 W/m2.
Space
Insolation is the primary variable affecting equilibrium temperature in spacecraft design and planetology.
Solar activity and irradiance measurement is a concern for space travel. For example, the American space agency, NASA, launched its Solar Radiation and Climate Experiment (SORCE) satellite with Solar Irradiance Monitors.
See also
References
Bibliography
Willson, Richard C.; H.S. Hudson (1991). "The Sun's luminosity over a complete solar cycle". Nature. 351 (6321): 42–4. Bibcode:1991Natur.351...42W. doi:10.1038/351042a0. S2CID 4273483.
"The Sun and Climate". U.S. Geological Survey Fact Sheet 0095-00. Retrieved 2005-02-21.
Foukal, Peter; et al. (1977). "The effects of sunspots and faculae on the solar constant". Astrophysical Journal. 215: 952. Bibcode:1977ApJ...215..952F. doi:10.1086/155431.
Stetson, H.T. (1937). Sunspots and Their Effects. New York: McGraw Hill.
Yaskell, Steven Haywood (31 December 2012). Grand Phases On The Sun: The case for a mechanism responsible for extended solar minima and maxima. Trafford Publishing. ISBN 978-1-4669-6300-9.
External links
Global Solar Atlas - browse or download maps and GIS data layers (global or per country) of the long-term averages of solar irradiation data (published by the World Bank, provided by Solargis)
Solcast - solar irradiance data updated every 10–15 minutes. Recent, live, historical and forecast, free for public research use
Recent Total Solar Irradiance data updated every Monday
San Francisco Solar Map
European Commission- Interactive Maps
Yesterday's Australian Solar Radiation Map
Solar Radiation using Google Maps
SMARTS, software to compute solar insolation of each date/location of earth Solar Resource Data and Tools
NASA Surface meteorology and Solar Energy
insol: R package for insolation on complex terrain
Online insolation calculator |
polar amplification | Polar amplification is the phenomenon that any change in the net radiation balance (for example greenhouse intensification) tends to produce a larger change in temperature near the poles than in the planetary average. It is commonly quantified as the ratio of polar warming to tropical warming. On a planet with an atmosphere that can restrict emission of longwave radiation to space (a greenhouse effect), surface temperatures will be warmer than a simple planetary equilibrium temperature calculation would predict. Where the atmosphere or an extensive ocean is able to transport heat polewards, the poles will be warmer and equatorial regions cooler than their local net radiation balances would predict. The poles experience the greatest cooling when the global-mean temperature is lower relative to a reference climate, and the greatest warming when it is higher.

In the extreme, the planet Venus is thought to have experienced a very large increase in greenhouse effect over its lifetime, so much so that its poles have warmed sufficiently to render its surface temperature effectively isothermal (no difference between poles and equator). On Earth, water vapor and trace gases provide a lesser greenhouse effect, and the atmosphere and extensive oceans provide efficient poleward heat transport. Both palaeoclimate changes and recent global warming have exhibited strong polar amplification, as described below.
Arctic amplification is polar amplification of the Earth's North Pole only; Antarctic amplification is that of the South Pole.
History
An observation-based study related to Arctic amplification was published in 1969 by Mikhail Budyko, and the study conclusion has been summarized as "Sea ice loss affects Arctic temperatures through the surface albedo feedback." The same year, a similar model was published by William D. Sellers. Both studies attracted significant attention since they hinted at the possibility for a runaway positive feedback within the global climate system. In 1975, Manabe and Wetherald published the first somewhat plausible general circulation model that looked at the effects of an increase of greenhouse gas. Although confined to less than one-third of the globe, with a "swamp" ocean and only land surface at high latitudes, it showed an Arctic warming faster than the tropics (as have all subsequent models).
Amplification
Amplifying mechanisms
Feedbacks associated with sea ice and snow cover are widely cited as one of the principal causes of terrestrial polar amplification. These feedbacks are particularly noted in local polar amplification, although recent work has shown that the lapse rate feedback is likely as important as the ice-albedo feedback for Arctic amplification. Supporting this idea, large-scale amplification is also observed in model worlds without ice or snow. There it appears to arise both from a (possibly transient) intensification of poleward heat transport and, more directly, from changes in the local net radiation balance. Local radiation balance is crucial because an overall decrease in outgoing longwave radiation produces a larger relative increase in net radiation near the poles than near the equator. Thus, between the lapse rate feedback and changes in the local radiation balance, much of polar amplification can be attributed to changes in outgoing longwave radiation. This is especially true for the Arctic, whereas the elevated terrain in Antarctica limits the influence of the lapse rate feedback.

Some examples of climate system feedbacks thought to contribute to recent polar amplification include the reduction of snow cover and sea ice, changes in atmospheric and ocean circulation, the presence of anthropogenic soot in the Arctic environment, and increases in cloud cover and water vapor. CO2 forcing has also been attributed to polar amplification. Most studies connect sea ice changes to polar amplification; both ice extent and thickness matter, and climate models with smaller baseline sea ice extent and thinner coverage exhibit stronger polar amplification. Some models of the modern climate exhibit Arctic amplification without any changes in snow and ice cover.

The individual processes contributing to polar warming are critical to understanding climate sensitivity.
Polar warming also affects many ecosystems, including marine and terrestrial ecosystems, climate systems, and human populations. Polar amplification is largely driven by local polar processes with hardly any remote forcing, whereas polar warming is regulated by tropical and midlatitude forcing. These impacts of polar amplification have led to continuous research in the face of global warming.
Ocean circulation
It has been estimated that 70% of the global wind energy transferred to the ocean is transferred within the Antarctic Circumpolar Current (ACC). Eventually, wind-stress-driven upwelling transports cold Antarctic waters through the Atlantic surface current, warming them as they cross the equator, and into the Arctic environment. This is especially noticeable at high latitudes. Warming in the Arctic thus depends on the efficiency of global ocean transport and plays a role in the polar see-saw effect.

Decreased oxygen and low pH during La Niña are processes that correlate with decreased primary production and a more pronounced poleward flow of ocean currents. It has been proposed that the mechanism of increased Arctic surface air temperature anomalies during La Niña periods of ENSO may be attributed to the Tropically Excited Arctic Warming Mechanism (TEAM), in which Rossby waves propagate more poleward, leading to wave dynamics and an increase in downward infrared radiation.
Amplification factor
Polar amplification is quantified in terms of a polar amplification factor (PAF), generally defined as the ratio of some change in a polar temperature to a corresponding change in a broader average temperature:

    PAF = ΔT_p / ΔT̄

where ΔT_p is a change in polar temperature and ΔT̄ is, for example, a corresponding change in a global mean temperature.
Common implementations define the temperature changes directly as the anomalies in surface air temperature relative to a recent reference interval (typically 30 years). Others have used the ratio of the variances of surface air temperature over an extended interval.
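As a minimal illustration of the anomaly-ratio definition, using the Arctic figures quoted later in this article (3.1 °C of Arctic warming over 1971–2019 against 1 °C globally):

```python
def polar_amplification_factor(delta_t_polar, delta_t_global):
    """Ratio of a polar temperature change to the corresponding
    broader-average (e.g. global-mean) temperature change."""
    return delta_t_polar / delta_t_global

# Arctic warming of 3.1 degC (1971-2019) against 1 degC globally:
arctic_paf = polar_amplification_factor(3.1, 1.0)   # -> 3.1
```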
Amplification phase
It is observed that Arctic and Antarctic warming commonly proceed out of phase because of orbital forcing, resulting in the so-called polar see-saw effect.
Paleoclimate polar amplification
The glacial/interglacial cycles of the Pleistocene provide extensive palaeoclimate evidence of polar amplification, from both the Arctic and the Antarctic. In particular, the temperature rise since the Last Glacial Maximum 20,000 years ago provides a clear picture. Proxy temperature records from the Arctic (Greenland) and from the Antarctic indicate polar amplification factors on the order of 2.0.
Recent Arctic amplification
Suggested mechanisms leading to the observed Arctic amplification include Arctic sea ice decline (open water reflects less sunlight than sea ice), atmospheric heat transport from the equator to the Arctic, and the lapse rate feedback.

The Arctic was historically described as warming twice as fast as the global average, but this estimate was based on older observations that missed the more recent acceleration. By 2021, enough data were available to show that the Arctic had warmed three times as fast as the globe: 3.1 °C between 1971 and 2019, compared with global warming of 1 °C over the same period. Moreover, that estimate defines the Arctic as everything above the 60th parallel north, a full third of the Northern Hemisphere: in 2021–2022, it was found that since 1979, warming within the Arctic Circle itself (above the 66th parallel) has been nearly four times faster than the global average. Within the Arctic Circle, even greater amplification occurs in the Barents Sea area, with hotspots around the West Spitsbergen Current: weather stations on its path record decadal warming up to seven times faster than the global average. This has fuelled concerns that, unlike the rest of the Arctic sea ice, ice cover in the Barents Sea may permanently disappear even at around 1.5 degrees of global warming.

The acceleration of Arctic amplification has not been linear: a 2022 analysis found that it occurred in two sharp steps, the first around 1986 and the second after 2000. The first acceleration is attributed to the increase in anthropogenic radiative forcing in the region, which is in turn likely connected to the reduction of stratospheric sulfur aerosol pollution in Europe in the 1980s to combat acid rain. Since sulphate aerosols have a cooling effect, their absence is likely to have increased Arctic temperatures by up to 0.5 degrees Celsius.
The second acceleration has no known cause, which is why it did not show up in any climate models. It is likely to be an example of multi-decadal natural variability, like the suggested link between Arctic temperatures and Atlantic Multi-decadal Oscillation (AMO), in which case it can be expected to reverse in the future. However, even the first increase in Arctic amplification was only accurately simulated by a fraction of the current CMIP6 models.
Possible impacts on mid-latitude weather
See also
Arctic dipole anomaly
Arctic oscillation
Climate of the Arctic
Polar vortex
Sudden stratospheric warming
References
External links
Turton, Steve (3 June 2021). "Why is the Arctic warming faster than other parts of the world? Scientists explain". WEForum.org. World Economic Forum. Archived from the original on 3 June 2021. |
holarctic realm | The Holarctic realm is a biogeographic realm that comprises the majority of habitats found throughout the continents in the Northern Hemisphere. It corresponds to the floristic Boreal Kingdom. It includes both the Nearctic zoogeographical region (which covers most of North America) and Alfred Wallace's Palearctic zoogeographical region (which covers North Africa and all of Eurasia except for Southeast Asia, the Indian subcontinent, and the southern Arabian Peninsula).
These regions are further subdivided into a variety of ecoregions. Many ecosystems and the animal and plant communities that depend on them extend across a number of continents and cover large portions of the Holarctic realm. This continuity is the result of those regions’ shared glacial history.
Major ecosystems
Within the Holarctic realm, there are a variety of ecosystems. The type of ecosystem found in a given area depends on its latitude and the local geography. In the far north, a band of Arctic tundra encircles the shore of the Arctic Ocean. The ground beneath this land is permafrost (frozen year-round). In these difficult growing conditions, few plants can survive. South of the tundra, the boreal forest stretches across North America and Eurasia. This land is characterized by coniferous trees. Further south, the ecosystems become more diverse. Some areas are temperate grassland, while others are temperate forests dominated by deciduous trees. Many of the southernmost parts of the Holarctic are deserts, which are dominated by plants and animals adapted to the dry conditions.
Animal species with a Holarctic distribution
A variety of animal species are distributed across continents, throughout much of the Holarctic realm. These include the brown bear, grey wolf, red fox, wolverine, moose, caribou, golden eagle and common raven.
The brown bear (Ursus arctos) is found in mountainous and semi-open areas distributed throughout the Holarctic. It once occupied much larger areas, but has been driven out by human development and the resulting habitat fragmentation. Today it is only found in remaining wilderness areas.
The grey wolf (Canis lupus) is found in a wide variety of habitats from tundra to desert, with different populations adapted for each. Its historical distribution encompasses the vast majority of the Holarctic realm, though human activities such as development and active extermination have extirpated the species from much of this range.
The red fox (Vulpes vulpes) is a highly adaptable predator. It has the widest distribution of any terrestrial carnivore, and is adapted to a wide range of habitats, including areas of intense human development. Like the wolf, it is distributed throughout the majority of the Holarctic, but it has avoided extirpation.
The wolverine (Gulo gulo) is a large member of the weasel family found primarily in the arctic and in boreal forests, ranging south in mountainous regions. It is distributed in such areas throughout Eurasia and North America.
The moose (Alces alces) is the largest member of the deer family. It is found throughout most of the boreal forest through continental Eurasia into Scandinavia, eastern North America, and boreal and montane regions of western North America. In some areas it ranges south into the deciduous forest.
The caribou, or reindeer (Rangifer tarandus) is found in boreal forest and tundra in the northern parts of the Holarctic. In Eurasia, it has been domesticated. It is divided into several subspecies, which are adapted to different habitats and geographic areas.
The golden eagle (Aquila chrysaetos) is one of the best-known birds of prey in the Northern Hemisphere. It is the most widely distributed species of eagle. Golden eagles use their agility and speed combined with powerful feet and massive, sharp talons to snatch up a variety of prey (mainly hares, rabbits, marmots and other ground squirrels).
The common raven (Corvus corax) is the most widespread of the corvids, and one of the largest. It is found in a variety of habitats, but primarily wooded northern areas. It has been known to adapt well to areas of human activity. Their distribution also makes up most of the Holarctic realm.
Leptothorax acervorum is a small red Holarctic ant widely distributed across Eurasia, ranging from central Spain and Italy to the northernmost parts of Scandinavia and Siberia.
Zygiella x-notata is a species of orb-weaving spider with a Holarctic distribution, mostly inhabiting urban and suburban regions of Europe and parts of North America.
Origin
The continuity of the northern parts of the Holarctic results from their shared glacial history. During the Pleistocene (Ice Age), these areas were subjected to repeated glaciations. Icecaps expanded, scouring the land of life and reshaping its topography. During glacial periods, species survived in refugia, small areas that maintained a suitable climate due to local geography. These areas are believed to have been primarily in southern regions, but some genetic and paleontological evidence points to additional refugia in the sheltered areas of the north.

Wherever these areas were found, they became source populations during interglacial periods. When the glaciers receded, plants and animals spread rapidly into the newly opened areas. Different taxa responded to these rapidly changing conditions in different ways. Tree species spread outward from refugia during interglacial periods, but in varied patterns, with different trees dominating in different periods. Insects, on the other hand, shifted their ranges with the climate, maintaining consistency in species for the most part throughout the period. Their high degree of mobility allowed them to move as the glaciers advanced or retreated, maintaining a constant habitat despite the climatic oscillations. Despite their apparent lack of mobility, plants managed to colonize new areas rapidly as well. Studies of fossil pollen indicate that trees recolonized these lands at an exponential rate. Mammals recolonized at varying rates. Brown bears, for instance, moved quickly from refugia with the receding glaciers, becoming one of the first large mammals to recolonize the land. The Last Glacial Period ended about 10,000 years ago, resulting in the present distribution of ecoregions.
Another factor contributing to the continuity of Holarctic ecosystems is the movement between continents allowed by the Bering land bridge, which was exposed by the lowering of sea level due to the expansion of the ice caps. The communities found in the Palearctic and the Nearctic are different, but have many species in common. This is the result of several faunal interchanges that took place across the Bering land bridge. However, these migrations were mostly limited to large, cold-tolerant species. Today it is mainly these species which are found throughout the realm.
Threats
As the Holarctic is an enormous area, it is subject to environmental problems of international scale. The primary threats throughout the region result from global warming and habitat fragmentation. The former is of particular concern in the north, as these ecosystems are adapted to cold. The latter is more of a concern in the south, where development is prevalent.
Global warming is a threat to all the Earth's ecosystems, but it is a more immediate threat to those found in cold climates. The communities of species found at these latitudes are adapted to the cold, so any significant warming can upset the balance. For instance, insects struggle to survive the cold winters typical of the boreal forest. Many do not make it, especially in harsh winters. However, recently the winters have grown milder, which has had a drastic effect on the forest. Winter mortality of some insect species drastically decreased, allowing the population to build on itself in subsequent years. In some areas the effects have been severe. Spruce beetle outbreaks have wiped out up to ninety percent of the Kenai Peninsula's spruce trees; this is blamed primarily on a series of unusually warm years since 1987.
In this case a native species has caused massive disturbance of habitat as a result of climate change. Warming temperatures may also allow pest species to enlarge their range, moving into habitats that were previously unsuitable. Studies of potential areas for outbreaks of bark beetles indicate that as the climate shifts, these beetles will expand to the north and to higher elevations than they have previously affected. With warmer temperatures, insect infestation will become a greater problem throughout the northern parts of the Holarctic.
Another potential effect of global warming on northern ecosystems is the melting of permafrost. This can have significant effects on the plant communities that are adapted to the frozen soil, and may also have implications for further climate change. As permafrost melts, any trees growing above it may die, and the land shifts from forest to peatland. In the far north, shrubs may later take over what was formerly tundra. The precise effect depends on whether the water that was locked up is able to drain off. In either case, the habitat will undergo a shift. Melting permafrost may also accelerate climate change in the future. Within the permafrost, vast quantities of carbon are locked up. If this soil melts, the carbon may be released into the air as either carbon dioxide or methane. Both of these are greenhouse gases.
Habitat fragmentation threatens a wide variety of habitats throughout the world, and the Holarctic is no exception. Fragmentation has a variety of negative effects on populations. As populations become cut off, their genetic diversity suffers and they become susceptible to sudden disasters and extinction. While the northern parts of the Holarctic represent some of the largest areas of wilderness left on Earth, the southern parts are in some places extensively developed. This realm contains most of the world's developed countries, including the United States and the nations of Western Europe. Temperate forests were the primary ecosystem in many of the areas that are most developed today. These lands are now used for intensive agriculture or have become urbanized. As lands have been developed for agricultural uses and human occupation, natural habitat has for the most part become limited to areas considered unsuitable for human use, such as slopes or rocky areas. This pattern of development limits the ability of animals, especially large ones, to migrate from place to place.
Large carnivores are particularly affected by habitat fragmentation. These mammals, such as brown bears and wolves, require large areas of land with relatively intact habitat to survive as individuals. Much larger areas are required to maintain a sustainable population. They may also serve as keystone species, regulating the populations of the species they prey on. Thus, their conservation has direct implications for a wide range of species, and is difficult to accomplish politically due to the large size of the areas they need. With increasing development, these species in particular are at risk, which could have effects that carry down throughout the ecosystem.
Conservation actions
The threats to the Holarctic realm are not going unrecognized. Many efforts are being made to mitigate these threats, with the hope of preserving the biodiversity of the region. International agreements to combat global warming may help to lessen the effects of climate change on this region. Efforts are also underway to fight habitat fragmentation, both on local and regional scales.
The most comprehensive effort to combat global warming to date is the Kyoto Protocol. Developed countries that sign the protocol agree to cut their collective greenhouse gas emissions to five percent below 1990 levels by sometime between 2008 and 2012. The vast majority of these nations are found within the Holarctic. Each country is given a target for emission levels, and they may trade emissions credits in a market-based system that includes developing countries as well. Once this period ends, a new agreement will be written to further mitigate the effects of climate change. The process of drafting a new agreement has already begun. In late 2007, an international meeting in Bali was held to begin planning for the successor to the Kyoto Protocol. This agreement will aim to build on the successes and failures of Kyoto to produce a more effective method of cutting greenhouse gas emissions (UNFCCC). If these efforts are successful, the biodiversity of the Holarctic and the rest of the world will see fewer effects of climate change.
Fighting habitat fragmentation is a major challenge in conserving the wide-ranging species of the Holarctic. Some efforts are limited to a local scale of protection, while others are regional in scope. Local efforts include creating reserves and establishing safe routes for animals to cross roads and other human-made barriers. Regional efforts to combat habitat fragmentation take a broader scope.
One major such effort in the Holarctic is the Yellowstone to Yukon Conservation Initiative. This organization was started in 1997 to help establish a contiguous network of protection for the northern Rocky Mountains, from mid Wyoming to the border between Alaska and Canada's Yukon. It brings together a wide variety of environmental organizations for a shared purpose. The goal of the Initiative is to create a core of protected areas, connected by corridors and surrounded by buffer zones. This will build on the many existing protected areas in this region, with a focus on integrating existing and future human activities into the conservation plan rather than seeking to exclude them (Yellowstone to Yukon). If these efforts are successful, they will be especially beneficial to wide-ranging species such as grizzly bears. If these species can survive, other members of the communities they live in will survive as well.
References
United Nations Framework Convention on Climate Change. Available at: http://unfccc.int/2860.php. Accessed December 2007.
Yellowstone to Yukon Conservation Initiative. Updated 2006. Available at http://www.y2y.net. Accessed December 2007. |
island country | An island country, island state, or island nation is a country whose primary territory consists of one or more islands or parts of islands. Approximately 25% of all independent countries are island countries. Island countries are historically more stable than many continental states but are vulnerable to conquest by naval superpowers. Indonesia is the largest and most populated island country in the world.
There are great variations between island country economies: they may rely mainly on extractive industries, such as mining, fishing and agriculture, and/or on services such as transit hubs, tourism, and financial services. Many islands have low-lying geographies and their economies and population centers develop along coastal plains and ports; such states may be vulnerable to the effects of climate change, especially sea level rise.
Remote or significant islands and archipelagos that are not themselves sovereign are often known as dependencies or overseas territories.
Politics
Historically, island countries have tended to be less prone to political instability than their continental counterparts. The percentage of island countries that are democratic is higher than that of continental countries.
Island territories
While island countries by definition are sovereign states, there are also several islands and archipelagos around the world that operate semi-autonomously from their official sovereign states. These are often known as dependencies or overseas territories and can be similar in nature to proper island countries.
War
Island countries have often been the basis of maritime conquest and historical rivalry between other countries.
Island countries are more susceptible to attack by large, continental countries due to their size and dependence on sea and air lines of communication.
Many island countries are also vulnerable to predation by mercenaries and other foreign invaders, although their isolation also makes them a difficult target.
Natural resources
Many developing small island countries rely heavily on fish for their main supply of food.
Some are turning to renewable energy—such as wind power, hydropower, geothermal power and biodiesel from copra oil—to defend against potential rises in oil prices.
Geography
Some island countries are more affected than other countries by climate change, which produces problems such as reduced land use, water scarcity, and sometimes even resettlement issues. Some low-lying island countries are slowly being submerged by the rising water levels of the Pacific Ocean.
Climate change also impacts island countries by causing natural disasters such as tropical cyclones, hurricanes, flash floods and droughts.
Economics
Many island countries rely heavily on imports and are greatly affected by changes in the global economy. Due to the nature of island countries their economies are often characterised by being smaller, relatively isolated from world trade and economy, more vulnerable to shipping costs, and more likely to suffer environmental damage to infrastructure; exceptions include Japan, Taiwan and the United Kingdom.
The dominant industry for many island countries is tourism.
Composition
Island countries are typically small with low populations, although some, like Indonesia, Japan, and the Philippines, are notable exceptions.
Some island countries are centred on one or two major islands, such as the United Kingdom, Trinidad and Tobago, New Zealand, Cuba, Bahrain, Singapore, Sri Lanka, Iceland, Malta, and Taiwan. Others are spread out over hundreds or thousands of smaller islands, such as Japan, Indonesia, the Philippines, The Bahamas, Seychelles, and the Maldives. Some island countries share one or more of their islands with other countries, such as the United Kingdom and Ireland; Haiti and the Dominican Republic; and Indonesia, which shares islands with Papua New Guinea, Brunei, East Timor, and Malaysia. Bahrain, Singapore, and the United Kingdom have fixed links such as bridges and tunnels to the continental landmass: Bahrain is linked to Saudi Arabia by the King Fahd Causeway, Singapore to Malaysia by the Johor–Singapore Causeway and Second Link, and the United Kingdom has a railway connection to France through the Channel Tunnel.
Geographically, Australia is considered a continental landmass rather than an island, as it comprises the main landmass of the Australian continent. In the past, however, it was considered an island country for tourism purposes (among others) and is sometimes referred to as such.
See also
References
External links
Island countries – NationsOnline.org |
ice age | An ice age is a long period of reduction in the temperature of Earth's surface and atmosphere, resulting in the presence or expansion of continental and polar ice sheets and alpine glaciers. Earth's climate alternates between ice ages and greenhouse periods, during which there are no glaciers on the planet. Earth is currently in the ice age called Quaternary glaciation. Individual pulses of cold climate within an ice age are termed glacial periods (or, alternatively, glacials, glaciations, glacial stages, stadials, stades, or colloquially, ice ages), and intermittent warm periods within an ice age are called interglacials or interstadials.
In glaciology, ice age implies the presence of extensive ice sheets in the northern and southern hemispheres. By this definition, Earth is in an interglacial period—the Holocene. The amount of anthropogenic greenhouse gases emitted into Earth's oceans and atmosphere is projected to delay the next glacial period, which otherwise would begin in around 50,000 years, by between 100,000 and 500,000 years.
History of research
In 1742, Pierre Martel (1706–1767), an engineer and geographer living in Geneva, visited the valley of Chamonix in the Alps of Savoy. Two years later he published an account of his journey. He reported that the inhabitants of that valley attributed the dispersal of erratic boulders to the glaciers, saying that they had once extended much farther. Later similar explanations were reported from other regions of the Alps. In 1815 the carpenter and chamois hunter Jean-Pierre Perraudin (1767–1858) explained erratic boulders in the Val de Bagnes in the Swiss canton of Valais as being due to glaciers previously extending further. An unknown woodcutter from Meiringen in the Bernese Oberland advocated a similar idea in a discussion with the Swiss-German geologist Jean de Charpentier (1786–1855) in 1834. Comparable explanations are also known from the Val de Ferret in the Valais and the Seeland in western Switzerland and in Goethe's scientific work. Such explanations could also be found in other parts of the world. When the Bavarian naturalist Ernst von Bibra (1806–1878) visited the Chilean Andes in 1849–1850, the natives attributed fossil moraines to the former action of glaciers.
Meanwhile, European scholars had begun to wonder what had caused the dispersal of erratic material. From the middle of the 18th century, some discussed ice as a means of transport. The Swedish mining expert Daniel Tilas (1712–1772) was, in 1742, the first person to suggest drifting sea ice was a cause of the presence of erratic boulders in the Scandinavian and Baltic regions. In 1795, the Scottish philosopher and gentleman naturalist, James Hutton (1726–1797), explained erratic boulders in the Alps by the action of glaciers. Two decades later, in 1818, the Swedish botanist Göran Wahlenberg (1780–1851) published his theory of a glaciation of the Scandinavian peninsula. He regarded glaciation as a regional phenomenon.
Only a few years later, the Danish-Norwegian geologist Jens Esmark (1762–1839) argued for a sequence of worldwide ice ages. In a paper published in 1824, Esmark proposed changes in climate as the cause of those glaciations. He attempted to show that they originated from changes in Earth's orbit. Esmark discovered the similarity between moraines near Haukalivatnet lake near sea level in Rogaland and moraines at branches of Jostedalsbreen. Esmark's discoveries were later attributed to or appropriated by Theodor Kjerulf and Louis Agassiz.
During the following years, Esmark's ideas were discussed and taken over in parts by Swedish, Scottish and German scientists. At the University of Edinburgh Robert Jameson (1774–1854) seemed to be relatively open to Esmark's ideas, as reviewed by Norwegian professor of glaciology Bjørn G. Andersen (1992). Jameson's remarks about ancient glaciers in Scotland were most probably prompted by Esmark. In Germany, Albrecht Reinhard Bernhardi (1797–1849), a geologist and professor of forestry at an academy in Dreissigacker (since incorporated in the southern Thuringian city of Meiningen), adopted Esmark's theory. In a paper published in 1832, Bernhardi speculated about the polar ice caps once reaching as far as the temperate zones of the globe.
In Val de Bagnes, a valley in the Swiss Alps, there was a long-held local belief that the valley had once been covered deep in ice, and in 1815 a local chamois hunter called Jean-Pierre Perraudin attempted to convert the geologist Jean de Charpentier to the idea, pointing to deep striations in the rocks and giant erratic boulders as evidence. Charpentier held the general view that these signs were caused by vast floods, and he rejected Perraudin's theory as absurd. 
In 1818 the engineer Ignatz Venetz joined Perraudin and Charpentier to examine a proglacial lake above the valley created by an ice dam as a result of the 1815 eruption of Mount Tambora, which threatened to cause a catastrophic flood when the dam broke. Perraudin attempted unsuccessfully to convert his companions to his theory, but when the dam finally broke, there were only minor erratics and no striations, and Venetz concluded that Perraudin was right and that only ice could have caused such major results. In 1821 he read a prize-winning paper on the theory to the Swiss Society, but it was not published until Charpentier, who had also become converted, published it with his own more widely read paper in 1834.
In the meantime, the German botanist Karl Friedrich Schimper (1803–1867) was studying mosses which were growing on erratic boulders in the alpine upland of Bavaria. He began to wonder where such masses of stone had come from. During the summer of 1835 he made some excursions to the Bavarian Alps. Schimper came to the conclusion that ice must have been the means of transport for the boulders in the alpine upland. In the winter of 1835 to 1836 he held some lectures in Munich. Schimper then assumed that there must have been global times of obliteration ("Verödungszeiten") with a cold climate and frozen water. Schimper spent the summer months of 1836 at Devens, near Bex, in the Swiss Alps with his former university friend Louis Agassiz (1801–1873) and Jean de Charpentier. Schimper, Charpentier and possibly Venetz convinced Agassiz that there had been a time of glaciation. During the winter of 1836/37, Agassiz and Schimper developed the theory of a sequence of glaciations. They mainly drew upon the preceding works of Venetz, Charpentier and on their own fieldwork. Agassiz appears to have been already familiar with Bernhardi's paper at that time. At the beginning of 1837, Schimper coined the term "ice age" ("Eiszeit") for the period of the glaciers.
In July 1837 Agassiz presented their synthesis before the annual meeting of the Swiss Society for Natural Research at Neuchâtel. The audience was very critical, and some were opposed to the new theory because it contradicted the established opinions on climatic history. Most contemporary scientists thought that Earth had been gradually cooling down since its birth as a molten globe.
In order to persuade the skeptics, Agassiz embarked on geological fieldwork. He published his book Study on Glaciers ("Études sur les glaciers") in 1840. Charpentier was put out by this, as he had also been preparing a book about the glaciation of the Alps. Charpentier felt that Agassiz should have given him precedence as it was he who had introduced Agassiz to in-depth glacial research. As a result of personal quarrels, Agassiz had also omitted any mention of Schimper in his book.
It took several decades before the ice age theory was fully accepted by scientists. This happened on an international scale in the second half of the 1870s, following the work of James Croll, including the publication of Climate and Time, in Their Geological Relations in 1875, which provided a credible explanation for the causes of ice ages.
Evidence
There are three main types of evidence for ice ages: geological, chemical, and paleontological.
Geological evidence for ice ages comes in various forms, including rock scouring and scratching, glacial moraines, drumlins, valley cutting, and the deposition of till or tillites and glacial erratics. Successive glaciations tend to distort and erase the geological evidence for earlier glaciations, making it difficult to interpret. Furthermore, this evidence was difficult to date exactly; early theories assumed that the glacials were short compared to the long interglacials. The advent of sediment and ice cores revealed the true situation: glacials are long, interglacials short. It took some time for the current theory to be worked out.
The chemical evidence mainly consists of variations in the ratios of isotopes in fossils present in sediments and sedimentary rocks and ocean sediment cores. For the most recent glacial periods, ice cores provide climate proxies, both from the ice itself and from atmospheric samples provided by included bubbles of air. Because water containing lighter isotopes has a lower heat of evaporation, its proportion decreases with warmer conditions. This allows a temperature record to be constructed. This evidence can be confounded, however, by other factors recorded by isotope ratios.
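The isotope-ratio proxy described above is conventionally expressed in delta notation; for reference, the standard definition (general background, not stated in the text) is:

```latex
\delta^{18}\mathrm{O} =
\left(
  \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}
       {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000
```

Values are reported in per mil (‰) relative to an agreed standard. Because the lighter 16O evaporates preferentially, precipitation falling in colder conditions is depleted in 18O, so lower δ18O in glacial ice corresponds to colder climates; this is what lets a temperature record be read from cores.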
The paleontological evidence consists of changes in the geographical distribution of fossils. During a glacial period, cold-adapted organisms spread into lower latitudes, and organisms that prefer warmer conditions become extinct or retreat into lower latitudes. This evidence is also difficult to interpret because it requires:
sequences of sediments covering a long period of time, over a wide range of latitudes and which are easily correlated;
ancient organisms which survive for several million years without change and whose temperature preferences are easily diagnosed; and
the finding of the relevant fossils.
Despite the difficulties, analysis of ice core and ocean sediment cores has provided a credible record of glacials and interglacials over the past few million years. These also confirm the linkage between ice ages and continental crust phenomena such as glacial moraines, drumlins, and glacial erratics. Hence the continental crust phenomena are accepted as good evidence of earlier ice ages when they are found in layers created much earlier than the time range for which ice cores and ocean sediment cores are available.
Major ice ages
There have been at least five major ice ages in Earth's history (the Huronian, Cryogenian, Andean-Saharan, late Paleozoic, and the latest Quaternary Ice Age). Outside these ages, Earth was previously thought to have been ice-free even in high latitudes; such periods are known as greenhouse periods. However, other studies dispute this, finding evidence of occasional glaciations at high latitudes even during apparent greenhouse periods.
Rocks from the earliest well-established ice age, called the Huronian, have been dated to around 2.4 to 2.1 billion years ago during the early Proterozoic Eon. Several hundreds of kilometers of the Huronian Supergroup are exposed 10 to 100 kilometers (6 to 62 mi) north of the north shore of Lake Huron, extending from near Sault Ste. Marie to Sudbury, northeast of Lake Huron, with giant layers of now-lithified till beds, dropstones, varves, outwash, and scoured basement rocks. Correlative Huronian deposits have been found near Marquette, Michigan, and correlation has been made with Paleoproterozoic glacial deposits from Western Australia. The Huronian ice age was caused by the elimination of atmospheric methane, a greenhouse gas, during the Great Oxygenation Event.
The next well-documented ice age, and probably the most severe of the last billion years, occurred from 720 to 630 million years ago (the Cryogenian period) and may have produced a Snowball Earth in which glacial ice sheets reached the equator, possibly being ended by the accumulation of greenhouse gases such as CO2 produced by volcanoes. "The presence of ice on the continents and pack ice on the oceans would inhibit both silicate weathering and photosynthesis, which are the two major sinks for CO2 at present." It has been suggested that the end of this ice age was responsible for the subsequent Ediacaran and Cambrian explosion, though this model is recent and controversial.
The Andean-Saharan occurred from 460 to 420 million years ago, during the Late Ordovician and the Silurian period.
The evolution of land plants at the onset of the Devonian period caused a long term increase in planetary oxygen levels and reduction of CO2 levels, which resulted in the late Paleozoic icehouse. Its former name, the Karoo glaciation, was named after the glacial tills found in the Karoo region of South Africa. There were extensive polar ice caps at intervals from 360 to 260 million years ago in South Africa during the Carboniferous and early Permian periods. Correlatives are known from Argentina, also in the center of the ancient supercontinent Gondwanaland.
Although the Mesozoic Era retained a greenhouse climate over its timespan and was previously assumed to have been entirely glaciation-free, more recent studies suggest that brief periods of glaciation occurred in both hemispheres during the Early Cretaceous. Geologic and palaeoclimatological records suggest the existence of glacial periods during the Valanginian, Hauterivian, and Aptian stages of the Early Cretaceous. Ice-rafted glacial dropstones indicate that in the Northern Hemisphere, ice sheets may have extended as far south as the Iberian Peninsula during the Hauterivian and Aptian. Although ice sheets largely disappeared from Earth for the rest of the period (potential reports from the Turonian, otherwise the warmest period of the Phanerozoic, are disputed), ice sheets and associated sea ice appear to have briefly returned to Antarctica near the very end of the Maastrichtian just prior to the Cretaceous-Paleogene extinction event.
The Quaternary Glaciation / Quaternary Ice Age started about 2.58 million years ago at the beginning of the Quaternary Period when the spread of ice sheets in the Northern Hemisphere began. Since then, the world has seen cycles of glaciation with ice sheets advancing and retreating on 40,000- and 100,000-year time scales called glacial periods, glacials or glacial advances, and interglacial periods, interglacials or glacial retreats. Earth is currently in an interglacial, and the last glacial period ended about 11,700 years ago. All that remains of the continental ice sheets are the Greenland and Antarctic ice sheets and smaller glaciers such as on Baffin Island.
The definition of the Quaternary as beginning 2.58 Ma is based on the formation of the Arctic ice cap. The Antarctic ice sheet began to form earlier, at about 34 Ma, in the mid-Cenozoic (Eocene-Oligocene Boundary). The term Late Cenozoic Ice Age is used to include this early phase.
Ice ages can be further divided by location and time; for example, the names Riss (180,000–130,000 years BP) and Würm (70,000–10,000 years BP) refer specifically to glaciation in the Alpine region. The maximum extent of the ice is not maintained for the full interval. The scouring action of each glaciation tends to remove most of the evidence of prior ice sheets almost completely, except in regions where the later sheet does not achieve full coverage.
Glacials and interglacials
Within the current glaciation, more temperate and more severe periods have occurred. The colder periods are called glacial periods, the warmer periods interglacials, such as the Eemian Stage. There is evidence that similar glacial cycles occurred in previous glaciations, including the Andean-Saharan and the late Paleozoic ice house. The glacial cycles of the late Paleozoic ice house are likely responsible for the deposition of cyclothems.
Glacials are characterized by cooler and drier climates over most of Earth and large land and sea ice masses extending outward from the poles. Mountain glaciers in otherwise unglaciated areas extend to lower elevations due to a lower snow line. Sea levels drop due to the removal of large volumes of water above sea level in the icecaps. There is evidence that ocean circulation patterns are disrupted by glaciations. The glacials and interglacials coincide with changes in orbital forcing of climate due to Milankovitch cycles, which are periodic changes in Earth's orbit and the tilt of Earth's rotational axis.
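The orbital pacing just described can be made concrete with a toy sum of sinusoids at the canonical Milankovitch periods (roughly 100 kyr eccentricity, 41 kyr obliquity, 23 kyr precession). The amplitudes below are arbitrary illustrative values, and a three-sine sum is only a caricature: real insolation curves are computed from celestial mechanics.

```python
import math

# Canonical Milankovitch periods (kyr); amplitudes are arbitrary,
# chosen only to make the quasi-periodic beat pattern visible.
CYCLES = [
    (100.0, 1.00),  # eccentricity
    (41.0, 0.55),   # obliquity
    (23.0, 0.30),   # precession
]

def toy_orbital_forcing(t_kyr: float) -> float:
    """Dimensionless stand-in for orbital forcing at time t (in kyr)."""
    return sum(amp * math.sin(2 * math.pi * t_kyr / period)
               for period, amp in CYCLES)

# Sampling 400 kyr of this toy curve gives irregular highs and lows
# rather than one clean cycle -- the kind of quasi-periodic pacing
# associated with glacial/interglacial swings.
curve = [toy_orbital_forcing(t) for t in range(400)]
```

Because the three periods are incommensurate, the summed curve never exactly repeats, which is why glacial cycles appear quasi-periodic rather than strictly regular.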
Earth has been in an interglacial period known as the Holocene for around 11,700 years, and an article in Nature in 2004 argues that it might be most analogous to a previous interglacial that lasted 28,000 years. Predicted changes in orbital forcing suggest that the next glacial period would begin at least 50,000 years from now. Moreover, anthropogenic forcing from increased greenhouse gases is estimated to potentially outweigh the orbital forcing of the Milankovitch cycles for hundreds of thousands of years.
Feedback processes
Each glacial period is subject to positive feedback which makes it more severe, and negative feedback which mitigates and (in all cases so far) eventually ends it.
Positive
An important form of feedback is provided by Earth's albedo, which is how much of the sun's energy is reflected rather than absorbed by Earth. Ice and snow increase Earth's albedo, while forests reduce its albedo. When the air temperature decreases, ice and snow fields grow, and they reduce forest cover. This continues until competition with a negative feedback mechanism forces the system to an equilibrium.
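The albedo feedback described above can be sketched with a zero-dimensional energy-balance model. This is a textbook simplification, not anything from the article; the albedo values are round illustrative numbers.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0      # solar constant at Earth's orbit, W m^-2

def equilibrium_temp(albedo: float) -> float:
    """Effective temperature (K) at which absorbed sunlight balances
    emitted blackbody radiation, averaged over the whole sphere."""
    absorbed = S0 * (1.0 - albedo) / 4.0  # divide by 4: sphere vs. disc
    return (absorbed / SIGMA) ** 0.25

# More ice and snow -> higher albedo -> colder equilibrium, which in
# turn favours still more ice: the positive feedback described above.
t_present = equilibrium_temp(0.30)  # roughly today's planetary albedo
t_icier = equilibrium_temp(0.40)    # a more ice-covered Earth
assert t_icier < t_present
```

With these numbers the effective temperature drops from about 255 K to about 245 K, i.e. roughly 10 K of cooling for a 0.1 increase in albedo, before any greenhouse warming is added back in.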
One theory is that when glaciers form, two things happen: the ice grinds rocks into dust, and the land becomes dry and arid. This allows winds to transport iron-rich dust into the open ocean, where it acts as a fertilizer that causes massive algal blooms that pull large amounts of CO2 out of the atmosphere. This in turn makes it even colder and causes the glaciers to grow more.
In 1956, Ewing and Donn hypothesized that an ice-free Arctic Ocean leads to increased snowfall at high latitudes. When low-temperature ice covers the Arctic Ocean there is little evaporation or sublimation and the polar regions are quite dry in terms of precipitation, comparable to the amount found in mid-latitude deserts. This low precipitation allows high-latitude snowfalls to melt during the summer. An ice-free Arctic Ocean absorbs solar radiation during the long summer days, and evaporates more water into the Arctic atmosphere. With higher precipitation, portions of this snow may not melt during the summer and so glacial ice can form at lower altitudes and more southerly latitudes, reducing the temperatures over land by increased albedo as noted above. Furthermore, under this hypothesis the lack of oceanic pack ice allows increased exchange of waters between the Arctic and the North Atlantic Oceans, warming the Arctic and cooling the North Atlantic. (Current projected consequences of global warming include a brief ice-free Arctic Ocean period by 2050.) Additional fresh water flowing into the North Atlantic during a warming cycle may also reduce the global ocean water circulation. Such a reduction (by reducing the effects of the Gulf Stream) would have a cooling effect on northern Europe, which in turn would lead to increased low-latitude snow retention during the summer. It has also been suggested that during an extensive glacial, glaciers may move through the Gulf of Saint Lawrence, extending into the North Atlantic Ocean far enough to block the Gulf Stream.
Negative
Ice sheets that form during glaciations erode the land beneath them. This can reduce the land area above sea level and thus diminish the amount of space on which ice sheets can form. This mitigates the albedo feedback, as does the rise in sea level that accompanies the reduced area of ice sheets, since open ocean has a lower albedo than land.

Another negative feedback mechanism is the increased aridity occurring with glacial maxima, which reduces the precipitation available to maintain glaciation. The glacial retreat induced by this or any other process can be amplified by similar inverse positive feedbacks as for glacial advances.

According to research published in Nature Geoscience, human emissions of carbon dioxide (CO2) will defer the next glacial period. Researchers used data on Earth's orbit to find the historical warm interglacial period that most resembles the current one, and from this predicted that the next glacial period would ordinarily begin within 1,500 years. They go on to predict that, because emissions have been so high, it will not.
Causes
The causes of ice ages are not fully understood, either for the large-scale ice age periods or for the smaller ebb and flow of glacial–interglacial periods within an ice age. The consensus is that several factors are important: atmospheric composition, such as the concentrations of carbon dioxide and methane (levels which can now be read directly from ice core samples such as the European Project for Ice Coring in Antarctica (EPICA) Dome C core, covering the past 800,000 years); changes in Earth's orbit around the Sun, known as Milankovitch cycles; the motion of tectonic plates, which changes the relative location and amount of continental and oceanic crust on Earth's surface and thereby affects wind and ocean currents; variations in solar output; the orbital dynamics of the Earth–Moon system; the impact of relatively large meteorites; and volcanism, including eruptions of supervolcanoes.

Some of these factors influence each other. For example, changes in Earth's atmospheric composition (especially the concentrations of greenhouse gases) may alter the climate, while climate change itself can change the atmospheric composition (for example, by changing the rate at which weathering removes CO2).
Maureen Raymo, William Ruddiman and others propose that the Tibetan and Colorado Plateaus are immense CO2 "scrubbers" with a capacity to remove enough CO2 from the global atmosphere to be a significant causal factor of the 40-million-year Cenozoic cooling trend. They further claim that approximately half of their uplift (and CO2 "scrubbing" capacity) occurred in the past 10 million years.
Changes in Earth's atmosphere
There is evidence that greenhouse gas levels fell at the start of ice ages and rose during the retreat of the ice sheets, but it is difficult to establish cause and effect (see the notes above on the role of weathering). Greenhouse gas levels may also have been affected by other factors which have been proposed as causes of ice ages, such as the movement of continents and volcanism.
The Snowball Earth hypothesis maintains that the severe freezing in the late Proterozoic was ended by an increase in CO2 levels in the atmosphere, mainly from volcanoes, and some supporters of Snowball Earth argue that it was caused in the first place by a reduction in atmospheric CO2. The hypothesis also warns of future Snowball Earths.
In 2009, further evidence was provided that changes in solar insolation provide the initial trigger for Earth to warm after an Ice Age, with secondary factors like increases in greenhouse gases accounting for the magnitude of the change.
Position of the continents
The geological record appears to show that ice ages start when the continents are in positions which block or reduce the flow of warm water from the equator to the poles and thus allow ice sheets to form. The ice sheets increase Earth's reflectivity and thus reduce the absorption of solar radiation. With less radiation absorbed the atmosphere cools; the cooling allows the ice sheets to grow, which further increases reflectivity in a positive feedback loop. The ice age continues until the reduction in weathering causes an increase in the greenhouse effect.
There are three main contributors from the layout of the continents that obstruct the movement of warm water to the poles:
A continent sits on top of a pole, as Antarctica does today.
A polar sea is almost land-locked, as the Arctic Ocean is today.
A supercontinent covers most of the equator, as Rodinia did during the Cryogenian period.

Since today's Earth has a continent over the South Pole and an almost land-locked ocean over the North Pole, geologists believe that Earth will continue to experience glacial periods in the geologically near future.
Some scientists believe that the Himalayas are a major factor in the current ice age, because these mountains have increased Earth's total rainfall and therefore the rate at which carbon dioxide is washed out of the atmosphere, decreasing the greenhouse effect. The Himalayas' formation started about 70 million years ago when the Indo-Australian Plate collided with the Eurasian Plate, and the Himalayas are still rising by about 5 mm per year because the Indo-Australian plate is still moving at 67 mm/year. The history of the Himalayas broadly fits the long-term decrease in Earth's average temperature since the mid-Eocene, 40 million years ago.
Fluctuations in ocean currents
Another important contribution to ancient climate regimes is the variation of ocean currents, which are modified by continent position, sea levels and salinity, as well as other factors. They have the ability to cool (e.g. aiding the creation of Antarctic ice) and the ability to warm (e.g. giving the British Isles a temperate as opposed to a boreal climate). The closing of the Isthmus of Panama about 3 million years ago may have ushered in the present period of strong glaciation over North America by ending the exchange of water between the tropical Atlantic and Pacific Oceans.

Analyses suggest that ocean current fluctuations can adequately account for recent glacial oscillations. During the last glacial period the sea level fluctuated by 20–30 m as water was sequestered, primarily in the Northern Hemisphere ice sheets. When ice collected and the sea level dropped sufficiently, flow through the Bering Strait (the narrow strait between Siberia and Alaska, about 50 m deep today) was reduced, resulting in increased flow from the North Atlantic. This realigned the thermohaline circulation in the Atlantic, increasing heat transport into the Arctic, which melted the polar ice accumulation and reduced other continental ice sheets. The release of water raised sea levels again, restoring the ingress of colder water from the Pacific, with an accompanying shift to Northern Hemisphere ice accumulation.

According to a study published in Nature in 2021, all glacial periods of ice ages over the last 1.5 million years were associated with northward shifts of melting Antarctic icebergs, which changed ocean circulation patterns, leading to more CO2 being pulled out of the atmosphere. The authors suggest that this process may be disrupted in the future as the Southern Ocean becomes too warm for the icebergs to travel far enough to trigger these changes.
Uplift of the Tibetan plateau
Matthias Kuhle's geological theory of Ice Age development was suggested by the existence of an ice sheet covering the Tibetan Plateau during the Ice Ages, at least during the Last Glacial Maximum. According to Kuhle, the plate-tectonic uplift of Tibet past the snow-line has led to a surface of c. 2,400,000 square kilometres (930,000 sq mi) changing from bare land to ice with a 70% greater albedo. The reflection of energy into space resulted in a global cooling, triggering the Pleistocene Ice Age. Because this highland is at a subtropical latitude, with 4 to 5 times the insolation of high-latitude areas, what would have been Earth's strongest heating surface turned into a cooling surface.
Kuhle explains the interglacial periods by the 100,000-year cycle of radiation changes due to variations in Earth's orbit. This comparatively insignificant warming, when combined with the lowering of the Nordic inland ice areas and Tibet due to the weight of the superimposed ice-load, has led to the repeated complete thawing of the inland ice areas.
Variations in Earth's orbit
The Milankovitch cycles are a set of cyclic variations in characteristics of Earth's orbit around the Sun. Each cycle has a different length, so at some times their effects reinforce each other and at other times they (partially) cancel each other.
There is strong evidence that the Milankovitch cycles affect the occurrence of glacial and interglacial periods within an ice age. The present ice age is the most studied and best understood, particularly the last 400,000 years, since this is the period covered by ice cores that record atmospheric composition and proxies for temperature and ice volume. Within this period, the match of glacial/interglacial frequencies to the Milanković orbital forcing periods is so close that orbital forcing is generally accepted. The combined effects of the changing distance to the Sun, the precession of Earth's axis, and the changing tilt of Earth's axis redistribute the sunlight received by Earth. Of particular importance are changes in the tilt of Earth's axis, which affect the intensity of seasons. For example, the amount of solar influx in July at 65 degrees north latitude varies by as much as 22% (from 450 W/m2 to 550 W/m2). It is widely believed that ice sheets advance when summers become too cool to melt all of the accumulated snowfall from the previous winter. Some believe that the strength of the orbital forcing is too small to trigger glaciations, but feedback mechanisms like CO2 may explain this mismatch.
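The ~22% July insolation swing at 65° N quoted above is just the relative change between the two quoted extremes. A minimal sketch of that arithmetic, using only the values given in the text:

```python
# The ~22% July insolation variation at 65 degrees N quoted above is the
# relative difference between the two extremes given in the text.
low, high = 450.0, 550.0  # W/m^2, extremes quoted in the text

variation = (high - low) / low  # change relative to the minimum
print(f"relative variation: {variation:.1%}")  # ~22.2%
```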
While Milankovitch forcing predicts that cyclic changes in Earth's orbital elements can be expressed in the glaciation record, additional explanations are necessary to explain which cycles are observed to be most important in the timing of glacial–interglacial periods. In particular, during the last 800,000 years, the dominant period of glacial–interglacial oscillation has been 100,000 years, which corresponds to changes in Earth's orbital eccentricity and orbital inclination. Yet this is by far the weakest of the three frequencies predicted by Milankovitch. During the period 3.0–0.8 million years ago, the dominant pattern of glaciation corresponded to the 41,000-year period of changes in Earth's obliquity (tilt of the axis). The reasons for the dominance of one frequency versus another are poorly understood and an active area of current research, but the answer probably relates to some form of resonance in Earth's climate system. Recent work suggests that the 100,000-year cycle dominates because of increased southern-pole sea ice increasing total solar reflectivity.

The "traditional" Milankovitch explanation struggles to explain the dominance of the 100,000-year cycle over the last eight cycles. Richard A. Muller, Gordon J. F. MacDonald, and others have pointed out that those calculations are for a two-dimensional orbit of Earth, but the three-dimensional orbit also has a 100,000-year cycle of orbital inclination. They proposed that these variations in orbital inclination lead to variations in insolation, as Earth moves in and out of known dust bands in the solar system. Although this is a different mechanism from the traditional view, the "predicted" periods over the last 400,000 years are nearly the same.
The Muller and MacDonald theory, in turn, has been challenged by Jose Antonio Rial.

Another worker, William Ruddiman, has suggested a model that explains the 100,000-year cycle by the modulating effect of eccentricity (weak 100,000-year cycle) on precession (26,000-year cycle) combined with greenhouse gas feedbacks in the 41,000- and 26,000-year cycles. Yet another theory has been advanced by Peter Huybers, who argued that the 41,000-year cycle has always been dominant, but that Earth has entered a mode of climate behavior where only the second or third cycle triggers an ice age. This would imply that the 100,000-year periodicity is really an illusion created by averaging together cycles lasting 80,000 and 120,000 years. This theory is consistent with a simple empirical multi-state model proposed by Didier Paillard, who suggests that the late Pleistocene glacial cycles can be seen as jumps between three quasi-stable climate states. The jumps are induced by the orbital forcing, while in the early Pleistocene the 41,000-year glacial cycles resulted from jumps between only two climate states. A dynamical model explaining this behavior was proposed by Peter Ditlevsen. This supports the suggestion that the late Pleistocene glacial cycles are not due to the weak 100,000-year eccentricity cycle, but are a non-linear response to mainly the 41,000-year obliquity cycle.
Variations in the Sun's energy output
There are at least two types of variation in the Sun's energy output:
In the very long term, astrophysicists believe that the Sun's output increases by about 7% every one billion years.
Shorter-term variations such as sunspot cycles, and longer episodes such as the Maunder Minimum, which occurred during the coldest part of the Little Ice Age.

The long-term increase in the Sun's output cannot be a cause of ice ages, since it is far too gradual and monotonic to explain cyclic glaciations.
Volcanism
Volcanic eruptions may have contributed to the inception and/or the end of ice age periods. At various times in the geological past, carbon dioxide levels were two or three times greater than today. Volcanoes and movements in continental plates contributed to high amounts of CO2 in the atmosphere. Carbon dioxide from volcanoes probably contributed to periods with the highest overall temperatures. One suggested explanation of the Paleocene–Eocene Thermal Maximum is that undersea volcanoes released methane from clathrates and thus caused a large and rapid increase in the greenhouse effect. There appears to be no geological evidence for such eruptions at the right time, but this does not prove they did not happen.
Recent glacial and interglacial phases
The current geological period, the Quaternary, began about 2.6 million years ago and extends into the present. It is marked by alternating warm and cold episodes: cold phases called glacials, lasting about 100,000 years, interrupted by warmer interglacials lasting about 10,000–15,000 years. The last cold episode of the Last Glacial Period ended about 10,000 years ago. Earth is currently in an interglacial period of the Quaternary, called the Holocene.
Glacial stages in North America
The major glacial stages of the current ice age in North America are the Illinoian, Eemian and Wisconsin glaciation. The use of the Nebraskan, Afton, Kansan, and Yarmouthian stages to subdivide the ice age in North America was discontinued by Quaternary geologists and geomorphologists; these stages were all merged into the Pre-Illinoian in the 1980s.

During the most recent North American glaciation, during the latter part of the Last Glacial Maximum (26,000 to 13,300 years ago), ice sheets extended to about the 45th parallel north. These sheets were 3 to 4 kilometres (1.9 to 2.5 mi) thick.
This Wisconsin glaciation left widespread impacts on the North American landscape. The Great Lakes and the Finger Lakes were carved by ice deepening old valleys. Most of the lakes in Minnesota and Wisconsin were gouged out by glaciers and later filled with glacial meltwaters. The old Teays River drainage system was radically altered and largely reshaped into the Ohio River drainage system. Other rivers were dammed and diverted to new channels, such as Niagara Falls, which formed a dramatic waterfall and gorge, when the waterflow encountered a limestone escarpment. Another similar waterfall, at the present Clark Reservation State Park near Syracuse, New York, is now dry.
The area from Long Island to Nantucket, Massachusetts was formed from glacial till, and the plethora of lakes on the Canadian Shield in northern Canada can be almost entirely attributed to the action of the ice. As the ice retreated and the rock dust dried, winds carried the material hundreds of miles, forming beds of loess many dozens of feet thick in the Missouri Valley. Post-glacial rebound continues to reshape the Great Lakes and other areas formerly under the weight of the ice sheets.
The Driftless Area, a portion of western and southwestern Wisconsin along with parts of adjacent Minnesota, Iowa, and Illinois, was not covered by glaciers.
Last Glacial Period in the semiarid Andes around Aconcagua and Tupungato
An especially interesting climatic change during glacial times took place in the semi-arid Andes. Besides the expected cooling relative to the current climate, a significant change in precipitation occurred there. Research in the presently semiarid subtropical Aconcagua massif (6,962 m) has revealed an unexpectedly extensive glaciation of the "ice stream network" type. Connected valley glaciers exceeding 100 km in length flowed down the east side of this section of the Andes, at 32–34°S and 69–71°W, to an elevation of 2,060 m, and on the windward western side clearly lower still. Where current glaciers scarcely reach 10 km in length, the snowline (equilibrium line altitude, ELA) runs at a height of 4,600 m; at that time it was lowered to about 3,200 m asl, a depression of about 1,400 m. It follows that, besides an annual temperature depression of about 8.4 °C, there was an increase in precipitation there. Accordingly, at glacial times the humid climatic belt that today lies several degrees of latitude further south was shifted much further north.
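The quoted ~8.4 °C temperature depression follows from the ~1,400 m snowline lowering via an atmospheric lapse rate. A hedged sketch of that arithmetic; the lapse-rate value of 0.6 °C per 100 m is an assumption (a commonly used tropospheric mean), not a figure from the text:

```python
# Lapse-rate arithmetic implied by the snowline figures in the text.
# Assumption (not stated in the text): a mean lapse rate of 0.6 degC per 100 m;
# the underlying study's exact gradient may differ.
ela_today   = 4600.0  # m asl, current snowline (ELA), from the text
ela_glacial = 3200.0  # m asl, glacial snowline, from the text
lapse_rate  = 0.006   # degC per metre (assumed)

depression = ela_today - ela_glacial   # 1400 m of snowline lowering
delta_t = depression * lapse_rate      # implied cooling, ~8.4 degC
print(f"ELA lowered by {depression:.0f} m -> ~{delta_t:.1f} degC colder")
```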
Effects of glaciation
Although the last glacial period ended more than 8,000 years ago, its effects can still be felt today. For example, the moving ice carved out the landscape in Canada (See Canadian Arctic Archipelago), Greenland, northern Eurasia and Antarctica. The erratic boulders, till, drumlins, eskers, fjords, kettle lakes, moraines, cirques, horns, etc., are typical features left behind by the glaciers. The weight of the ice sheets was so great that they deformed Earth's crust and mantle. After the ice sheets melted, the ice-covered land rebounded. Due to the high viscosity of Earth's mantle, the flow of mantle rocks which controls the rebound process is very slow—at a rate of about 1 cm/year near the center of rebound area today.
During glaciation, water was taken from the oceans to form the ice at high latitudes; global sea level dropped by about 110 meters, exposing the continental shelves and forming land bridges between land masses over which animals could migrate. During deglaciation, the melted ice water returned to the oceans, causing sea level to rise. This process can cause sudden shifts in coastlines and drainage systems, resulting in newly submerged lands, emerging lands, collapsed ice dams resulting in salination of lakes, new ice dams creating vast areas of freshwater, and a general alteration in regional weather patterns on a large but temporary scale. It can even cause temporary reglaciation. This type of chaotic pattern of rapidly changing land, ice, saltwater and freshwater has been proposed as the likely model for the Baltic and Scandinavian regions, as well as much of central North America at the end of the last glacial maximum, with the present-day coastlines only being achieved in the last few millennia of prehistory. Also, rising sea levels submerged a vast continental plain that had existed under much of what is now the North Sea, connecting the British Isles to Continental Europe.

The redistribution of ice-water on the surface of Earth and the flow of mantle rocks causes changes in the gravitational field as well as changes to the distribution of the moment of inertia of Earth. These changes to the moment of inertia result in a change in the angular velocity, axis, and wobble of Earth's rotation.
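The ~110 m sea-level drop quoted above can be turned into a rough figure for the volume of water locked up in the ice sheets. An order-of-magnitude sketch; the ocean-area value is an assumption, not from the text, and changes in ocean area and the ice/water density difference are ignored:

```python
# Order-of-magnitude sketch: seawater volume implied by the ~110 m glacial
# sea-level drop. Assumptions (not from the text): modern ocean area of
# ~3.6e8 km^2, held constant; ice/water density difference ignored.
ocean_area_km2 = 3.6e8     # assumed modern ocean surface area
sea_level_drop_km = 0.110  # ~110 m, from the text

water_volume_km3 = ocean_area_km2 * sea_level_drop_km  # ~4e7 km^3
print(f"~{water_volume_km3:.2e} km^3 of seawater locked up in ice sheets")
```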
The weight of the redistributed surface mass loaded the lithosphere, caused it to flex and also induced stress within Earth. The presence of the glaciers generally suppressed the movement of faults below. During deglaciation, the faults experience accelerated slip triggering earthquakes. Earthquakes triggered near the ice margin may in turn accelerate ice calving and may account for the Heinrich events. As more ice is removed near the ice margin, more intraplate earthquakes are induced and this positive feedback may explain the fast collapse of ice sheets.
In Europe, glacial erosion and isostatic sinking from the weight of ice made the Baltic Sea, which before the Ice Age was all land drained by the Eridanos River.
Future ice ages
A 2015 report by the Past Global Changes Project says simulations show that a new glaciation is unlikely to happen within the next approximately 50,000 years, before the next strong drop in Northern Hemisphere summer insolation occurs, "if either atmospheric CO2 concentration remains above 300 ppm or cumulative carbon emissions exceed 1000 Pg C" (i.e. 1,000 gigatonnes of carbon). "Only for an atmospheric CO2 content below the preindustrial level may a glaciation occur within the next 10 ka. ... Given the continued anthropogenic CO2 emissions, glacial inception is very unlikely to occur in the next 50 ka, because the timescale for CO2 and temperature reduction toward unperturbed values in the absence of active removal is very long [IPCC, 2013], and only weak precessional forcing occurs in the next two precessional cycles." (A precessional cycle is around 21,000 years, the time it takes for the perihelion to move all the way around the tropical year.)

Ice ages go through cycles of about 100,000 years, but the next one may well be avoided as a result of anthropogenic carbon dioxide emissions.
See also
References
Works cited
Montgomery, Keith (2010). "Development of the glacial theory, 1800–1870". Historical Simulation
External links
Cracking the Ice Age from PBS
Rina Torchinsky (9 Aug 2021). "Scientists unveil 'best-preserved Ice Age animal ever found'". AccuWeather.
Raymo, M. (July 2011). "Overview of the Uplift-Weathering Hypothesis". Archived from the original on 2008-10-22.
Eduard Y. Osipov, Oleg M. Khlystov. Glaciers and meltwater flux to Lake Baikal during the Last Glacial Maximum. Archived 2016-03-12 at the Wayback Machine
Black, R. (9 January 2012). "Carbon emissions 'will defer Ice Age'". BBC News: Science and Environment. |
world economic forum | The World Economic Forum (WEF) is an international non-governmental organization for public–private sector collaboration based in Cologny, Canton of Geneva, Switzerland. It was founded on 24 January 1971 by German engineer Klaus Schwab. The foundation, which is mostly funded by its 1,000 member companies – typically global enterprises with more than US$5 billion in turnover – as well as public subsidies, views its own mission as "improving the state of the world by engaging business, political, academic, and other leaders of society to shape global, regional, and industry agendas".

The WEF is mostly known for its annual meeting at the end of January in Davos, a mountain resort in the eastern Alps region of Switzerland. The meeting brings together some 3,000 paying members and selected participants – among whom are investors, business leaders, political leaders, economists, celebrities and journalists – for up to five days to discuss global issues across 500 sessions.
Aside from Davos, the organization convenes regional conferences in locations across Africa, East Asia, Latin America, and India and holds two additional annual meetings in China and the United Arab Emirates. It furthermore produces a series of reports, engages its members in sector-specific initiatives and provides a platform for leaders from selected stakeholder groups to collaborate on projects and initiatives.

The Forum suggests that a globalised world is best managed by a self-selected coalition of multinational corporations, governments and civil society organizations (CSOs), which it expresses through initiatives like the "Great Reset" and the "Global Redesign".

The World Economic Forum and its annual meeting in Davos have received criticism over the years, including over the organization's corporate capture of global and democratic institutions, its institutional whitewashing initiatives, the public cost of security, the organization's tax-exempt status, unclear decision processes and membership criteria, a lack of financial transparency, and the environmental footprint of its annual meetings. As a reaction to criticism within Swiss society, the Swiss federal government decided in February 2021 to reduce its annual contributions to the WEF.

The cost for a company to send a delegate to the WEF was about US$70,000 in the early 2000s, rising to US$120,000 in 2022.
History
The WEF was founded in 1971 by Klaus Schwab, a business professor at the University of Geneva. First named the European Management Forum, it changed its name to the World Economic Forum in 1987 and sought to broaden its vision to include providing a platform for resolving international conflicts.

In February 1971, Schwab invited 450 executives from Western European firms to the first European Management Symposium, held in the Davos Congress Centre under the patronage of the European Commission and European industrial associations, where Schwab sought to introduce European firms to American management practices. He then founded the WEF as a nonprofit organization based in Geneva and drew European business leaders to Davos for the annual meetings each January.

The second European Management Forum, in 1972, was the first meeting at which a head of government, President Pierre Werner of Luxembourg, spoke at the forum.

Events in 1973, including the collapse of the Bretton Woods fixed-exchange rate mechanism and the Yom Kippur War, saw the annual meeting expand its focus from management to economic and social issues, and, for the first time, political leaders were invited to the annual meeting in January 1974.

Through the forum's first decade, it maintained a playful atmosphere, with many members skiing and participating in evening events. Appraising the 1981 event, one attendee noted that "the forum offers a delightful vacation on the expense account."

Political leaders soon began to use the annual meeting as a venue for promoting their interests. The Davos Declaration was signed in 1988 by Greece and Turkey, helping them turn back from the brink of war. In 1992, South African President F. W. de Klerk met with Nelson Mandela and Chief Mangosuthu Buthelezi at the annual meeting, their first joint appearance outside South Africa.
At the 1994 annual meeting, Israeli Foreign Minister Shimon Peres and PLO chairman Yasser Arafat reached a draft agreement on Gaza and Jericho.

In October 2004, the World Economic Forum gained attention through the resignation of its CEO and executive director José María Figueres over the undeclared receipt of more than US$900,000 in consultancy fees from the French telecommunications firm Alcatel. Transparency International highlighted this incident in its Global Corruption Report two years later, in 2006.

In January 2006, the WEF published an article in its Global Agenda magazine titled "Boycott Israel", which was distributed to all 2,340 participants of the annual meeting. Following the publication, Klaus Schwab described it as "an unacceptable failure in the editorial process".

In late 2015, an invitation was extended to a North Korean delegation for the 2016 WEF, "in view of positive signs coming out of the country", the WEF organizers noted; North Korea had not attended the WEF since 1998. The invitation was accepted. However, the WEF revoked the invitation on 13 January 2016, after the 6 January 2016 North Korean nuclear test, and the country's attendance was made subject to "existing and possible forthcoming sanctions". Despite protests by North Korea calling the decision by the WEF managing board a "sudden and irresponsible" move, the WEF committee maintained the exclusion because "under these circumstances there would be no opportunity for international dialogue".

In 2017, the WEF in Davos attracted considerable attention when, for the first time, a head of state from the People's Republic of China was present at the alpine resort. Against the backdrop of Brexit, an incoming protectionist US administration and significant pressures on free trade zones and trade agreements, Paramount leader Xi Jinping defended the global economic scheme, and portrayed China as a responsible nation and a leader for environmental causes.
He sharply rebuked the current populist movements that would introduce tariffs and hinder global commerce, warning that such protectionism could foster isolation and reduced economic opportunity.

In 2018, Indian Prime Minister Narendra Modi gave the keynote speech, becoming the first head of government from India to deliver the inaugural keynote for the annual plenary at Davos. Modi highlighted global warming (climate change), terrorism and protectionism as the three major global challenges, and expressed confidence that they can be tackled with collective effort.

In 2019, Brazilian President Jair Bolsonaro gave the keynote address at the plenary session of the conference. On his first international trip to Davos, he emphasized liberal economic policies despite his populist agenda, and attempted to reassure the world that Brazil is a protector of the rain forest while utilizing its resources for food production and export. He stated that "his government will seek to better integrate Brazil into the world by mainstreaming international best practices, such as those adopted and promoted by the OECD". Environmental concerns like extreme weather events and the failure of climate change mitigation and adaptation were among the top-ranking global risks expressed by WEF attendees. On June 13, 2019, the WEF and the United Nations signed a "Strategic Partnership Framework" in order to "jointly accelerate the implementation of the 2030 Agenda for Sustainable Development."

The 2021 World Economic Forum was due to be held from 17 to 20 August in Singapore.
However, on 17 May the Forum was cancelled, with a new meeting to take place in the first half of 2022 instead, with a final location and date to be determined later in 2021.

In late December 2021, the World Economic Forum said in a release that pandemic conditions had made it extremely difficult to stage a global in-person meeting the following month; transmissibility of the SARS-CoV-2 Omicron variant and its impact on travel and mobility had made deferral necessary, with the meeting in Davos eventually rescheduled for 22 to 26 May 2022.

Topics at the 2022 annual meeting included the Russian invasion of Ukraine, climate change, energy insecurity and inflation. Ukraine's president Volodymyr Zelenskyy gave a special address at the meeting, thanking the global community for its efforts but also calling for more support. The 2022 Forum was marked by the absence of a Russian delegation for the first time since 1991, which The Wall Street Journal described as signalling the "unraveling of globalization." The former Russia House was used to present Russia's war crimes.

The 2023 annual meeting of the World Economic Forum took place in Davos, Switzerland from 16–20 January under the motto "Cooperation in a fragmented world".
Organization
Headquartered in Cologny, the WEF also has offices in New York, Beijing and Tokyo. In January 2015, it was designated an NGO with "other international body" status by the Swiss Federal Government under the Swiss Host-State Act.

On 10 October 2016, the WEF announced the opening of its new Center for the Fourth Industrial Revolution in San Francisco. According to the WEF, the center will "serve as a platform for interaction, insight and impact on the scientific and technological changes that are changing the way we live, work and relate to one another".

The World Economic Forum claims to be impartial and not tied to any political, partisan, or national interests. It had observer status with the United Nations Economic and Social Council until 2012, when that status was revoked; it is under the supervision of the Swiss Federal Council. The foundation's highest governance body is the foundation board.

The managing board is chaired by the WEF's president, Børge Brende, and acts as the executive body of the World Economic Forum. Managing board members are Børge Brende, Julien Gattoni, Jeremy Jurgens, Adrian Monck, Sarita Nayyar, Olivier M. Schwab, Saadia Zahidi, and Alois Zwinggi.
Board of trustees
The WEF is chaired by founder and executive chairman Professor Klaus Schwab and is guided by a board of trustees that is made up of leaders from business, politics, academia and civil society. In 2010 the board was composed of: Josef Ackermann, Peter Brabeck-Letmathe, Kofi Annan, Victor L. L. Chu, Tony Blair, Michael Dell, Niall FitzGerald, Susan Hockfield, Orit Gadiesh, Christine Lagarde, Carlos Ghosn, Maurice Lévy, Rajat Gupta, Indra Nooyi, Peter D. Sutherland, Ivan Pictet, Heizō Takenaka, Ernesto Zedillo, Joseph P. Schoendorf and Queen Rania of Jordan. Members of the board of trustees (past or present) include: Mukesh Ambani, Marc Benioff, Peter Brabeck-Letmathe, Mark Carney, Laurence Fink, Chrystia Freeland, Orit Gadiesh, Fabiola Gianotti, Al Gore, Herman Gref, José Ángel Gurría, André Hoffmann, Ursula von der Leyen, Jack Ma, Yo-Yo Ma, Peter Maurer, Luis Alberto Moreno, Muriel Pénicaud, Queen Rania of Jordan, Leo Rafael Reif, David Rubenstein, Mark Schneider, Klaus Schwab, Tharman Shanmugaratnam, Jim Hagemann Snabe, Feike Sijbesma, Heizō Takenaka, Zhu Min.
Membership
The foundation is funded by its 1,000 member companies, typically global enterprises with more than five billion dollars in turnover (varying by industry and region). These enterprises rank among the top companies within their industry and/or country and play a leading role in shaping the future of their industry and/or region. Membership is stratified by the level of engagement with forum activities, with membership fees increasing as participation in meetings, projects, and initiatives rises. In 2011, an annual membership cost $52,000 for an individual member, $263,000 for an "Industry Partner" and $527,000 for a "Strategic Partner". The admission fee is $19,000 per person. In 2014, the WEF raised annual fees by 20 percent, bringing the cost for a "Strategic Partner" from CHF 500,000 ($523,000) to CHF 600,000 ($628,000).
Activities
Annual meeting in Davos
The flagship event of the World Economic Forum is the invitation-only annual meeting held at the end of January in Davos, Switzerland, bringing together chief executive officers from its 1,000 member companies, as well as selected politicians, representatives from academia, NGOs, religious leaders, and the media in an alpine environment. The winter discussions ostensibly focus on key issues of global concern (such as globalization, capital markets, wealth management, international conflicts, environmental problems and their possible solutions). The participants also take part in role-playing events, such as the Investment Heat Map. Informal winter meetings may have led to as many ideas and solutions as the official sessions. At the 2018 annual meeting, more than 3,000 participants from nearly 110 countries took part in over 400 sessions. Participation included more than 340 public figures, including more than 70 heads of state and government and 45 heads of international organizations; 230 media representatives and almost 40 cultural leaders were also represented. As many as 500 journalists from online, print, radio, and television take part, with access to all sessions in the official program, some of which are also webcast. Not all journalists are given access to all areas, however; this is reserved for white badge holders. "Davos runs an almost caste-like system of badges", according to BBC journalist Anthony Reuben. "A white badge means you're one of the delegates – you might be the chief executive of a company or the leader of a country (although that would also get you a little holographic sticker to add to your badge), or a senior journalist. An orange badge means you're just a run-of-the-mill working journalist." All plenary debates from the annual meeting are also available on YouTube, while photographs are available on Flickr.
Individual participants
Some 3,000 individual participants joined the 2020 annual meeting in Davos. Countries with the most attendees included the United States (674 participants), the United Kingdom (270), Switzerland (159), Germany (137) and India (133). Among the attendees were heads of state or government, cabinet ministers, ambassadors, and heads or senior officials of international organizations, including: Sanna Marin (prime minister of Finland), Ursula von der Leyen (president of the European Commission), Christine Lagarde (ECB president), Greta Thunberg (climate activist), Ren Zhengfei (Huawei Technologies founder), Kristalina Georgieva (managing director of the IMF), Deepika Padukone (Bollywood actress), George Soros (investor) and Donald Trump (president of the United States). An analysis by The Economist from 2014 found that the vast majority of participants were male and more than 50 years old. Careers in business accounted for most of the participants' backgrounds (1,595 conference attendees), with the remaining seats shared between government (364), NGOs (246) and press (234). Academia, which had been the basis of the first annual conference in 1971, had been marginalised to the smallest participant group (183 attendees).
Corporate participants
Alongside individual participants, the World Economic Forum maintains a dense network of corporate partners that can apply for different partnership ranks within the forum. For 2019, Bloomberg identified a total of 436 listed corporates that participated in the annual meeting, while measuring a stock underperformance by the Davos participants of around −10% versus the S&P 500 during the same year. Among the drivers are an overrepresentation of financial companies and an underrepresentation of fast-growing health care and information technology businesses at the conference. The Economist had found similar results in an earlier study, showing an underperformance of Davos participants against both the MSCI World Index and the S&P 500 between 2009 and 2014.
Summer annual meeting
In 2007, the foundation established the Annual Meeting of the New Champions (also called Summer Davos), held annually in China, alternating between Dalian and Tianjin, bringing together 1,500 participants from what the foundation calls Global Growth Companies, primarily from rapidly growing emerging countries such as China, India, Russia, Mexico, and Brazil, but also including quickly growing companies from developed countries. The meeting also engages with the next generation of global leaders from fast-growing regions and competitive cities, as well as technology pioneers from around the globe. The Premier of China has delivered a plenary address at each annual meeting.
Regional meetings
Every year, regional meetings take place, enabling close contact among corporate business leaders, local government leaders, and NGOs. Meetings are held in Africa, East Asia, Latin America, and the Middle East. The mix of hosting countries varies from year to year, but China and India have hosted consistently throughout the decade since 2000.
Young Global Leaders
The group of Young Global Leaders consists of 800 people chosen by the WEF organizers as being representative of contemporary leadership. After five years of participation they are considered alumni. The program drew controversy when Schwab, the founder, admitted to "penetrat[ing]" governments with Young Global Leaders. He added that as of 2017 "more than half" of Justin Trudeau's Cabinet had been members of the program.
Social entrepreneurs
Since 2000, the WEF has promoted models developed in close collaboration with the Schwab Foundation for Social Entrepreneurship, highlighting social entrepreneurship as a key element in advancing societies and addressing social problems. Selected social entrepreneurs are invited to participate in the foundation's regional meetings and annual meetings, where they may meet chief executives and senior government officials. At the 2003 annual meeting, for example, Jeroo Billimoria met Roberto Blois, deputy secretary-general of the International Telecommunication Union, an encounter that produced a key partnership for her organization, Child Helpline International.
Research reports
The foundation also acts as a think tank, publishing a wide range of reports. In particular, "Strategic Insight Teams" focus on producing reports of relevance in the fields of competitiveness, global risks, and scenario thinking.
The "Competitiveness Team" produces a range of annual economic reports (first published in brackets): the Global Competitiveness Report (1979) measured competitiveness of countries and economies; The Global Information Technology Report (2001) assessed their competitiveness based on their IT readiness; the Global Gender Gap Report examined critical areas of inequality between men and women; the Global Risks Report (2006) assessed key global risks; the Global Travel and Tourism Report (2007) measured travel and tourism competitiveness; the Financial Development Report (2008) aimed to provide a comprehensive means for countries to establish benchmarks for various aspects of their financial systems and establish priorities for improvement; and the Global Enabling Trade Report (2008) presented a cross-country analysis of the large number of measures facilitating trade among nations.The "Risk Response Network" produces a yearly report assessing risks which are deemed to be within the scope of these teams, have cross-industry relevance, are uncertain, have the potential to cause upwards of US$10 billion in economic damage, have the potential to cause major human suffering, and which require a multi-stakeholder approach for mitigation.In 2020, the forum published a report entitled Nature Risk Rising: Why the Crisis Engulfing Nature Matters for Business and the Economy. In this report the forum estimated that approximately half of global GDP is highly or moderately dependent on nature (the same as IPBES's 2019 assessment report). The report also found that 1 dollar spent on nature restoration yields 9 dollars in economic benefits.
Initiatives
Health
The Global Health Initiative was launched by Kofi Annan at the annual meeting in 2002. The GHI's mission was to engage businesses in public-private partnerships to tackle HIV/AIDS, tuberculosis, malaria, and health systems. The Global Education Initiative (GEI), launched during the annual meeting in 2003, brought together international IT companies and governments in Jordan, Egypt, and India; it has resulted in new personal computer hardware being available in their classrooms and more local teachers trained in e-learning. The GEI model, which is scalable and sustainable, is now being used as an educational blueprint in other countries, including Rwanda.
On 19 January 2017, the Coalition for Epidemic Preparedness Innovations (CEPI), a global initiative to fight epidemics, was launched at the WEF in Davos. The internationally funded initiative aims to secure vaccine supplies for global emergencies and pandemics, and to research new vaccines for tropical diseases, which pose a growing threat. The project is funded by private and governmental donors, with an initial investment of US$460 million from the governments of Germany, Japan and Norway, plus the Bill & Melinda Gates Foundation and the Wellcome Trust.
2020 meeting
Between 21 and 24 January 2020, in the early stages of the COVID-19 outbreak, CEPI met with leaders from Moderna at the Davos gathering to establish plans for a COVID-19 vaccine, at a time when the global case count stood at 274 and the virus's death toll at 16. The WHO declared a global health emergency six days later.
Society
The Water Initiative brings together diverse stakeholders such as Alcan Inc., the Swiss Agency for Development and Cooperation, USAID India, UNDP India, Confederation of Indian Industry (CII), Government of Rajasthan, and the NEPAD Business Foundation to develop public-private partnerships on water management in South Africa and India.
In an effort to combat corruption, the Partnering Against Corruption Initiative (PACI) was launched by CEOs from the engineering and construction, energy and metals, and mining industries at the annual meeting in Davos during January 2004. PACI is a platform for peer exchange on practical experience and dilemma situations. Approximately 140 companies have joined the initiative.
Environment
At the beginning of the 21st century, the forum began to deal increasingly with environmental issues. The Davos Manifesto 2020 states that a company, among other things:
"acts as a steward of the environmental and material universe for future generations. It consciously protects our biosphere and champions a circular, shared and regenerative economy."
"responsibly manages near-term, medium-term and long-term value creation in pursuit of sustainable shareholder returns that do not sacrifice the future for the present."
"is more than an economic unit generating wealth. It fulfils human and societal aspirations as part of the broader social system. Performance must be measured not only on the return to shareholders, but also on how it achieves its environmental, social and good governance objectives."The Environmental Initiative covers climate change and water issues. Under the Gleneagles Dialogue on Climate Change, the U.K. government asked the World Economic Forum at the G8 Summit in Gleneagles in 2005 to facilitate a dialogue with the business community to develop recommendations for reducing greenhouse gas emissions. This set of recommendations, endorsed by a global group of CEOs, was presented to leaders ahead of the G8 Summit in Toyako, Hokkaido, Japan held in July 2008.In 2016 WEF published an article in which it is said, that in some cases reducing consumption can increase well-being. In the article is mentioned that in Costa Rica the GDP is 4 times smaller than in many countries in Western Europe and North America, but people live longer and better. An American study shows that those whose income is higher than $75,000, do not necessarily have an increase in well-being. To better measure well-being, the New Economics Foundation's launched the Happy Planet Index.In January 2017, WEF launched the Platform for Accelerating the Circular Economy (PACE), which is a global public private partnership seeking to scale circular economy innovations. PACE is co-chaired by Frans van Houten (CEO of Philips), Naoko Ishii (CEO of the Global Environment Facility, and the head of United Nations Environment Programme (UNEP). 
The Ellen MacArthur Foundation, the International Resource Panel, Circle Economy, Chatham House, the Dutch National Institute for Public Health and the Environment, the United Nations Environment Programme and Accenture serve as knowledge partners, and the program is supported by the UK Department for Environment, Food and Rural Affairs, DSM, FrieslandCampina, Global Affairs Canada, the Dutch Ministry of Infrastructure and Water Management, Rabobank, Shell, SITRA, and Unilever. The Forum emphasized its 'Environment and Natural Resource Security Initiative' for the 2017 meeting to achieve inclusive economic growth and sustainable practices for global industries. With increasing limitations on world trade through national interests and trade barriers, the WEF has moved towards a more sensitive and socially minded approach for global businesses, with a focus on the reduction of carbon emissions in China and other large industrial nations. Also in 2017, the WEF launched the Fourth Industrial Revolution (4IR) for the Earth Initiative, a collaboration among the WEF, Stanford University and PwC, funded through the Mava Foundation. In 2018, the WEF announced that one project within this initiative would be the Earth BioGenome Project, which aims to sequence the genomes of every organism on Earth. The World Economic Forum is also working to eliminate plastic pollution, stating that by 2050 plastic will consume 15% of the global carbon budget and will outweigh fish in the world's oceans; one proposed method is a circular economy. The theme of the 2020 World Economic Forum annual meeting was 'Stakeholders for a Cohesive and Sustainable World'. Climate change and sustainability were central themes of discussion. Many participants argued that GDP fails to represent well-being correctly and that fossil fuel subsidies should be stopped, and many said that a better capitalism is needed.
Al Gore summarized the ideas of the conference as: "The version of capitalism we have today in our world must be reformed". At this meeting, the World Economic Forum:
Launched the Trillion Tree Campaign, an initiative aiming to "grow, restore and conserve 1 trillion trees over the next 10 years around the world – in a bid to restore biodiversity and help fight climate change". Donald Trump joined the initiative. The forum stated: "Nature-based solutions – locking-up carbon in the world's forests, grasslands and wetlands – can provide up to one-third of the emissions reductions required by 2030 to meet the Paris Agreement targets," adding that the rest should come from the heavy industry, finance and transportation sectors. One of the targets is to unify existing reforestation projects.
Discussed the issue of climate change and called for expanding renewable energy and energy efficiency, changing patterns of consumption, and removing carbon from the atmosphere. The forum claimed that the climate crisis will become a climate apocalypse if temperatures rise by 2 degrees, and called for fulfilling the commitments of the Paris Agreement. Jennifer Morgan, the executive director of Greenpeace, said at the start of the forum that fossil fuels still receive three times more money than climate solutions. At the 2021 annual meeting, the UNFCCC launched the 'UN Race-to-Zero Emissions Breakthroughs'. The aim of the campaign is to transform 20 sectors of the economy in order to achieve zero greenhouse gas emissions. At least 20% of each sector should take specific measures, and 10 sectors should be transformed before COP 26 in Glasgow. According to the organizers, 20% is a tipping point, after which the whole sector begins to change irreversibly.
Coronavirus and green recovery
In April 2020, the forum published an article postulating that the COVID-19 pandemic is linked to the destruction of nature. The number of emerging diseases is rising, and this rise is linked to deforestation and species loss. The article gives multiple examples of the degradation of ecological systems caused by humans, and also says that half of global GDP is moderately or largely dependent on nature. It concludes that the recovery from the pandemic should be linked to nature recovery. The forum proposed a plan for a green recovery centred on advancing the circular economy. Among the methods mentioned are green building, sustainable transport, organic farming, urban open space, renewable energy and electric vehicles.
Global Future Councils
The Network of Global Future Councils meets annually in the United Arab Emirates and virtually several times a year. Its second annual meeting was held in Dubai in November 2017, when there were 35 distinct councils, each focused on a specific issue, industry or technology. In 2017, members met with representatives and partners of the WEF's new Center for the Fourth Industrial Revolution. Ideas and proposals are taken forward for further discussion at the World Economic Forum Annual Meeting in Davos-Klosters in January.
Global Shapers Community
The Global Shapers Community (GSC), an initiative of the World Economic Forum, selects young leaders below 30 years old based on their achievements and potential to be agents of change in the world. Global Shapers develop and lead their city-based hubs to implement social justice projects that advance the mission of the World Economic Forum. The GSC has over 10,000 members in more than 500 hubs across 154 countries. Some critics see the WEF's increasing focus on activist areas such as environmental protection and social entrepreneurship as a strategy to disguise the true plutocratic goals of the organisation.
Project Divisions
Projects are divided into 17 areas of impact: Arts and Culture, Cities & Urbanization, Civic Participation, Climate Change, Covid-19 Response, Education, Entrepreneurship, Fourth Industrial Revolution, Gender Equality, Global Health, Migration, Shaping the Future, Sustainable Development, Values, Water, #WeSeeEqual, and Workforce and Employment. In Sustainable Development, the community has launched the Shaping Fashion Initiative, involving the hubs of Dusseldorf, Corrientes, Lahore, Davao, Milan, Lyon, Quito, Taipei, and others. In Entrepreneurship, Bucharest has hosted the Social Impact Award since 2009; it runs education and incubation programs in more than 20 countries in Europe, Africa, and Asia and has reached more than 1,000 young social entrepreneurs aged 14–30. In North America, New York has hosted the OneRise startup accelerator since 2021.
Future of work
The Future of Work task force was chaired by Linda Yaccarino. Regarding the future of work, the 2020 WEF set the goal of providing better jobs, access to higher-quality education, and skills to 1 billion people by 2030.
The Great Reset
In May 2020, the WEF and the Prince of Wales's Sustainable Markets Initiative launched "The Great Reset" project, a five-point plan to enhance sustainable economic growth following the global recession caused by the COVID-19 pandemic lockdowns. "The Great Reset" was to be the theme of the WEF's annual meeting in August 2021. According to forum founder Schwab, the intention of the project is to reconsider the meaning of capitalism and capital. While not abandoning capitalism, he proposes to change, and possibly move on from, some aspects of it, including neoliberalism and free-market fundamentalism; the role of corporations, taxation and more should be reconsidered, while international cooperation, trade and the Fourth Industrial Revolution should be defended. The forum defines the system it wants to create as "Stakeholder Capitalism" and supports trade unions.
Criticism
Physical protests
During the late 1990s, the WEF, along with the G7, World Bank, World Trade Organization, and International Monetary Fund, came under heavy criticism from anti-globalization activists who claimed that capitalism and globalization were increasing poverty and destroying the environment. In 2000, about 10,000 demonstrators disrupted a regional WEF meeting in Melbourne by obstructing the path of 200 delegates. Small demonstrations are held in Davos in most, but not all, years, organised by the local Green Party (see Anti-WEF protests in Switzerland, January 2003), to protest against what have been called the meetings of "fat cats in the snow", a tongue-in-cheek term used by rock singer Bono. After 2014, the physical protest movement against the World Economic Forum largely died down, and Swiss police noted a significant decline in attending protesters, at most 20 during the 2016 meeting. While protesters are still more numerous in large Swiss cities, the protest movement itself has undergone significant change. Around 150 Tibetans and Uighurs protested in Geneva, and 400 Tibetans in Bern, against the visit of China's paramount leader Xi Jinping for the 2017 meeting, with subsequent confrontations and arrests.
Growing gaps in wealth
A number of NGOs have used the World Economic Forum to highlight growing inequalities and wealth gaps, which they argue are not addressed extensively enough, or are even reinforced, by institutions like the WEF. Winnie Byanyima, the executive director of the anti-poverty confederation Oxfam International, co-chaired the 2015 meeting, where she presented a critical report on global wealth distribution based on statistical research by the Credit Suisse Research Institute. According to this study, the richest 1% of people in the world own 48% of the world's wealth. At the 2019 meeting, she presented another report claiming that the gap between rich and poor had only increased. The report, "Public Good or Private Wealth", stated that 2,200 billionaires worldwide saw their wealth grow by 12% while the poorest half saw its wealth fall by 11%. Oxfam calls for a global tax overhaul to increase and harmonise global tax rates for corporations and wealthy individuals. "You'll own nothing and be happy" is a phrase adapted from an essay written by Ida Auken in 2016 for the WEF, pondering a future in which urban residents would rely on shared services for many expensive items such as appliances and vehicles. Shortly after its publication, a commentator for European Digital Rights criticized Auken's vision of centralized property ownership as a "benevolent dictatorship". During the COVID-19 pandemic, the phrase went viral, eliciting strongly negative reactions from mostly conservative but also some left-wing and unaffiliated commentators. Responding to viral social media posts based on the phrase, the WEF denied that it had a goal related to limiting ownership of private property. Rutger Bregman, a Dutch historian invited to a 2018 WEF panel on inequality, went viral when he suggested that the best way for the attendees to attack inequality was to stop avoiding taxes.
Bregman described his motivation, saying "it feels like I'm at a firefighters' conference and no one's allowed to speak about water".
Formation of a detached elite
The formation of a detached elite, often labelled with the neologism "Davos Man", refers to a global group whose members view themselves as completely "international". The term refers to people who "have little need for national loyalty, view national boundaries as obstacles, and see national governments as residues from the past whose only useful function is to facilitate the elite's global operations", according to political scientist Samuel P. Huntington, who is credited with coining the neologism. In his 2004 article "Dead Souls: The Denationalization of the American Elite", Huntington argues that this international perspective is a minority elitist position not shared by the nationalist majority of the people. The Transnational Institute describes the World Economic Forum's main purpose as being "to function as a socializing institution for the emerging global elite, globalization's 'Mafiocracy' of bankers, industrialists, oligarchs, technocrats and politicians. They promote common ideas, and serve common interests: their own." In 2019, the Manager Magazin journalist Henrik Müller argued that the "Davos Man" had already decayed into different groups and camps. He sees three central drivers for this development:
Ideologically: the liberal western model is no longer considered a universal role model that other countries strive for (with China's digital totalitarianism or the traditional absolutism in the Persian Gulf as counter-proposals, all of which are represented by government members in Davos).
Socially: societies increasingly disintegrate into different groups, each of which evokes its own identity (e.g. embodied through the Brexit vote or congressional blockades in the USA).
Economically: the measured economic reality largely contradicts established ideas of how the economy should actually work (for example, despite economic upswings, wages and prices barely rise).
Public cost of security
Critics argue that the WEF, despite having reserves of several hundred million Swiss francs and paying its executives salaries of around 1 million Swiss francs per year, pays no federal tax and moreover shifts part of its costs onto the public. Following massive criticism from politicians and Swiss civil society, the Swiss federal government decided in February 2021 to reduce its annual contributions to the WEF. As of 2018, the police and military expenditures carried by the federal government stood at 39 million Swiss francs. The Aargauer Zeitung argued in January 2020 that the additional cost borne by the canton of Graubünden stood at CHF 9 million per year. The Swiss Green Party summarised its criticism in the Swiss National Council: holding the World Economic Forum has cost Swiss taxpayers hundreds of millions of Swiss francs over the past decades, and in their view it is questionable to what extent the Swiss population or global community benefit from these expenditures.
Gender debate
Women have been broadly underrepresented at the WEF, according to some critics. The female participation rate at the WEF increased from 9% to 15% between 2001 and 2005. In 2016, 18% of WEF attendees were female; this number increased to 21% in 2017 and 24% in 2020. Several women have since shared their personal impressions of the Davos meetings in media articles, highlighting that the issues were more profound than "a quota at Davos for female leaders or a session on diversity and inclusion". In this context, the World Economic Forum has filed legal complaints against at least three investigative articles by reporters Katie Gibbons and Billy Kenber that were published by the British newspaper The Times in March 2020.
Undemocratic decision making
According to the European Parliament's think tank, critics see the WEF as an instrument for political and business leaders to "take decisions without having to account to their electorate or shareholders". Since 2009, the WEF has been working on a project called the Global Redesign Initiative (GRI), which proposes a transition away from intergovernmental decision-making towards a system of multi-stakeholder governance. According to the Transnational Institute (TNI), the Forum is hence planning to replace a recognised democratic model with one in which a self-selected group of "stakeholders" make decisions on behalf of the people. Some critics have seen the WEF's attention to goals like environmental protection and social entrepreneurship as mere window dressing to disguise its true plutocratic nature and goals. In a Guardian opinion piece, Cas Mudde said that such plutocrats should not be the group that controls political agendas and decides which issues to focus on and how to support them. A writer in the German magazine Cicero saw the situation as academic, cultural, media and economic elites grasping for social power while disregarding political decision processes: a materially well-endowed milieu would in this context try to "cement its dominance of opinion and sedate ordinary people with maternalistic-paternalistic social benefits", so that the elites are not disturbed by the common people as they steer. The French newspaper Les Echos furthermore concludes that Davos "represents the exact values people rejected at the ballot box".
Lack of financial transparency
In 2017, the former Frankfurter Allgemeine Zeitung journalist Jürgen Dunsch criticized the WEF's financial reports as not very transparent, since neither income nor expenditures were broken down. In addition, he noted that the foundation capital was not quantified, while the apparently not insignificant profits were reinvested. More recent annual reports published by the WEF include a more detailed breakdown of its financials and indicate revenues of CHF 349 million for the year 2019, with reserves of CHF 310 million and a foundation capital of CHF 34 million. No further details are provided on the asset classes or individual names to which the WEF allocates its financial assets of CHF 261 million. The German newspaper Süddeutsche Zeitung criticised in this context that the WEF had turned into a "money printing machine", run like a family business and providing a comfortable living for its key personnel. The foundation's founder Klaus Schwab draws a salary of around one million Swiss francs per year.
Unclear selection criteria
In a request to the Swiss National Council, the Swiss Green Party criticised the fact that invitations to the annual meeting and programmes of the World Economic Forum are issued according to unclear criteria. They highlighted that "despots" such as Saif al-Islam Gaddafi, son of the Libyan dictator Muammar Gaddafi, had been invited to the WEF and even awarded membership in the club of "Young Global Leaders". Even after the beginning of the Arab Spring in December 2010 and the related violent uprisings against despotic regimes, the WEF continued to invite Gaddafi to its annual meeting.
Environmental footprint of annual meetings
Critics argue that the annual meeting of the World Economic Forum is counterproductive to combating pressing problems of humanity such as the climate crisis. Even in 2020, participants travelled to the WEF annual meeting in Davos on around 1,300 private jets, and in the critics' view the total emissions from transport and accommodation were enormous.
Corporate capture of global and democratic institutions
The World Economic Forum's "Global Redesign" report suggests creating a "public-private" United Nations (UN) in which selected agencies would operate and steer global agendas under shared governance systems. It argues that a globalised world is probably best managed by a coalition of multinational corporations, governments and civil society organizations (CSOs), a view it expresses through initiatives like the "Great Reset" and the "Global Redesign".

In September 2019, more than 400 civil society organizations and 40 international networks heavily criticised a partnership agreement between the WEF and the United Nations and called on the UN Secretary-General to end it. They see the agreement as a "disturbing corporate capture of the UN, which moved the world dangerously towards a privatised global governance". The Dutch think tank Transnational Institute concludes that we are increasingly entering a world in which gatherings such as Davos amount to "a silent global coup d'état" to capture governance.
Non-accreditation of critical media outlets
In 2019, the Swiss newspaper WOZ was refused accreditation for the annual meeting and subsequently accused the World Economic Forum of favouring specific media outlets. The newspaper highlighted that the WEF stated in its refusal that it prefers media outlets it works with throughout the year. WOZ deputy head Yves Wegelin called this a strange idea of journalism, because in "journalism you don't necessarily have to work with large corporations, but rather critique them".
Institutional initiatives
In addition to economic policy, the WEF's agenda has in recent years increasingly focused on positively connoted activist topics such as environmental protection and social entrepreneurship, which critics see as a strategy to disguise the organisation's true plutocratic goals.

In a December 2020 article for The Intercept, Naomi Klein described the WEF's initiatives like the "Great Reset" as simply a "coronavirus-themed rebranding" of things the WEF was already doing, and as an attempt by the rich to make themselves look good. In her opinion, "the Great Reset is merely the latest edition of this gilded tradition, barely distinguishable from earlier Davos Big Ideas". Similarly, in his review of COVID-19: The Great Reset, ethicist Steven Umbrello makes parallel critiques of the agenda, saying that the WEF "whitewash[es] a seemingly optimistic future post-Great Reset with buzz words like equity and sustainability" while functionally jeopardizing those goals.

A study published in the Journal of Consumer Research investigated the sociological impact of the WEF. It concluded that the WEF does not solve issues such as poverty, global warming, chronic illness, or debt, but has simply shifted the burden for solving these problems from governments and business to "responsible consumer subjects: the green consumer, the health-conscious consumer, and the financially literate consumer."
Appropriation of global crises
In December 2021, Cardinal Gerhard Ludwig Müller, former Prefect of the Congregation for the Doctrine of the Faith (CDF), said in a controversial interview that people like WEF founder Schwab sit "on the throne of their wealth", untouched by the everyday difficulties and suffering people face, for example due to the COVID-19 pandemic. On the contrary, such elites would see crises as an opportunity to push through their agendas. He particularly criticised the control such people exercise over others and their embrace of areas such as transhumanism. The German Central Council of Jews condemned this criticism, which also invoked Jewish financial investors, as antisemitic.

The WEF has also been criticised as "hypocritical" towards Palestinian human rights after it rejected a petition from its own constituents to condemn Israel's aggression against Palestinians. The WEF cited the need to remain "impartial" on the issue; the claims of hypocrisy arose after it voluntarily condemned Russia's aggression against Ukraine months later.
Controversies
Davos municipality
In June 2021, WEF founder Klaus Schwab sharply criticised what he characterised as "profiteering", "complacency" and a "lack of commitment" by the municipality of Davos in relation to the annual meeting. He mentioned that the preparation of the COVID-related meeting in Singapore in 2021/2022 had created an alternative to the Swiss host, and put the chance that the annual meeting would stay in Davos at between 40 and 70 per cent.
Usage of "Davos"
As many other international conferences have been nicknamed "Davos", such as the "Davos of the Desert" event organised by Saudi Arabia's Future Investment Initiative Institute, the World Economic Forum has objected to the use of "Davos" for any event not organised by it. This statement was issued on 22 October 2018, a day before the opening of the 2018 Future Investment Initiative (nicknamed "Davos in the desert") organised by the Public Investment Fund of Saudi Arabia.
Alternatives
Open Forum Davos
Since the annual meeting in January 2003, the Open Forum Davos, co-organized by the Federation of Swiss Protestant Churches, has been held concurrently with the Davos forum, opening up the debate about globalization to the general public. The Open Forum is held in the local high school every year, featuring top politicians and business leaders, and is open to all members of the public free of charge.
Public Eye Awards
The Public Eye Awards have been held every year since 2000 as a counter-event to the annual meeting of the World Economic Forum (WEF) in Davos. The awards are a "public competition of the worst corporations in the world". In 2011, more than 50,000 people voted for companies that had acted irresponsibly; at a ceremony at a Davos hotel, the 2011 "winners" were named as Neste Oil of Finland, a maker of diesel from Indonesian palm oil, and the South African mining company AngloGold Ashanti. According to a Schweiz aktuell broadcast on 16 January 2015, a public presence during the 2015 WEF could not be guaranteed because of the massively increased security in Davos, and the Public Eye Award was presented in Davos for the last time that year ("Public Eye says Goodbye to Davos").
climate of the arctic | The climate of the Arctic is characterized by long, cold winters and short, cool summers. There is a large amount of variability in climate across the Arctic, but all regions experience extremes of solar radiation in both summer and winter. Some parts of the Arctic are covered by ice (sea ice, glacial ice, or snow) year-round, and nearly all parts of the Arctic experience long periods with some form of ice on the surface.
The Arctic consists of ocean that is largely surrounded by land. As such, the climate of much of the Arctic is moderated by the ocean water, which can never have a temperature below −2 °C (28 °F). In winter, this relatively warm water, even though covered by the polar ice pack, keeps the North Pole from being the coldest place in the Northern Hemisphere, and it is also part of the reason that Antarctica is so much colder than the Arctic. In summer, the presence of the nearby water keeps coastal areas from warming as much as they might otherwise.
Overview of the Arctic
There are different definitions of the Arctic. The most widely used definition, the area north of the Arctic Circle, where the sun does not set on the June solstice, is used in astronomical and some geographical contexts. However, the two most widely used definitions in the context of climate are the area north of the northern tree line, and the area in which the average summer temperature is less than 10 °C (50 °F); these are nearly coincident over most land areas (NSIDC).
This definition of the Arctic can be further divided into four different regions:
The Arctic Basin includes the Arctic Ocean within the average minimum extent of sea ice.
The Canadian Arctic Archipelago includes the large and small islands, except Greenland, on the Canadian side of the Arctic, and the waters between them.
The entire island of Greenland, although its ice sheet and ice-free coastal regions have different climatic conditions.
The Arctic waters that are not sea ice in late summer, including Hudson Bay, Baffin Bay, Ungava Bay, the Davis, Denmark, Hudson and Bering Straits, and the Labrador, Norwegian (ice-free all year), Greenland, Baltic, Barents (southern part ice-free all year), Kara, Laptev, Chukchi, Okhotsk, and sometimes Beaufort and Bering Seas.

Moving inland from the coast over mainland North America and Eurasia, the moderating influence of the Arctic Ocean quickly diminishes, and the climate transitions from Arctic to subarctic, generally in less than 500 kilometres (310 miles), and often over a much shorter distance.
History of Arctic climate observation
Due to the lack of major population centres in the Arctic, weather and climate observations from the region tend to be widely spaced and of short duration compared to the midlatitudes and tropics. Though the Vikings explored parts of the Arctic over a millennium ago, and small numbers of people have been living along the Arctic coast for much longer, scientific knowledge about the region was slow to develop; the large islands of Severnaya Zemlya, just north of the Taymyr Peninsula on the Russian mainland, were not discovered until 1913, and not mapped until the early 1930s.
Early European exploration
Much of the historical exploration in the Arctic was motivated by the search for the Northwest and Northeast Passages. Sixteenth- and seventeenth-century expeditions were largely driven by traders in search of these shortcuts between the Atlantic and the Pacific. These forays into the Arctic did not venture far from the North American and Eurasian coasts, and were unsuccessful at finding a navigable route through either passage.
National and commercial expeditions continued to expand the detail on maps of the Arctic through the eighteenth century, but largely neglected other scientific observations. Expeditions from the 1760s to the middle of the 19th century were also led astray by attempts to sail north because of the belief by many at the time that the ocean surrounding the North Pole was ice-free. These early explorations did provide a sense of the sea ice conditions in the Arctic and occasionally some other climate-related information.
By the early 19th century some expeditions were making a point of collecting more detailed meteorological, oceanographic, and geomagnetic observations, but they remained sporadic. Beginning in the 1850s regular meteorological observations became more common in many countries, and the British navy implemented a system of detailed observation. As a result, expeditions from the second half of the nineteenth century began to provide a picture of the Arctic climate.
Early European observing efforts
The first major effort by Europeans to study the meteorology of the Arctic was the First International Polar Year (IPY) in 1882 to 1883. Eleven nations provided support to establish twelve observing stations around the Arctic. The observations were not as widespread or long-lasting as would be needed to describe the climate in detail, but they provided the first cohesive look at the Arctic weather.
In 1884 wreckage from the Jeannette, a ship abandoned three years earlier off Russia's eastern Arctic coast, was found on the coast of Greenland. This caused Fridtjof Nansen to realize that the sea ice was moving from the Siberian side of the Arctic to the Atlantic side. He decided to use this motion by freezing a specially designed ship, the Fram, into the sea ice and allowing it to be carried across the ocean. Meteorological observations were collected from the ship during its crossing from September 1893 to August 1896. This expedition also provided valuable insight into the circulation of the ice surface of the Arctic Ocean.
In the early 1930s the first significant meteorological studies were carried out on the interior of the Greenland ice sheet. These provided knowledge of perhaps the most extreme climate of the Arctic, and also the first suggestion that the ice sheet lies in a depression of the bedrock below (now known to be caused by the weight of the ice itself).
Fifty years after the first IPY, in 1932 to 1933, a second IPY was organized. This one was larger than the first, with 94 meteorological stations, but World War II delayed or prevented the publication of much of the data collected during it. Another significant moment in Arctic observing before World War II occurred in 1937 when the USSR established the first of over 30 North-Pole drifting stations. This station, like the later ones, was established on a thick ice floe and drifted for almost a year, its crew observing the atmosphere and ocean along the way.
Cold-War era observations
Following World War II, the Arctic, lying between the USSR and North America, became a front line of the Cold War, inadvertently and significantly furthering our understanding of its climate. Between 1947 and 1957, the United States and Canadian governments established a chain of stations along the Arctic coast known as the Distant Early Warning Line (DEWLINE) to provide warning of a Soviet nuclear attack. Many of these stations also collected meteorological data.
The Soviet Union was also interested in the Arctic and established a significant presence there by continuing the North-Pole drifting stations. This program operated continuously, with 30 stations in the Arctic from 1950 to 1991. These stations collected data that are valuable to this day for understanding the climate of the Arctic Basin. This map shows the location of Arctic research facilities during the mid-1970s and the tracks of drifting stations between 1958 and 1975.
Another benefit from the Cold War was the acquisition of observations from United States and Soviet naval voyages into the Arctic. In 1958 an American nuclear submarine, the Nautilus was the first ship to reach the North Pole. In the decades that followed submarines regularly roamed under the Arctic sea ice, collecting sonar observations of the ice thickness and extent as they went. These data became available after the Cold War, and have provided evidence of thinning of the Arctic sea ice. The Soviet navy also operated in the Arctic, including a sailing of the nuclear-powered ice breaker Arktika to the North Pole in 1977, the first time a surface ship reached the pole.
Scientific expeditions to the Arctic also became more common during the Cold-War decades, sometimes benefiting logistically or financially from the military interest. In 1966 the first deep ice core in Greenland was drilled at Camp Century, providing a glimpse of climate through the last ice age. This record was lengthened in the early 1990s when two deeper cores were taken from near the center of the Greenland Ice Sheet. Beginning in 1979 the Arctic Ocean Buoy Program (the International Arctic Buoy Program since 1991) has been collecting meteorological and ice-drift data across the Arctic Ocean with a network of 20 to 30 buoys.
Satellite era
The end of the Soviet Union in 1991 led to a dramatic decrease in regular observations from the Arctic. The Russian government ended the system of drifting North Pole stations and closed many of the surface stations in the Russian Arctic. Likewise, the United States and Canadian governments cut back on spending for Arctic observing as the perceived need for the DEWLINE declined. As a result, the most complete collection of surface observations from the Arctic is for the period 1960 to 1990.

The extensive array of satellite-based remote-sensing instruments now in orbit has helped to replace some of the observations that were lost after the Cold War, and has provided coverage that was impossible without them. Routine satellite observations of the Arctic began in the early 1970s, expanding and improving ever since. A result of these observations is a thorough record of sea-ice extent in the Arctic since 1979; the decreasing extent seen in this record (NASA, NSIDC), and its possible link to anthropogenic global warming, has helped increase interest in the Arctic in recent years. Today's satellite instruments provide routine views not only of cloud, snow, and sea-ice conditions in the Arctic, but also of other, perhaps less-expected, variables, including surface and atmospheric temperatures, atmospheric moisture content, winds, and ozone concentration.
Civilian scientific research on the ground has certainly continued in the Arctic, and it is getting a boost from 2007 to 2009 as nations around the world increase spending on polar research as part of the third International Polar Year. During these two years thousands of scientists from over 60 nations will co-operate to carry out over 200 projects to learn about physical, biological, and social aspects of the Arctic and Antarctic (IPY).
Modern researchers in the Arctic also benefit from computer models. These pieces of software are sometimes relatively simple, but often become highly complex as scientists try to include more and more elements of the environment to make the results more realistic. The models, though imperfect, often provide valuable insight into climate-related questions that cannot be tested in the real world. They are also used to try to predict future climate and the effect that changes to the atmosphere caused by humans may have on the Arctic and beyond. Another interesting use of models has been to use them, along with historical data, to produce a best estimate of the weather conditions over the entire globe during the last 50 years, filling in regions where no observations were made (ECMWF). These reanalysis datasets help compensate for the lack of observations over the Arctic.
Solar radiation
Almost all of the energy available to the Earth's surface and atmosphere comes from the sun in the form of solar radiation (light from the sun, including invisible ultraviolet and infrared light). Variations in the amount of solar radiation reaching different parts of the Earth are a principal driver of global and regional climate. Latitude is the most important factor determining the yearly average amount of solar radiation reaching the top of the atmosphere; the incident solar radiation decreases smoothly from the Equator to the poles. Therefore, temperature tends to decrease with increasing latitude.
In addition, the length of each day, which is determined by the season, has a significant impact on the climate. The 24-hour days found near the poles in summer result in a large daily-average solar flux reaching the top of the atmosphere in these regions. On the June solstice, 36% more solar radiation reaches the top of the atmosphere over the course of the day at the North Pole than at the Equator. However, in the six months from the September equinox to the March equinox, the North Pole receives no sunlight.
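The solstice comparison can be checked with the standard daily-mean insolation formula. The sketch below is illustrative only: it assumes a nominal solar constant of 1361 W/m² and ignores the small variation in Earth-Sun distance.

```python
import math

S0 = 1361.0                  # nominal solar constant, W/m^2 (assumed value)
DELTA = math.radians(23.44)  # solar declination at the June solstice

def daily_mean_toa(lat_deg, delta=DELTA):
    """Daily-mean top-of-atmosphere insolation (W/m^2) at a given latitude."""
    lat = math.radians(lat_deg)
    x = -math.tan(lat) * math.tan(delta)  # cosine of the sunset hour angle
    if x <= -1.0:
        h0 = math.pi                      # 24-hour daylight
    elif x >= 1.0:
        return 0.0                        # polar night
    else:
        h0 = math.acos(x)
    return (S0 / math.pi) * (h0 * math.sin(lat) * math.sin(delta)
                             + math.cos(lat) * math.cos(delta) * math.sin(h0))

pole = daily_mean_toa(90.0)    # sun circling at 23.44 deg elevation all day
equator = daily_mean_toa(0.0)  # 12-hour day with the sun high overhead
print(f"{pole:.0f} vs {equator:.0f} W/m^2 -> "
      f"{100 * (pole / equator - 1):.0f}% more at the pole")
```

Running this gives roughly 541 W/m² at the pole against roughly 398 W/m² at the Equator, reproducing the 36% figure: the low sun angle at the pole is more than compensated by the 24-hour day.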
The climate of the Arctic also depends on the amount of sunlight reaching the surface, and being absorbed by the surface. Variations in cloud cover can cause significant variations in the amount of solar radiation reaching the surface at locations with the same latitude. Differences in surface albedo due for example to presence or absence of snow and ice strongly affect the fraction of the solar radiation reaching the surface that is reflected rather than absorbed.
Winter
During the winter months of November through February, the sun remains very low in the sky in the Arctic or does not rise at all. Where it does rise, the days are short, and the sun's low position in the sky means that, even at noon, not much energy is reaching the surface. Furthermore, most of the small amount of solar radiation that reaches the surface is reflected away by the bright snow cover. Cold snow reflects between 70% and 90% of the solar radiation that reaches it, and snow covers most of the Arctic land and ice surface in winter. These factors result in a negligible input of solar energy to the Arctic in winter; the only things keeping the Arctic from continuously cooling all winter are the transport of warmer air and ocean water into the Arctic from the south and the transfer of heat from the subsurface land and ocean (both of which gain heat in summer and release it in winter) to the surface and atmosphere.
Spring
Arctic days lengthen rapidly in March and April, and the sun rises higher in the sky, both bringing more solar radiation to the Arctic than in winter. During these early months of Northern Hemisphere spring, most of the Arctic is still experiencing winter conditions, but with the addition of sunlight. The continued low temperatures and persisting white snow cover mean that this additional solar energy is slow to have a significant impact, because it is mostly reflected away without warming the surface. By May, temperatures are rising as 24-hour daylight reaches many areas, but most of the Arctic is still snow-covered, so the surface reflects more than 70% of the sun's energy that reaches it everywhere but the Norwegian Sea and southern Bering Sea, where the ocean is ice free, and some of the adjacent land areas, where the moderating influence of the open water helps melt the snow early.

In most of the Arctic, significant snow melt begins in late May or sometime in June. This begins a feedback: melting snow reflects less solar radiation (50% to 60%) than dry snow, allowing more energy to be absorbed and the melting to proceed faster. As the snow disappears on land, the underlying surfaces absorb even more energy and begin to warm rapidly.
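The strength of this albedo feedback can be illustrated numerically with the reflectivities quoted in the text (roughly 0.8 for cold dry snow, 0.5-0.6 for melting snow, and under 0.2 for bare land). The incoming flux of 400 W/m² below is a hypothetical round number chosen only for comparison.

```python
incoming = 400.0  # hypothetical daily-mean flux at the surface, W/m^2

# Representative albedos based on the reflectivity ranges quoted in the text
surfaces = {
    "cold dry snow": 0.80,  # reflects 70-90%
    "melting snow":  0.55,  # reflects 50-60%
    "bare land":     0.15,  # absorbs more than 80%
}

for name, albedo in surfaces.items():
    absorbed = (1.0 - albedo) * incoming
    print(f"{name:13s} absorbs {absorbed:5.0f} W/m^2")
```

Under these assumptions, the transition from cold dry snow to melting snow more than doubles the absorbed energy (80 to 180 W/m²), and bare land absorbs over four times as much as dry snow, which is why the melt accelerates once it begins.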
Summer
At the North Pole on the June solstice, around 21 June, the sun circles at 23.5° above the horizon. This marks noon in the Pole's year-long day; from then until the September equinox, the sun will slowly approach nearer and nearer the horizon, offering less and less solar radiation to the Pole. This period of setting sun also roughly corresponds to summer in the Arctic.
As the Arctic continues receiving energy from the sun during this time, the land, which is mostly free of snow by now, can warm up on clear days when the wind is not coming from the cold ocean. Over the Arctic Ocean the snow cover on the sea ice disappears and ponds of melt water start to form on the sea ice, further reducing the amount of sunlight the ice reflects and helping more ice melt. Around the edges of the Arctic Ocean the ice will melt and break up, exposing the ocean water, which absorbs almost all of the solar radiation that reaches it, storing the energy in the water column. By July and August, most of the land is bare and absorbs more than 80% of the sun's energy that reaches the surface. Where sea ice remains, in the central Arctic Basin and the straits between the islands in the Canadian Archipelago, the many melt ponds and lack of snow cause about half of the sun's energy to be absorbed, but this mostly goes toward melting ice since the ice surface cannot warm above freezing.
Frequent cloud cover, exceeding 80% frequency over much of the Arctic Ocean in July, reduces the amount of solar radiation that reaches the surface by reflecting much of it before it gets to the surface. Unusually clear periods can lead to increased sea-ice melt or higher temperatures (NSIDC).
Greenland: The interior of Greenland differs from the rest of the Arctic. Low spring and summer cloud frequency and the high elevation, which reduces the amount of solar radiation absorbed or scattered by the atmosphere, combine to give this region more incoming solar radiation at the surface than anywhere else in the Arctic. However, the high elevation, and the correspondingly lower temperatures, help keep the bright snow from melting, limiting the warming effect of all this solar radiation.
Autumn
In September and October the days get rapidly shorter, and in northern areas the sun disappears from the sky entirely. As the amount of solar radiation available to the surface rapidly decreases, the temperatures follow suit. The sea ice begins to refreeze, and eventually gets a fresh snow cover, causing it to reflect even more of the dwindling amount of sunlight reaching it. Likewise, in the beginning of September both the northern and southern land areas receive their winter snow cover, which combined with the reduced solar radiation at the surface, ensures an end to the warm days those areas may experience in summer. By November, winter is in full swing in most of the Arctic, and the small amount of solar radiation still reaching the region does not play a significant role in its climate.
Temperature
The Arctic is often perceived as a region stuck in a permanent deep freeze. While much of the region does experience very low temperatures, there is considerable variability with both location and season. Winter temperatures average below freezing over all of the Arctic except for small regions in the southern Norwegian and Bering Seas, which remain ice free throughout the winter. Average temperatures in summer are above freezing over all regions except the central Arctic Basin, where sea ice survives through the summer, and interior Greenland.
The maps on the right show the average temperature over the Arctic in January and July, generally the coldest and warmest months. These maps were made with data from the NCEP/NCAR Reanalysis, which incorporates available data into a computer model to create a consistent global data set. Neither the models nor the data are perfect, so these maps may differ from other estimates of surface temperatures; in particular, most Arctic climatologies show temperatures over the central Arctic Ocean in July averaging just below freezing, a few degrees lower than these maps show (USSR, 1985). An earlier climatology of temperatures in the Arctic, based entirely on available data, is shown in this map from the CIA Polar Regions Atlas.
Record low temperatures in the Northern Hemisphere
In 2020 the World Meteorological Organization recognized a temperature of −69.6 °C (−93.3 °F), measured near the topographic summit of the Greenland Ice Sheet on 22 December 1991, as the lowest in the Northern Hemisphere. The record was measured at an automatic weather station and was uncovered after nearly 30 years.

The interior of Russia's Far East, in the upper-right quadrant of the maps, is also among the coldest locations in the Northern Hemisphere. This is due to the region's continental climate, far from the moderating influence of the ocean, and to the valleys in the region that can trap cold, dense air and create strong temperature inversions, in which the temperature increases, rather than decreases, with height.
The lowest officially recorded temperatures in the Northern Hemisphere are −67.7 °C (−89.9 °F), which occurred in Oymyakon on 6 February 1933, and −67.8 °C (−90.0 °F) in Verkhoyansk on 5 and 7 February 1892. However, this region is not part of the Arctic because its continental climate also allows it to have warm summers, with an average July temperature of 15 °C (59 °F). In the figure below showing station climatologies, the plot for Yakutsk is representative of this part of the Far East; Yakutsk has a slightly less extreme climate than Verkhoyansk.
Arctic Basin
The Arctic Basin is typically covered by sea ice year round, which strongly influences its summer temperatures. It also experiences the longest period without sunlight of any part of the Arctic, and the longest period of continuous sunlight, though the frequent cloudiness in summer reduces the importance of this solar radiation.
Despite its location centered on the North Pole, and the long period of darkness this brings, this is not the coldest part of the Arctic. In winter, the heat transferred from the −2 °C (28 °F) water through cracks in the ice and areas of open water helps to moderate the climate some, keeping average winter temperatures around −30 to −35 °C (−22 to −31 °F). Minimum temperatures in this region in winter are around −50 °C (−58 °F).
In summer, the sea ice keeps the surface from warming above freezing. Sea ice is mostly fresh water since the salt is rejected by the ice as it forms, so the melting ice has a temperature of 0 °C (32 °F), and any extra energy from the sun goes to melting more ice, not to warming the surface. Air temperatures, at the standard measuring height of about 2 meters above the surface, can rise a few degrees above freezing between late May and September, though they tend to be within a degree of freezing, with very little variability during the height of the melt season.
In the figure above showing station climatologies, the lower-left plot, for NP 7–8, is representative of conditions over the Arctic Basin. This plot shows data from the Soviet North Pole drifting stations, numbers 7 and 8. It shows the average temperature in the coldest months is in the −30s, and the temperature rises rapidly from April to May; July is the warmest month, and the narrowing of the maximum and minimum temperature lines shows the temperature does not vary far from freezing in the middle of summer; from August through December the temperature drops steadily. The small daily temperature range (the length of the vertical bars) results from the fact that the sun's elevation above the horizon does not change much or at all in this region during one day.
Much of the winter variability in this region is due to clouds. Since there is no sunlight, the thermal radiation emitted by the atmosphere is one of this region's main sources of energy in winter. A cloudy sky can emit much more energy toward the surface than a clear sky, so when it is cloudy in winter, this region tends to be warm, and when it is clear, this region cools quickly.
Canadian Archipelago
In winter, the Canadian Archipelago experiences temperatures similar to those in the Arctic Basin, but in the summer months of June to August, the presence of so much land in this region allows it to warm more than the ice-covered Arctic Basin. In the station-climatology figure above, the plot for Resolute is typical of this region. The presence of the islands, most of which lose their snow cover in summer, allows the summer temperatures to rise well above freezing. The average high temperature in summer approaches 10 °C (50 °F), and the average low temperature in July is above freezing, though temperatures below freezing are observed every month of the year.
The straits between these islands often remain covered by sea ice throughout the summer. This ice acts to keep the surface temperature at freezing, just as it does over the Arctic Basin, so a location on a strait would likely have a summer climate more like the Arctic Basin, but with higher maximum temperatures because of winds off of the nearby warm islands.
Greenland
Climatically, Greenland is divided into two very separate regions: the coastal region, much of which is ice free, and the inland ice sheet. The Greenland Ice Sheet covers about 80% of Greenland, extending to the coast in places, and has an average elevation of 2,100 m (6,900 ft) and a maximum elevation of 3,200 m (10,500 ft). Much of the ice sheet remains below freezing all year, and it has the coldest climate of any part of the Arctic. Coastal areas can be affected by nearby open water, or by heat transfer through sea ice from the ocean, and many parts lose their snow cover in summer, allowing them to absorb more solar radiation and warm more than the interior.
Coastal regions on the northern half of Greenland experience winter temperatures similar to or slightly warmer than the Canadian Archipelago, with average January temperatures of −30 to −25 °C (−22 to −13 °F). These regions are slightly warmer than the Archipelago because of their closer proximity to areas of thin, first-year sea ice cover or to open ocean in the Baffin Bay and Greenland Sea.
The coastal regions in the southern part of the island are influenced more by open ocean water and by frequent passage of cyclones, both of which help to keep the temperature there from being as low as in the north. As a result of these influences, the average temperature in these areas in January is considerably higher, between about −20 to −4 °C (−4 to 25 °F).
The interior ice sheet escapes much of the influence of heat transfer from the ocean or from cyclones, and its high elevation also acts to give it a colder climate since temperatures tend to decrease with elevation. The result is winter temperatures that are lower than anywhere else in the Arctic, with average January temperatures of −45 to −30 °C (−49 to −22 °F), depending on location and on which data set is viewed. Minimum temperatures in winter over the higher parts of the ice sheet can drop below −60 °C (−76 °F) (CIA, 1978). In the station climatology figure above, the Centrale plot is representative of the high Greenland Ice Sheet.
In summer, the coastal regions of Greenland experience temperatures similar to the islands in the Canadian Archipelago, averaging just a few degrees above freezing in July, with slightly higher temperatures in the south and west than in the north and east. The interior ice sheet remains snow-covered throughout the summer, though significant portions do experience some snow melt. This snow cover, combined with the ice sheet's elevation, helps to keep temperatures here lower, with July averages between −12 and 0 °C (10 and 32 °F). Along the coast, temperatures are kept from varying too much by the moderating influence of the nearby water or melting sea ice. In the interior, temperatures are kept from rising much above freezing because of the snow-covered surface but can drop to −30 °C (−22 °F) even in July. Temperatures above 20 °C (68 °F) are rare but do sometimes occur in the far south and south-west coastal areas.
Ice-free seas
Most Arctic seas are covered by ice for part of the year (see the map in the sea-ice section below); 'ice-free' here refers to those which are not covered year-round.
The only regions that remain ice-free throughout the year are the southern part of the Barents Sea and most of the Norwegian Sea. These have very small annual temperature variations; average winter temperatures are kept near or above the freezing point of sea water (about −2 °C (28 °F)) since the unfrozen ocean cannot have a temperature below that, and summer temperatures in the parts of these regions that are considered part of the Arctic average less than 10 °C (50 °F). During the 46-year period when weather records were kept on Shemya Island, in the southern Bering Sea, the average temperature of the coldest month (February) was −0.6 °C (30.9 °F) and that of the warmest month (August) was 9.7 °C (49.5 °F); temperatures never dropped below −17 °C (1 °F) or rose above 18 °C (64 °F) (Western Regional Climate Center).
The rest of the seas have ice cover for some part of the winter and spring, but lose that ice during the summer. These regions have summer temperatures between about 0 and 8 °C (32 and 46 °F). The winter ice cover allows temperatures to drop much lower in these regions than in the regions that are ice-free all year. Over most of the seas that are ice-covered seasonally, winter temperatures average between about −30 and −15 °C (−22 and 5 °F). Those areas near the sea-ice edge will remain somewhat warmer due to the moderating influence of the nearby open water. In the station-climatology figure above, the plots for Point Barrow, Tiksi, Murmansk, and Isfjord are typical of land areas adjacent to seas that are ice-covered seasonally. The presence of the land allows temperatures to reach slightly more extreme values than the seas themselves.
An essentially ice-free Arctic in the month of September may become a reality at some point between 2050 and 2100.
Precipitation
Precipitation in most of the Arctic falls only as rain and snow. Over most areas snow is the dominant, or only, form of precipitation in winter, while both rain and snow fall in summer (Serreze and Barry 2005). The main exception to this general description is the high part of the Greenland Ice Sheet, which receives all of its precipitation as snow, in all seasons.
Accurate climatologies of precipitation amount are more difficult to compile for the Arctic than climatologies of other variables such as temperature and pressure. All variables are measured at relatively few stations in the Arctic, but precipitation observations are made more uncertain due to the difficulty in catching in a gauge all of the snow that falls. Typically some falling snow is kept from entering precipitation gauges by winds, causing an underreporting of precipitation amounts in regions that receive a large fraction of their precipitation as snowfall. Corrections are made to data to account for this uncaught precipitation, but they are not perfect and introduce some error into the climatologies (Serreze and Barry 2005).
The observations that are available show that precipitation amounts vary by about a factor of 10 across the Arctic, with some parts of the Arctic Basin and Canadian Archipelago receiving less than 150 mm (5.9 in) of precipitation annually, and parts of southeast Greenland receiving over 1,200 mm (47 in) annually. Most regions receive less than 500 mm (20 in) annually. For comparison, annual precipitation averaged over the whole planet is about 1,000 mm (39 in) (see Precipitation). Unless otherwise noted, all precipitation amounts given in this article are liquid-equivalent amounts, meaning that frozen precipitation is melted before it is measured.
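The liquid-equivalent convention can be illustrated with a short calculation. The function name and the snow density used here are assumptions for illustration, not values from this article; fresh snow density varies widely, and the 10:1 snow-to-liquid ratio used below is only a common rule of thumb.

```python
def liquid_equivalent_mm(snow_depth_mm, snow_density_kg_m3=100.0):
    """Convert a snow depth to liquid-equivalent precipitation.

    Fresh snow density varies roughly from 50 to 200 kg/m^3; the default
    of 100 kg/m^3 (a 10:1 snow-to-liquid ratio) is a common rule of thumb.
    Water density is taken as 1000 kg/m^3.
    """
    WATER_DENSITY = 1000.0  # kg/m^3
    return snow_depth_mm * snow_density_kg_m3 / WATER_DENSITY

# 150 mm of fresh snow at the assumed density melts to 15 mm of water --
# comparable to a typical winter month's total over the Arctic Basin.
print(liquid_equivalent_mm(150))  # -> 15.0
```

This variability in snow density is one reason gauge measurements of frozen precipitation carry the uncertainty discussed above.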
Arctic Basin
The Arctic Basin is one of the driest parts of the Arctic. Most of the Basin receives less than 250 mm (9.8 in) of precipitation per year, qualifying it as a desert. Smaller regions of the Arctic Basin just north of Svalbard and the Taymyr Peninsula receive up to about 400 mm (16 in) per year.

Monthly precipitation totals over most of the Arctic Basin average about 15 mm (0.59 in) from November through May, and rise to 20 to 30 mm (0.79 to 1.18 in) in July, August, and September. The dry winters result from the low frequency of cyclones in the region during that time, and the region's distance from warm open water that could provide a source of moisture (Serreze and Barry 2005). Despite the low precipitation totals in winter, precipitation frequency is higher in January, when 25% to 35% of observations reported precipitation, than in July, when 20% to 25% of observations reported precipitation (Serreze and Barry 2005). Much of the precipitation reported in winter is very light, possibly diamond dust. The number of days with measurable precipitation (more than 0.1 mm [0.004 in] in a day) is slightly greater in July than in January (USSR 1985). Of January observations reporting precipitation, 95% to 99% of them indicate it was frozen. In July, 40% to 60% of observations reporting precipitation indicate it was frozen (Serreze and Barry 2005).
The parts of the Basin just north of Svalbard and the Taymyr Peninsula are exceptions to the general description just given. These regions receive many weakening cyclones from the North-Atlantic storm track, which is most active in winter. As a result, precipitation amounts over these parts of the basin are larger in winter than those given above. The warm air transported into these regions also means that liquid precipitation is more common than over the rest of the Arctic Basin in both winter and summer.
Canadian Archipelago
Annual precipitation totals in the Canadian Archipelago increase dramatically from north to south. The northern islands receive similar amounts, with a similar annual cycle, to the central Arctic Basin. Over Baffin Island and the smaller islands around it, annual totals increase from just over 200 mm (7.9 in) in the north to about 500 mm (20 in) in the south, where cyclones from the North Atlantic are more frequent.
Greenland
Annual precipitation amounts given below for Greenland are from Figure 6.5 in Serreze and Barry (2005). Due to the scarcity of long-term weather records in Greenland, especially in the interior, this precipitation climatology was developed by analyzing the annual layers in the snow to determine annual snow accumulation (in liquid equivalent) and was modified on the coast with a model to account for the effects of the terrain on precipitation amounts.
The southern third of Greenland protrudes into the North-Atlantic storm track, a region frequently influenced by cyclones. These frequent cyclones lead to larger annual precipitation totals than over most of the Arctic. This is especially true near the coast, where the terrain rises from sea level to over 2,500 m (8,200 ft), enhancing precipitation due to orographic lift. The result is annual precipitation totals of 400 mm (16 in) over the southern interior to over 1,200 mm (47 in) near the southern and southeastern coasts. Some locations near these coasts, where the terrain is particularly conducive to causing orographic lift, receive up to 2,200 mm (87 in) of precipitation per year. More precipitation falls in winter, when the storm track is most active, than in summer.
The west coast of the central third of Greenland is also influenced by some cyclones and orographic lift, and precipitation totals over the ice sheet slope near this coast are up to 600 mm (24 in) per year. The east coast of the central third of the island receives between 200 and 600 mm (7.9 and 23.6 in) of precipitation per year, with increasing amounts from north to south. Precipitation over the north coast is similar to that over the central Arctic Basin.
The interior of the central and northern Greenland Ice Sheet is the driest part of the Arctic. Annual totals here range from less than 100 to about 200 mm (4 to 8 in). This region is continuously below freezing, so all precipitation falls as snow, with more falling in summer than in winter (USSR 1985).
Ice-free seas
The Chukchi, Laptev, and Kara Seas and Baffin Bay receive somewhat more precipitation than the Arctic Basin, with annual totals between 200 and 400 mm (7.9 and 15.7 in); annual cycles in the Chukchi and Laptev Seas and Baffin Bay are similar to those in the Arctic Basin, with more precipitation falling in summer than in winter, while the Kara Sea has a smaller annual cycle due to enhanced winter precipitation caused by cyclones from the North Atlantic storm track.

The Labrador, Norwegian, Greenland, and Barents Seas and the Denmark and Davis Straits are strongly influenced by the cyclones in the North Atlantic storm track, which is most active in winter. As a result, these regions receive more precipitation in winter than in summer. Annual precipitation totals increase quickly from about 400 mm (16 in) in the northern part of the region to about 1,400 mm (55 in) in the southern part. Precipitation is frequent in winter, with measurable totals falling on an average of 20 days each January in the Norwegian Sea (USSR 1985). The Bering Sea is influenced by the North Pacific storm track, and has annual precipitation totals between 400 and 800 mm (16 and 31 in), also with a winter maximum.
Sea ice
Sea ice is frozen sea water that floats on the ocean's surface. It is the dominant surface type throughout the year in the Arctic Basin, and covers much of the ocean surface in the Arctic at some point during the year. The ice may be bare ice, or it may be covered by snow or ponds of melt water, depending on location and time of year. Sea ice is relatively thin, generally less than about 4 m (13 ft), with thicker ridges (NSIDC). NOAA's North Pole Web Cams have been tracking the Arctic summer sea-ice transitions through spring thaw, summer melt ponds, and autumn freeze-up since the first webcam was deployed in 2002.
Sea ice is important to the climate and the ocean in a variety of ways. It reduces the transfer of heat from the ocean to the atmosphere; it causes less solar energy to be absorbed at the surface, and provides a surface on which snow can accumulate, which further decreases the absorption of solar energy; since salt is rejected from the ice as it forms, the ice increases the salinity of the ocean's surface water where it forms and decreases the salinity where it melts, both of which can affect the ocean's circulation.

The map at right shows the areas covered by sea ice when it is at its maximum extent (March) and its minimum extent (September). This map was made in the 1970s, and the extent of sea ice has decreased since then (see below), but it still gives a reasonable overview. At its maximum extent, in March, sea ice covers about 15 million km2 (5.8 million sq mi) of the Northern Hemisphere, nearly as much area as the largest country, Russia.

Winds and ocean currents cause the sea ice to move. The typical pattern of ice motion is shown on the map at right. On average, these motions carry sea ice from the Russian side of the Arctic Ocean into the Atlantic Ocean through the area east of Greenland, while they cause the ice on the North American side to rotate clockwise, sometimes for many years.
Wind
Wind speeds over the Arctic Basin and the western Canadian Archipelago average between 4 and 6 metres per second (14 and 22 kilometres per hour, 9 and 13 miles per hour) in all seasons. Stronger winds do occur in storms, often causing whiteout conditions, but they rarely exceed 25 m/s (90 km/h; 56 mph) in these areas.

During all seasons, the strongest average winds are found in the North-Atlantic seas, Baffin Bay, and the Bering and Chukchi Seas, where cyclone activity is most common. On the Atlantic side, the winds are strongest in winter, averaging 7 to 12 m/s (25 to 43 km/h; 16 to 27 mph), and weakest in summer, averaging 5 to 7 m/s (18 to 25 km/h; 11 to 16 mph). On the Pacific side they average 6 to 9 m/s (22 to 32 km/h; 14 to 20 mph) year round. Maximum wind speeds in the Atlantic region can approach 50 m/s (180 km/h; 110 mph) in winter.
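The unit conversions quoted alongside the wind speeds above follow directly from the definitions of the units. A minimal helper (function names are illustrative, not from any cited source):

```python
def ms_to_kmh(v):
    """Convert a wind speed from m/s to km/h (exact factor 3.6)."""
    return v * 3.6

def ms_to_mph(v):
    """Convert a wind speed from m/s to mph (1 mile = 1609.344 m)."""
    return v * 3600.0 / 1609.344

# 25 m/s corresponds to 90 km/h, or about 56 mph.
print(round(ms_to_kmh(25)))  # -> 90
print(round(ms_to_mph(25)))  # -> 56
```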
Changes in Arctic climate
Past climates
As with the rest of the planet, the climate in the Arctic has changed throughout time. About 55 million years ago it is thought that parts of the Arctic supported subtropical ecosystems and that Arctic sea-surface temperatures rose to about 23 °C (73 °F) during the Paleocene–Eocene Thermal Maximum. In the more recent past, the planet has experienced a series of ice ages and interglacial periods over about the last 2 million years, with the last ice age reaching its maximum extent about 18,000 years ago and ending by about 10,000 years ago. During these ice ages, large areas of northern North America and Eurasia were covered by ice sheets similar to the one found today on Greenland; Arctic climate conditions would have extended much further south, and conditions in the present-day Arctic region were likely colder. Temperature proxies suggest that over the last 8,000 years the climate has been stable, with globally averaged temperature variations of less than about 1 °C (1.8 °F) (see Paleoclimate).
Global warming
There are several reasons to expect that climate changes, from whatever cause, may be enhanced in the Arctic, relative to the mid-latitudes and tropics. First is the ice-albedo feedback, whereby an initial warming causes snow and ice to melt, exposing darker surfaces that absorb more sunlight, leading to more warming. Second, because colder air holds less water vapour than warmer air, in the Arctic a greater fraction of any increase in radiation absorbed by the surface goes directly into warming the atmosphere, whereas in the tropics a greater fraction goes into evaporation. Third, because the Arctic temperature structure inhibits vertical air motions, the depth of the atmospheric layer that has to warm in order to cause warming of near-surface air is much shallower in the Arctic than in the tropics. Fourth, a reduction in sea-ice extent will lead to more energy being transferred from the warm ocean to the atmosphere, enhancing the warming. Finally, changes in atmospheric and oceanic circulation patterns caused by a global temperature change may cause more heat to be transferred to the Arctic, enhancing Arctic warming.

According to the Intergovernmental Panel on Climate Change (IPCC), "warming of the climate system is unequivocal", and the global-mean temperature has increased by 0.6 to 0.9 °C (1.1 to 1.6 °F) over the last century. This report also states that "most of the observed increase in global average temperatures since the mid-20th century is very likely [greater than 90% chance] due to the observed increase in anthropogenic greenhouse gas concentrations." The IPCC also indicates that, over the last 100 years, the annually averaged temperature in the Arctic has increased by almost twice as much as the global mean temperature.
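The ice-albedo feedback named first above can be sketched with a toy surface energy calculation. This is an illustrative back-of-the-envelope model, not a climate-model result: the solar flux and albedo values below are rough assumed figures chosen only to show the contrast.

```python
# Toy illustration of the ice-albedo feedback: the same incoming solar flux
# deposits far more energy in dark open ocean than in snow-covered ice,
# so each patch of melted ice accelerates further warming and melting.
SOLAR_FLUX = 200.0        # W/m^2, assumed representative summer insolation
ALBEDO_SNOW_ICE = 0.8     # fresh snow/ice reflects roughly 80% of sunlight
ALBEDO_OPEN_OCEAN = 0.06  # dark open ocean reflects only about 6%

def absorbed(flux, albedo):
    """Shortwave energy absorbed at the surface, in W/m^2."""
    return flux * (1.0 - albedo)

ice = absorbed(SOLAR_FLUX, ALBEDO_SNOW_ICE)      # about 40 W/m^2
ocean = absorbed(SOLAR_FLUX, ALBEDO_OPEN_OCEAN)  # about 188 W/m^2
print(round(ocean / ice, 1))  # -> 4.7, open water absorbs ~4.7x more energy
```

The real feedback also involves clouds, ocean heat transport, and seasonal sun angle, but the albedo contrast is the core of the mechanism.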
In 2009, NASA reported that 45 percent or more of the observed warming in the Arctic since 1976 was likely a result of changes in tiny airborne particles called aerosols.

Climate models predict that the temperature increase in the Arctic over the next century will continue to be about twice the global average temperature increase. By the end of the 21st century, the annual average temperature in the Arctic is predicted to increase by 2.8 to 7.8 °C (5.0 to 14.0 °F), with more warming in winter (4.3 to 11.4 °C (7.7 to 20.5 °F)) than in summer. Decreases in sea-ice extent and thickness are expected to continue over the next century, with some models predicting the Arctic Ocean will be free of sea ice in late summer by the mid to late part of the century.

A study published in the journal Science in September 2009 determined that temperatures in the Arctic are presently higher than they have been at any time in the previous 2,000 years. Samples from ice cores, tree rings and lake sediments from 23 sites were used by the team, led by Darrell Kaufman of Northern Arizona University, to provide snapshots of the changing climate. Geologists were able to track summer Arctic temperatures as far back as the time of the Romans by studying natural signals in the landscape. The results highlighted that for around 1,900 years temperatures steadily dropped, caused by precession of Earth's orbit that left the planet slightly farther from the Sun during summer in the Northern Hemisphere. These orbital changes led to a cold period known as the Little Ice Age during the 17th, 18th and 19th centuries. However, during the last 100 years temperatures have been rising, despite the fact that the continued changes in Earth's orbit would have driven further cooling. The largest rises have occurred since 1950, with four of the five warmest decades in the last 2,000 years occurring between 1950 and 2000. The last decade was the warmest in the record.
See also
Notes
Bibliography
ACIA, 2004 Impacts of a Warming Arctic: Arctic Climate Impact Assessment. Cambridge University Press.
IPCC, 2007: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)). Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp.
NOAA's annually updated Arctic Report Card tracks recent environmental changes.
National Aeronautics and Space Administration. Arctic Sea Ice Continues to Decline, Arctic Temperatures Continue to Rise In 2005. Accessed September 6, 2007.
National Snow and Ice Data Center. All About Sea Ice. Accessed October 19, 2007.
National Snow and Ice Data Center. Cryospheric Climate Indicators: Sea Ice Index. Accessed September 6, 2007.
National Snow and Ice Data Center. NSIDC Arctic Climatology and Meteorology Primer. Accessed August 19, 2007.
Przybylak, Rajmund, 2003: The Climate of the Arctic, Kluwer Academic Publishers, Norwell, MA, USA, 270 pp.
Serreze, Mark C.; Hurst, Ciaran M. (2000). "Representation of Mean Arctic Precipitation from NCEP–NCAR and ERA Reanalyses". Journal of Climate. 13 (1): 182–201. Bibcode:2000JCli...13..182S. doi:10.1175/1520-0442(2000)013<0182:ROMAPF>2.0.CO;2.
Serreze, Mark C. and Roger Graham Barry, 2005: The Arctic Climate System, Cambridge University Press, New York, 385 pp.
UNEP (United Nations Environment Programme), 2007: Global Outlook for Ice & Snow, Chapter 5.
United States Central Intelligence Agency, 1978: Polar Regions Atlas, National Foreign Assessment Center, Washington, DC, 66 pp.
USSR State Committee on Hydrometeorology and Environment, and The Arctic and Antarctic Research Institute (chief editor A.F. Treshnikov), 1985: Atlas Arktiki (Atlas of the Arctic), Central Administrative Board of Geodesy and Cartography of the Ministerial Council of the USSR, Moscow, 204 pp (in Russian with some English summaries). [Государственный Комитет СССР по Гидрометеорологии и Контролю Природной Среды, и Ордена Ленина Арктический и Антарктический Научно-Исследовательский Институт (главный редактор Трешников А.Ф.), 1985: Атлас Арктики, Главное Управление Геодезии и Картографии при Совете Министров СССР, Москва, 204 стр.]
External links
DAMOCLES, Developing Arctic Modeling and Observing Capabilities for Long-term Environmental Studies, Arctic Centre, University of Lapland European Union
Video on Climate Research in the Bering Sea
Arctic Theme Page – A comprehensive resource focused on the Arctic from NOAA
Arctic Change Detection – A near-realtime Arctic Change Indicator website NOAA
The Future of Arctic Climate and Global Impacts from NOAA
Collapsing "Coastlines". Science News. July 16, 2011; Vol. 180 #2.
"How Climate Change Is Growing Forests in the Arctic". Time. June 4, 2012.
Video of Ilulissat Glacier, Greenland – frontline retreats faster than before (4min 20s)
Further reading
"Arctic Ice Caps May Be More Prone to Melt; A new core pulled from Siberia reveals a 2.8-million-year history of warming and cooling". Scientific American. June 22, 2012.
TotalEnergies

TotalEnergies SE is a French multinational integrated energy and petroleum company founded in 1924 and one of the seven supermajor oil companies. Its businesses cover the entire oil and gas chain, from crude oil and natural gas exploration and production to power generation, transportation, refining, petroleum product marketing, and international crude oil and product trading. TotalEnergies is also a large-scale chemicals manufacturer.
TotalEnergies has its head office in the Tour Total in La Défense district in Courbevoie, west of Paris. The company is a component of the Euro Stoxx 50 stock market index. In the 2023 Forbes Global 2000, TotalEnergies was ranked as the 21st largest public company in the world.
History
1924–1985: Compagnie française des pétroles
The company was founded after World War I, when petrol was seen as vital in case of a new war with Germany. The then-French President Raymond Poincaré rejected the idea of forming a partnership with Royal Dutch Shell in favour of creating an entirely French oil company. At Poincaré's behest, Col. Ernest Mercier, with the support of 90 banks and companies, founded Total in 1924, as the Compagnie française des pétroles (CFP) (in English, the French Petroleum Company).
As per the agreement reached during the San Remo conference of 1920, the French state received the 25% share held by Deutsche Bank in the Turkish Petroleum Company (TPC) as part of the compensation for war damages caused by Germany during World War I. The French government's stake in TPC was transferred to CFP, and the Red Line agreement in 1928 rearranged the shareholding of CFP in TPC (later renamed the Iraq Petroleum Company in 1929) to 23.75%. The company from the start was regarded as a private sector company in view of its listing on the Paris Stock Exchange in 1929.
During the 1930s, the company was engaged in exploration and production, primarily from the Middle East. Its first refinery began operating in Normandy in 1933. After World War II, CFP engaged in oil exploration in Venezuela, Canada, and Africa while pursuing energy sources within France. Exploration in Algeria, then a French colony, began in 1946, with Algeria becoming a leading source of oil in the 1950s.

In 1954, CFP introduced its downstream product, the Total brand of gasoline, in Africa and Europe.

Total entered the United States in 1971 by acquiring Leonard Petroleum of Alma, Michigan, and several Standard Oil of Indiana stations in Metro Detroit.

In 1980, Total Petroleum (North America) Ltd., a company controlled 50% by CFP, bought the American refining and marketing assets of Vickers Petroleum as part of a sell-off by Esmark of its energy holdings. This purchase gave Total refining capacity, transportation, and a network of 350 service stations in 20 states.
1985–2003: Total CFP and rebranding to Total
Total's leadership had been aware of the deleterious effects of global warming since at least 1971. The company nevertheless openly denied the findings of climate science until the 1990s, and pursued a number of strategies to cover up the threat of, and its contribution to, the climate crisis.

The company renamed itself Total CFP in 1985, to build on the popularity of its gasoline brand. In 1991, the name was changed to Total, when it became a public company listed on the New York Stock Exchange. In 1991, the French government held more than 30 percent of the company's stock, but by 1996 it had reduced its stake to less than 1 percent. Between 1990 and 1994, foreign ownership of the firm increased from 23 percent to 44 percent.
Meanwhile, Total continued to expand its retail presence in North America under several brand names. In 1989, Denver, Colorado–based Total Petroleum, Total CFP's North American unit, purchased 125 Road Runner retail locations from Texarkana, Texas–based Truman Arnold Companies. By 1993, Total Petroleum was operating 2,600 retail stores under the Vickers, Apco, Road Runner, and Total brands. That year, the company began remodeling and rebranding all of its North American gasoline and convenience stores to use the Total name. Four years later, Total sold its North American refining and retail operations to Ultramar Diamond Shamrock for $400 million in stock and $414 million in assumed debt.

After Total's takeover of Petrofina of Belgium in 1999, it became known as Total Fina. Afterwards, it also acquired Elf Aquitaine. First named TotalFinaElf after the merger in 2000, its name reverted to Total in 2003. During that rebranding, the globe logo was unveiled.
2003–2021
In 2003, Total signed on for a 30% stake in the South Rub' al-Khali gas exploration joint venture in the Kingdom of Saudi Arabia (KSA), along with Royal Dutch Shell and Saudi Aramco. The stake was later bought out by its partners.
In 2006, Saudi Aramco and Total signed an MOU to develop the Jubail refinery and petrochemical project in Saudi Arabia, which targeted 400,000 barrels per day (bpd). Two years later, the two companies officially established a joint venture, the Saudi Aramco Total Refining and Petrochemical Company (SATORP), in which Saudi Aramco held a 62.5% stake and Total the remaining 37.5%.

Total withdrew in 2006 from all Iranian development work because of United Nations concerns, which resulted in sanctions, over possible weaponization of Iran's nuclear program.

During the 2009–2010 Iraqi oil services contracts tender, a consortium led by CNPC (37.5%), which also included Total (18.75%) and Petronas (18.75%), was awarded a production contract for the Halfaya field in the south of Iraq, which contains an estimated 4.1 billion barrels (650,000,000 m3) of oil.

As of 2010, Total had over 96,000 employees and operated in more than 130 countries. In 2010, Total announced plans to pull out of the forecourt market in the United Kingdom.

In 2012, Total announced it was selling its 20% stake and operating mandate in its Nigerian offshore project to a unit of China Petrochemical Corp for $2.5 billion.

In 2013, Total started operations at Kashagan with the North Caspian Operating Company; it is the biggest discovery of oil reserves since 1968. The same year, Total increased its stake in Novatek to 16.96%, and Total and its joint-venture partner agreed to buy Chevron Corporation's retail distribution business in Pakistan for an undisclosed amount.

In January 2014, Total became the first major oil and gas firm to acquire exploration rights for shale gas in the UK, after buying a 40 percent interest in two licences in the Gainsborough Trough area of northern England for $48 million.
In July 2014, the company disclosed it was in talks to sell its LPG distribution business in France to Pennsylvania-based UGI Corporation for €450 million ($615 million).

On 20 October 2014, at 23:57 MST, a Dassault Falcon 50 business jet bound for Paris caught fire and exploded during takeoff from Vnukovo International Airport after colliding with a snow removal vehicle, killing all four people on board, including three crew members and Total CEO Christophe de Margerie. Alcohol was found in the blood of the driver of the vehicle. Patrick Pouyanné, Total's refining chief at the time, was appointed CEO, and also became chairman of Total in 2015.
In 2015, Total unveiled plans to cut 180 jobs in the United Kingdom, reduce refinery capacity and slow spending on North Sea fields after it fell to a $5.7bn final-quarter loss. The company said it would also sell off $5bn worth of assets worldwide and cut exploration costs by 30%.

In 2016, Total signed a $224M deal to buy Lampiris, the third-largest Belgian supplier of gas and renewable energy, to expand its gas and power distribution activities. The same year, Total bought French battery maker Saft Groupe S.A. in a $1.1bn deal to boost its development in the renewable energy and electricity businesses, and agreed to acquire $2.2 billion in upstream and downstream assets from Petrobras as part of the firms' strategic alliance announced earlier that year. For Total, these new partnerships with Petrobras reinforce its position in Brazil through access to new fields in the Santos Basin while entering the gas value chain.
Between 2013 and 2017, Total organized the ARGOS Challenge, a robotics competition aimed at developing robots for its oil and gas production sites. It was won by an Austrian-German team using a variant of the taurob tracker robot.
In 2017, Total signed a deal worth $4.8 billion with Iran for the development and production of South Pars, the world's largest gas field. The deal was the first foreign investment in Iran since sanctions over Iran's nuclear program were lifted under the 2015 JCPOA.
In 2017, Total announced the acquisition of Maersk Oil for $7.45 billion in a share and debt transaction, a deal that positioned Total as the second-largest operator in the North Sea. The same year, Total signed an agreement with EREN Renewable Energy to acquire a 23% interest in EREN RE for €237.5 million.
In November 2017, Total announced the launch on the French residential market of Total Spring, a natural gas and green power offering 10% cheaper than regulated tariffs, pursuing its strategy of downstream integration in the gas and power value chain in Europe.
In 2018, Total officially withdrew from the Iranian South Pars gas field because of sanctions pressure from the US.
In 2019, Total announced the sale of a 30% stake in the Trapil pipeline network to crude oil storage operator Pisto SAS for €260 million. Later that year, Total signed deals to transfer 30% and 28.33% of its assets in Namibia's Block 2913B and Block 2912 respectively to QatarEnergy.
The company will also transfer 40% of its existing 25% interests in the Orinduik and Kanuku blocks of Guyana and a 25% interest in Blocks L11A, L11B, and L12 of Kenya to QatarEnergy.
In 2020, the company announced its intention to cut 500 jobs in France through voluntary departures.
In 2021, Total left the American Petroleum Institute lobby, citing differences over how to tackle climate change. That year, Total reported an income of $3 billion for January–March, close to pre-pandemic levels.
2021–present: Rebranding to TotalEnergies
In 2021, the company announced a name change to TotalEnergies to reflect its investments in the production of green electricity. At the Ordinary and Extraordinary Shareholders' Meeting in May of that year, shareholders approved the name change to TotalEnergies.
In 2022, TotalEnergies announced it would end all operations in Myanmar, citing rampant human rights abuses and deteriorating rule of law since the 2021 Myanmar coup d'état; it also called for international sanctions targeting the country's oil and gas sector, one of the main sources of revenue for Myanmar's government. As of 11 March 2022, Total was one of the only Western oil companies to continue operating in Russia after the Russian invasion of Ukraine.
In June 2022, TotalEnergies signed a partnership with QatarEnergy for the world's largest LNG expansion project, the North Field East (NFE). Holding the largest stake, 6.25%, TotalEnergies will hold the equivalent of one of the four trains. In September 2022, an additional agreement was signed covering the North Field South (NFS), the second phase of the NFE, giving TotalEnergies 9.375% of the 25% available to international companies.
On 30 March 2023, Total sold a shipment of LNG sourced from the UAE to CNOOC on the Shanghai Petroleum and Natural Gas Exchange, reportedly the first trade settled in renminbi (Chinese yuan) on the SHPGX.
In July 2023, Iraq signed a $27 billion energy agreement with TotalEnergies to develop the country's energy sector and boost output of oil, gas and renewables. The same month, Indian Oil Corp signed liquefied natural gas (LNG) import deals with ADNOC LNG and TotalEnergies.
In October 2023, TotalEnergies sold its Canadian operations to Suncor Energy for C$1.47 billion ($1.07 billion).
TotalEnergies has also agreed to buy liquefied natural gas from Qatar for 27 years, cementing France's commitment to fossil fuels beyond 2050.
Financial data
Organization
Business segments
In 2016, Total set up a new organization to achieve its ambition to become a responsible energy major. It is composed of the following segments: Exploration & Production; Gas, Renewables & Power; Refining & Chemicals; Trading & Shipping; Marketing & Services; and Total Global Services.
In 2016 Total created two new corporate divisions: People & Social Responsibility (Human Resources; Health, Safety & Environment; the Security Division; and a new Civil Society Engagement Division) and Strategy & Innovation (Strategy & Climate Division, responsible for ensuring that strategy incorporates the 2 °C global warming scenario, Public Affairs, Audit, Research & Development, the Chief Digital Officer and the Senior Vice President Technology).
Head office
The company's headquarters is in the Tour Total in the La Défense district in Courbevoie, France, near Paris. The building was originally constructed between 1983 and 1985 for Elf Aquitaine; Total SA acquired the building after its merger with Elf in 2000.
Senior management
The present chairman and CEO of the company is Patrick Pouyanné (2014 to present). In 2015, Patricia Barbizet was named Lead Independent Director.
Group Performance Management Committee
The role of this committee is to examine, analyze and steer the Group's safety, financial and business results. In addition to the members of the executive committee, it is composed of the managers in charge of the group's main business units, as well as a limited number of senior vice presidents of functions at group and branch levels. The committee has existed in this form since 2016.
Executive committee
The executive committee is Total's primary decision-making organization. Since 2020, members of Total's executive committee have been:
Patrick Pouyanné, chairman and CEO.
Arnaud Breuillac, president of Exploration & Production.
Patrick de La Chevardière, chief financial officer.
Alexis Vovk, president of Marketing & Services.
Philippe Sauquet, president of Gas, Renewables & Power and executive vice president of Strategy & Innovation.
Namita Shah, executive vice president of People & Social Responsibility.
Bernard Pinatel, president of Refining & Chemicals.
Operations
In May 2014, the company shelved its Joslyn North oil sands project in the Athabasca region of Alberta, Canada indefinitely, citing concerns about operating costs. An estimated $11 billion had been spent on the project, in which Total is the largest shareholder with 38.5%; Suncor Energy holds 36.75%, Occidental Petroleum 15% and Japan's Inpex 10%.
Total is involved in 23 exploration and production projects in Africa, Asia, Europe, North America, South America and Russia.
Investments
In 1937, the Iraq Petroleum Company (IPC), 23.75 percent owned by Total, signed an oil concession agreement with the Sultan of Muscat. IPC offered financial support to raise an armed force that would assist the Sultan in occupying the interior region of Oman, an area that geologists believed to be rich in oil. This led to the 1954 outbreak of the Jebel Akhdar War in Oman, which lasted more than five years.
Total has been a significant investor in the Iranian energy sector since 1990. In 2017, Total and the National Iranian Oil Company (NIOC) signed a contract for the development and production of South Pars, the world's largest gas field. The project will have a production capacity of 2 billion cubic feet per day, with the produced gas supplying the Iranian domestic market starting in 2021.
During the European Union's sanctions against the military dictatorship in Myanmar, Total was able to continue operating the Yadana natural gas pipeline from Burma to Thailand. Total is the subject of a lawsuit in French and Belgian courts for condoning and using civilian slavery in the country to construct the pipeline. The documentary Total Denial shows the background of this project, and the NGO Burma Campaign UK campaigns against it.
Acquisitions
In 2011, Total agreed to buy 60% of photovoltaics company SunPower for US$1.38 billion. By the 2013 annual reporting date, Total owned 64.65%.
In 2016, Total agreed to purchase French battery maker Saft Groupe S.A. for 1.1 billion euros, and signed a $224M deal to buy Lampiris, the third-largest Belgian supplier of gas and renewable energy, to expand its gas and power distribution activities. In December 2016, Total acquired about 23% of Tellurian for $207 million, to develop an integrated gas project.
In 2017, Total announced it would buy Maersk Oil from A.P. Moller-Maersk in a deal expected to close in the first quarter of 2018. In 2018, Total announced it was buying 74% of the French electricity and gas provider Direct Énergie from its main stockholders for 1.4 billion euros.
In 2022, Total announced it had added 4 GW to its renewable energy portfolio through the acquisition of the Austin-based company Core Solar. The following month, Total entered an agreement with GIP to acquire a 50% stake in Clearway, one of the largest renewable energy owners in the United States. As part of the transaction, GIP took a 50%-minus-one-share stake in SunPower.
In 2023, TotalEnergies acquired three gas-fired power plants with a total capacity of 1.5 GW in Texas from TexGen for $635 million.
Western Sahara oil exploration
In 2001, Total signed a contract for oil reconnaissance in areas offshore Western Sahara (near Dakhla) with the Moroccan Office National de Recherches et d'Exploitations Petrolières (ONAREP). In 2002, Hans Corell, the United Nations Under-Secretary-General for Legal Affairs, stated in a letter to the president of the Security Council that contracts limited to exploration are not illegal, but that further exploration or exploitation against the interests and wishes of the people of Western Sahara would violate the principles of international law.
Total ultimately decided not to renew its license off Western Sahara.
Energy Deal with ADNOC
In a move to cope with the 2021–2022 global energy crisis, which started with the onset of the COVID-19 pandemic and was aggravated by Russia's 2022 invasion of Ukraine, France's TotalEnergies and the UAE's ADNOC signed a strategic deal to partner on energy projects "for cooperation in the area of energy supplies".
The deal was secured on the second day of the UAE leader Sheikh Mohamed bin Zayed Al-Nahyan’s visit to Paris in 2022. The visit marked the UAE president’s first overseas state visit since assuming the post earlier that year.
The deal was aimed at identifying potential joint investment projects in the UAE, France, and elsewhere in the renewables, hydrogen, and nuclear energy sectors, according to a French government statement. According to aides of French President Emmanuel Macron, France had been eager to secure diesel supply from the UAE.
The deal also drew criticism from human rights groups, which pressed Macron not to give the then "crown prince a pass on the UAE's atrocious human rights record", as per a statement published by Human Rights Watch on its website.
Controversies
Environmental and safety records
In 1999, Total SA was fined €375,000 (reported elsewhere as just over $298,000) for the MV Erika oil spill, which stretched 400 kilometers from La Rochelle to the western tip of Brittany. The fine was limited because Total SA, which did not own the ship, was held only partially liable. The plaintiffs had sought more than $1.5 billion in damages, and more than 100 groups and local governments joined in the suit. Most of the money went to the French government, several environmental groups, and various regional governments; Total was also fined $550,000 for the resulting marine pollution. After the oil spill the company tried to restore its image, opening a sea turtle conservation project in Masirah in recent years.
Prior to the verdict in which Total was found guilty, one of the counterparts in the incident, the Malta Maritime Authority (MMA), was excluded from trial. In 2005, Total submitted a report to the Paris courts stating that a group of experts it had gathered found the tanker to be corroded and that Total was responsible for it. The courts sought a second expert review of this information, which was turned down.
In 2001, the AZF chemical plant exploded in Toulouse, France, while belonging to the Grande Paroisse branch of Total.
In 2008, Total was required to pay €192 million in compensation to victims of the pollution caused by the sinking of the Erika, in addition to the €200 million that Total had spent to help clean up the spill. The company appealed twice against the verdict, losing both times.
In 2016, Total was ranked the second-best of 92 oil, gas, and mining companies on indigenous rights in the Arctic. According to the CDP Carbon Majors Report 2017, the company was one of the top 100 companies producing carbon emissions globally, responsible for 0.9% of global emissions from 1998 to 2015. In 2021, Total was ranked the 2nd most environmentally responsible company out of 120 oil, gas, and mining companies involved in resource extraction north of the Arctic Circle in the Arctic Environmental Responsibility Index (AERI).
According to a 2021 study, Total personnel were aware of the role their products played in global warming as early as 1971, and throughout the 1980s. Despite this awareness, the company promoted doubt regarding the science of global warming by the late 1980s, and ultimately settled in the late 1990s on a position of publicly accepting climate science while still promoting doubt and trying to delay climate action.
Bribery
Total has been accused of bribery on multiple occasions.
Total has been implicated in a bribe commission scandal in Malta. It emerged that Total had told Maltese agents it would not be interested in doing business with them unless their team included George Farrugia, who was under investigation in the procurement scandal and was later given a presidential pardon in exchange for information. Enemalta, Malta's energy supplier, swiftly barred Total and its agent, Trafigura, from bidding in tenders. An investigation was opened and three people were arraigned.
On 16 December 2008, the managing director of the Italian division of Total, Lionel Levha, and ten other executives were arrested by the public prosecutor's office of Potenza, Italy, on a charge of paying €15 million in bribes to win an oilfield contract in Basilicata. Also arrested were the local deputy of Partito Democratico, Salvatore Margiotta, and an Italian entrepreneur.
In 2010, Total was accused of bribing Iraqi officials during former president Saddam Hussein's regime to secure oil supplies. A United Nations report later revealed that Iraqi officials had received bribes from oil companies to secure contracts worth over $10bn. On 26 February 2016, the Paris Court of Appeals found Total guilty and ordered the company to pay a fine of €750,000 for corrupting Iraqi civil servants, overturning an earlier acquittal in the case.
In 2013, a case was settled concerning charges that Total bribed an Iranian official with $60 million, documented as a "consulting charge", to gain unfair access to Iran's Sirri A and Sirri E oil and gas fields. The bribery gave Total a competitive advantage, earning it an estimated $150 million in profits. The Securities and Exchange Commission and the Department of Justice settled the charges, with Total expected to pay $398 million.
2022 Russian invasion of Ukraine
Following the 2022 Russian invasion of Ukraine, which began on February 24, many international, particularly Western, companies pulled out of Russia. On March 1, TotalEnergies announced it "will no longer provide capital for new projects in Russia" but retained its 19.4% stake in privately owned Novatek, its 20% stake in the Yamal project and its 10% stake in Arctic LNG 2. This was criticized as insufficient, particularly given the complete divestment of other major Western energy companies and the European Union's announced aim of becoming more energy independent from Russia.
In August 2022, an investigation by Global Witness showed that a Siberian gas field part-owned by TotalEnergies had been supplying a refinery producing jet fuel for Russian warplanes, contradicting Total's claims that its Russian operations were unrelated to military operations in Ukraine.
Africa
In December 2022, the NGOs Friends of the Earth, Survie and four Ugandan NGOs took the oil group Total to court, accusing it of violating the French law on the duty of vigilance of large companies with regard to human rights and the environment. The Tilenga Project, which TotalEnergies is undertaking in conjunction with the China National Offshore Oil Corporation, consists of drilling for oil in the Murchison Falls National Park, a habitat for diverse species of birds and animals. The project also involves building a pipeline from the site in land-locked Uganda to Tanga in Tanzania. Critics are concerned that, since the proposed pipeline passes through the Lake Victoria basin and close to a number of wildlife areas in Tanzania and Kenya, oil spills could threaten the lake and have adverse effects on wildlife, some of it endangered, in various national parks.
Automobile and motorcycle OEM partnerships
TotalEnergies is an officially recommended fuel and lubricant supplier for all prominent Renault–Nissan–Mitsubishi Alliance members, including Renault (shared with BP), Nissan (shared with ExxonMobil), Infiniti, Dacia, Alpine and Datsun; for Kia Motors, three Stellantis marques (Citroën, Peugeot and DS), Honda (including Acura, shared with BP and ExxonMobil), Aston Martin, Mazda (shared with BP and its subsidiary Castrol), Sany and Tata Motors (shared with Petronas) for automobiles; and for Peugeot Motocycles, Kawasaki (fuel only), Energica, and Honda for motorcycles.
Sponsorship
Total has provided fuel and lubricants to professional auto racing teams.
Total has been a longtime partner of Citroën Sport in the World Rally Championship, Dakar Rally and World Touring Car Championship; Sébastien Loeb won nine WRC drivers' titles, while Ari Vatanen and Pierre Lartigue won four editions of the Dakar Rally. Total has been a partner of Peugeot Sport in Formula One from 1995 to 2000, in the British Touring Car Championship in 1995 and 1996, and since 2001 in the World Rally Championship, Intercontinental Rally Challenge, 24 Hours of Le Mans, Intercontinental Le Mans Cup, Dakar Rally and Pikes Peak International Hill Climb. Total is also a partner of Peugeot Sport for its customer racing TCR Touring Car programme and its Le Mans Hypercar project in the FIA World Endurance Championship.
Total was a partner of Renault Sport in Formula One from 2009 to 2016. Its logo appeared on the Red Bull Racing cars between 2009 and 2016, the Renault F1 cars in 2009, 2010 and 2016, and the Lotus F1 cars from 2011 to 2014. Total also partnered with the Caterham F1 Team in 2011–2014, Scuderia Toro Rosso in 2014–2015 and the Williams F1 Team in 2012–2013.
Total was the title sponsor of the Copa Sudamericana football tournament in 2013 and 2014. In 2017, Total was appointed by the FIA and ACO as official fuel supplier of the World Endurance Championship and 24 Hours of Le Mans from the 2018–19 season onwards. Total was also one of the official sponsors, from 2013 to 2022, of Club América, one of the most popular and influential Mexican football teams.
In terms of educational development, Total provides scholarships worth millions of euros annually to international students to study in France. These programs are mainly for master's degrees; doctoral scholarships are also offered, but in limited numbers. The students come mainly from Europe, Africa, Asia, and the Middle East, where Total operates; students from Africa are mainly from Nigeria.
The scholarship covers tuition and a monthly allowance of 1,400 euros (2014 disbursement), intended to cover food, transportation, and accommodation. The drop in oil prices in 2015 led to a reduction in the number of scholars.
In 2016, Total secured an eight-year sponsorship package from the Confederation of African Football (CAF) to support 10 of its principal competitions, starting with the Africa Cup of Nations held in Gabon, which was accordingly renamed the Total Africa Cup of Nations.
Following Total's purchase of Direct Énergie in the summer of 2018, the Direct Énergie cycling team changed its name the following year to Total Direct Énergie ahead of that year's edition of Paris–Roubaix. In 2021 the team changed its name again to Team TotalEnergies in time for that year's Tour de France.
In 2019, the company's chief executive officer, Patrick Pouyanné, pledged that Total would make a €100 million contribution to the reconstruction of Notre-Dame cathedral after it was extensively damaged in a fire.
In 2020, the company confirmed a two-year sponsorship deal with CR Flamengo, its first partnership with a Brazilian football team.
See also
2005 Hertfordshire Oil Storage Terminal fire
2007 UK petrol contamination
Centre Scientifique et Technique Jean Féger, main technical and scientific research center for Total in Pau, France
ERAP
Fossil fuels lobby
Lindsey Oil Refinery
Notes
References
External links
Official website
Documents and clippings about TotalEnergies in the 20th Century Press Archives of the ZBW
Cold blob
The cold blob in the North Atlantic (also called the North Atlantic warming hole) describes a cold temperature anomaly of ocean surface waters, affecting the Atlantic Meridional Overturning Circulation (AMOC), which is part of the thermohaline circulation, and possibly related to global warming-induced melting of the Greenland ice sheet.
General
The AMOC is driven by differences in ocean temperature and salinity. The main proposed mechanism for the cold ocean surface temperature anomaly is that freshwater decreases the salinity of ocean water, and through this process prevents colder waters from sinking. The observed freshwater increase probably originates from Greenland ice melt.
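The salinity mechanism can be illustrated with a linearized equation of state for seawater. The coefficients below are typical textbook values chosen for illustration only, not figures from this article:

```python
# Linearized equation of state for seawater: density falls with temperature
# and rises with salinity. Coefficients are illustrative textbook values.
RHO0 = 1027.0        # reference density, kg/m^3
ALPHA = 2.0e-4       # thermal expansion coefficient, 1/K
BETA = 7.6e-4        # haline contraction coefficient, 1/psu
T0, S0 = 10.0, 35.0  # reference temperature (C) and salinity (psu)

def density(temp_c, sal_psu):
    """Seawater density under the linear equation of state above."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (sal_psu - S0))

# Cold, salty water is dense enough to sink; the same cold water
# freshened by 0.5 psu of meltwater is measurably lighter.
cold_salty = density(4.0, 35.0)
cold_fresh = density(4.0, 34.5)
print(cold_salty > cold_fresh)  # freshening reduces density, so less sinking
```

This is why meltwater input can suppress deep convection even without any change in temperature: the salinity term alone lowers the density of the surface water below that of the water beneath it.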
Research
2015 and earlier
Climate scientists Michael Mann of Penn State and Stefan Rahmstorf from the Potsdam Institute for Climate Impact Research suggested that the observed cold pattern during years of temperature records is a sign that the AMOC may be weakening. They published their findings and concluded that the AMOC circulation showed an exceptional slowdown during the last century, and that Greenland melt is a possible contributor. Tom Delworth of NOAA suggested that natural variability, namely the North Atlantic Oscillation and the Atlantic Multidecadal Oscillation acting through wind-driven ocean temperatures, is also a factor. A 2014 study by Jon Robson et al. from the University of Reading concluded of the anomaly that the results "...suggest that a substantial change in the AMOC is unfolding now." Another study, by Didier Swingedouw, concluded that the slowdown of the AMOC in the 1970s may have been unprecedented over the last millennium.
2016
A study published in 2016 by researchers from the University of South Florida, Canada and the Netherlands used GRACE satellite data to estimate freshwater flux from Greenland. They concluded that freshwater runoff is accelerating and could eventually disrupt the AMOC, which would affect Europe and North America.
Another study published in 2016 found further evidence for a considerable impact from sea level rise on the U.S. East Coast. The study confirms earlier research identifying the region as a hotspot for rising seas, with the potential for sea level rise rates 3–4 times higher than the global average. The researchers attribute the possible increase to a reduction in deep water formation caused by the AMOC slowdown, leaving more pockets of warmer water below the surface. Additionally, the study noted: "Our results suggest that higher carbon emission rates also contribute to increased [sea level rise] in this region compared to the global average".
Background
In 2005, British researchers noticed that the net flow of the northern Gulf Stream had decreased by about 30% since 1957. Coincidentally, scientists at Woods Hole had been measuring the freshening of the North Atlantic as Earth becomes warmer. Their findings suggested that precipitation increases in the high northern latitudes, and polar ice melts as a consequence. By flooding the northern seas with excessive fresh water, global warming could, in theory, divert the Gulf Stream waters that usually flow northward, past the British Isles and Norway, and cause them to instead circulate toward the equator. Were this to happen, Europe's climate would be seriously impacted.
Don Chambers from the USF College of Marine Science said, "The major effect of a slowing AMOC is expected to be cooler winters and summers around the North Atlantic, and small regional increases in sea level on the North American coast." James Hansen and Makiko Sato stated, "AMOC slowdown that causes cooling ~1°C and perhaps affects weather patterns is very different from an AMOC shutdown that cools the North Atlantic several degrees Celsius; the latter would have dramatic effects on storms and be irreversible on the century time scale." A downturn of the Atlantic meridional overturning circulation has also been tied to extreme regional sea level rise.
Measurements
Since 2004, the RAPID program has monitored the ocean circulation.
See also
Abrupt climate change
The Blob (Pacific Ocean)
Deglaciation
Physical impacts of climate change
References
External links
Extended lecture by Stefan Rahmstorf about AMOC slowdown (May 27, 2016)
A Nasty Surprise in the Greenhouse (video about the shutdown of the thermohaline circulation, 2015)
Blizzard Jonas and the slowdown of the Gulf Stream System (RealClimate January 24, 2016)
Peat in Finland
Finland is one of the last countries in the world still burning peat. Burning peat produces high greenhouse gas emissions and raises serious environmental concerns; its climate impact is comparable to, or worse than, that of brown coal (lignite), the lowest rank of coal. Peat is the most harmful energy source for global warming in Finland. According to the IEA, the Finnish subsidies for peat in 2007–2010 undermined the goal of reducing CO2 emissions and counteracted other environmental policies as well as the European Union Emissions Trading Scheme.
"White peat" is used in greenhouses and livestock farming.
Energy use
In 2021, an estimated 2 million tonnes of peat were burnt in Finland. The EU is helping to fund a just transition away from peat.
According to the national energy statistics, the energy use of peat in Finland grew rapidly from 0.5 TWh in 1975 to 4.7 TWh in 1980. Peat provided 19 TWh in 2005 and peaked at 28.4 TWh in 2007. In 2006, peat provided 25.3 TWh, of which 6.2 TWh was electricity, 6.1 TWh heating and 4.7 TWh industrial heat.
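For scale, the CO2 emissions implied by these energy figures can be roughly estimated with default fuel emission factors. The factors below are the widely used IPCC 2006 defaults (tonnes of CO2 per TJ of fuel energy), assumed here for illustration and not taken from this article; the calculation ignores plant-level details:

```python
# Rough CO2 estimate for fuel burning, using IPCC 2006 default
# emission factors in t CO2 per TJ (illustrative standard values).
EMISSION_FACTOR = {"peat": 106.0, "coal": 94.6, "natural_gas": 56.1}

TWH_TO_TJ = 3600.0  # 1 TWh = 3.6e15 J = 3600 TJ

def co2_megatonnes(energy_twh, fuel):
    """CO2 in megatonnes from burning `energy_twh` TWh of the given fuel."""
    return energy_twh * TWH_TO_TJ * EMISSION_FACTOR[fuel] / 1e6

# The 25.3 TWh of peat burnt in 2006 corresponds to roughly 9.7 Mt CO2.
print(round(co2_megatonnes(25.3, "peat"), 1))
```

The factor table also illustrates the comparison made above: per unit of energy, peat's default emission factor exceeds that of coal, and is nearly twice that of natural gas.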
Environment
Finland's environmental administration has a new project studying water pollution from peat collection areas, about which local people have complained. The new studies show 20-fold differences between samples. The environmental impact assessments count only the largest particles, which make up only about 10% of the total, and heavy rain showers have previously been ignored, even though evaluations indicate that a single rain shower can carry as many particles as the rest of the summer combined.
Environmental accident
The accidental release of a large volume of polluted water from a peat collection area in Lestijärvi in 1981 was an environmental wake-up call.
Neova
Neova (formerly Vapo Oy) is the major peat producer in Finland. In 1984, Vapo was 100% owned by the Finnish state. Finnforest, part of Metsä Group, owned 33% of Vapo in 2002–2005 and 49.9% in 2005–2009. In 2009, ownership of Vapo was restructured under Suomen Energiavarat Oy, owned by EPV Energy (formerly Etelä-Pohjanmaan Voima Oy) and several peat-using energy companies. In June 2009, Metsäliitto (Finnforest) sold its share to Suomen Energiavarat Oy. In 2012, the owners included several municipal energy companies, e.g. those of Helsinki, Vantaa, Oulu, Rauma, Seinäjoki, Vaasa and Lahti.
Vapo has 100 pending applications for new peat collection areas. Environmental counsellor Antti Ylitalo has written public appeals to protect endangered natural bogs. Vapo filed a disqualification appeal against Ylitalo, claiming that his statements created a conflict of interest in the official evaluations of Vapo's applications; Ylitalo felt that the appeal infringed his freedom of speech.
According to VTT (2005), the major producers were Vapo Oy (78%) and Turveruukki Oy (10%), followed by Fortum Power and Heat, Kuopion Energia, Alholmens Kraft and Vaskiluodon Voima.
After peat production ends, Suo Oy, 100% owned by Vapo, cultivates reed canary grass on the cut-over bogs. Suo Oy owns 6,500 hectares (16,000 acres) of agricultural land and received €673,000 in agricultural support in 2010, the fifth-biggest agricultural support payment in Finland, granted for energy crop cultivation. Vapo (Suo Oy) has received in total €10 million in public financing over ten years for cultivating reed canary grass, which is burned for energy together with wood and peat.
VTT case 2010
VTT used to be a state-financed research organization, but today it gets a third of its funds from the state and two thirds from companies. This may create conflicts of interest, or the appearance thereof.
VTT principles for communication
In the summer of 2010, a research specialist from VTT criticized the reliability of the electricity market report prepared for the parliament. VTT gave the researcher a warning, since according to VTT the statement was given in VTT's name and the researcher was not authorized to give it. The researcher maintained that they had acted privately and had only pointed out general contradictions in the electricity market report.
VTT: Energy tax and scientific freedom
The Finnish government ordered an energy evaluation report from VTT for the energy tax reform, published in 2010. The report studied carbon taxes for oil, gas, coal and peat, but VTT did not give a tax recommendation for peat based on its actual carbon load. The responsible director was Satu Helynen, whose close relationships with peat industry associations were later revealed.
Helynen forbade a VTT scientist from writing about peat in autumn 2010. According to the scientist, the leaders of VTT tried to prevent researchers from publicly expressing dissenting opinions about the peat energy tax; Helynen considered the writing one-sided and demanded another perspective. The issue became public in August 2010 in an STT (Finnish News Service) interview, in which a group of VTT scientists anonymously accused VTT of applying pressure.
Satu Helynen and peat industry
Satu Helynen was a member of the board of Enas during 2005–2009, a company owned by VTT and two peat energy companies, Vapo and Jyväskylän Energia. She was the chairman of the Finbio ry association until 2009; Finbio was claimed to cooperate with the Association of Finnish Peat Industries and the Peat Producers Association in lobbying members of parliament for peat.
Helynen was also a member of the executive board of the International Peat Society (IPS) in 2008–2012. The IPS is a global non-profit organisation for all interests related to peat and peatlands, with scientific and corporate members in 42 countries worldwide, promoting the "wise use" of peatlands. Its executive board consists of both university and company directors and is elected by an annual assembly of national member associations. In Finland, Suoseura acts as the national committee of the IPS; in addition to approximately 400 individual members from universities and other organisations, its corporate members include VTT, Vapo, Turveruukki and the Bioenergy Association of Finland.
VTT does not publish or monitor any economic or board-membership commitments of its scientists, as at least the board members of public companies are required to do. In medical publications and conferences, such declarations of interest are standard. Kari Larjava, a VTT expert in biogas and Helynen's superior, was not familiar with the IPS but considered its membership no relevant problem with regard to commitments.
New rules
In August 2010 VTT published new principles on its employees' freedom to make public statements. According to Olli Mäenpää, Professor of Administrative Law at the University of Helsinki, the new principles conflict with the Finnish constitutional freedom of expression. The new rules deny employees the right to make any private statements, verbal or written, in VTT's field of activity without permission from VTT's leadership, and permission is granted only when there is no conflict. In order to guarantee the consistency of research and the reliability of its content, scientists are expected to refrain from all public criticism of the content of VTT publications after publication; the ethical rules oblige the research organisation to prevent activities that would mislead the public. Finnish law guarantees everyone freedom of expression as a private person, which no organisation or person may restrict; a democratic society has no need to restrict free communication. This freedom covers peaceful communication, excluding incitement to crime or deliberate troublemaking. In practice, VTT and some other employers, including Sanoma Oy (Helsingin Sanomat), have given outsiders the impression of a willingness to restrict this freedom, which is included in the United Nations' declaration of human rights.
It "makes your hair stand on end", said Professor Mäenpää. In legal proceedings of this kind, the employer tends to lose the case. "In practice, attempts to restrict basic human rights will label these organisations with a negative reputation that is strongly disapproved of in public." The ability to question facts and assumptions and to evaluate the argumentation of reports is a cornerstone of scientific discovery, innovative ideas and the sustainable development of society, including its politics. Freedom of expression is a cornerstone of democracy and of international human rights declarations.
Definition
As of 2006, the Finnish Ministry of Trade and Industry (KTM) described peat as a "slowly renewable biomass fuel". The definition comes from the report Turpeen asema Suomen kasvihuonekaasutaseissa (2000) by Patrick Crill (USA), Ken Hargreaves (UK) and Atte Korhola (FIN), commissioned by KTM. The report has been criticised by several scientists; for example, the environment experts and university academics Raimo Heikkilä (Oulu), Tapio Lindholm (Helsinki) and Heikki Simola (Joensuu) wrote a critical review in 2007. In short: burning peat releases carbon that was bound long before the industrial revolution. The renewal of peat layers over thousands of years is insignificant compared with the carbon emissions of peat fuel with regard to climate change, and it is highly questionable to claim that a human-created problem would be compensated by a natural process elsewhere. For greenhouse gas inventories, natural peatlands are essentially neutral, since the storage of carbon is balanced by the release of methane. Agricultural land use will never permanently restore carbon to the ecosystem equal to the original peat layers. With respect to climate warming, there is no difference between burning peat and burning coal. In 1996, the Intergovernmental Panel on Climate Change (IPCC) classified peat as a "solid fossil"; ten years later it declined to do so again, but also refused to classify it as biomass, because its emissions are on a par with those of fossil fuels. Professor Atte Korhola of the University of Helsinki is married to Eija-Riitta Korhola, a conservative (Kokoomus) politician and Member of the European Parliament since 2004, who received campaign financing from the nuclear power industry for the 2004 European Parliament election in Finland; many Finnish nuclear power companies are also peat industry companies.
International
The United Nations Environment and Sustainable Development World Commission Finland stated in 1989 that Finland should not increase peat production in the 1990s, based on its global warming emissions, and should discontinue promoting peat energy use and the draining of peat bogs abroad, for example in Indonesia, Brunei and Uganda.
Global warming
Peat and hard coal are the most harmful energy sources for global warming in Finland; according to VTT studies, peat is often the most harmful of all.
The use of peat for energy and land is responsible for a third of all Finnish climate change emissions, covering energy use, agriculture and the digging of drainage ditches. Ditching peat forests is also one of the major causes of biodiversity loss in Finland. According to Statistics Finland, the use of peat as energy created 8 million tonnes of CO2 emissions in 2018, including emissions from peat storage and peat production areas; ditched peatland fields in Finland create 6 million tonnes of CO2 emissions annually, and ditched forest lands a further 7 million tonnes annually.
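As a rough arithmetic check, the three Statistics Finland figures above can be summed and compared against a national total. The peat-related figures are from this article; the ~56 Mt CO2-eq national total for 2018 is an outside ballpark assumption used only for illustration:

```python
# Sum the peat-related emission figures quoted above (Mt CO2).
peat_co2_mt = {
    "energy use of peat (2018)": 8,
    "ditched peatland fields (annual)": 6,
    "ditched forest lands (annual)": 7,
}
total_peat = sum(peat_co2_mt.values())

# Assumed national total for 2018 (Mt CO2-eq) -- not from the article.
assumed_national_total = 56
share = total_peat / assumed_national_total

print(f"peat-related: {total_peat} Mt CO2")
print(f"share of assumed national total: {share:.1%}")
```

Under that assumption the peat-related total of 21 Mt comes to a bit over a third of national emissions, consistent with the "a third" claim.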
Carbon storage
The carbon storage of Finland comprises roughly 6,000 million tonnes in bogs, 1,300 million tonnes in forest soil and 800 million tonnes in trees. According to senior lecturer Heikki Simola (University of Eastern Finland), the carbon load from the ditching of bogs is 6,000 tonnes per person annually. In southern Finland the majority of swamps have been ditched for forestry. The Kyoto Protocol does not consider emissions from nature or from land-use changes.
Wildlife and biodiversity
25% of Finnish plant species grow on bogs, and a third of Finnish bird species live around bog areas at some stage. Biologically there are 60–70 types of bog; the Finnish language distinguishes the differences with terms such as letto, neva, räme, korpi and palsa. More than half of the Finnish bog area had been drained by 2008. Peat extraction is the most drastic intervention in wild nature: during extraction, bogs are drained and a layer of peat several metres deep is stripped away. Forest is often planted in the area afterwards. Regeneration is possible, but complete regeneration of a bog takes tens of thousands of years. A 2002 calculation in the USA (Science 297: 950, 2002) found wetlands to be economically more valuable than agricultural fields: increasing global support for biodiversity, including bogs, from $6.5 billion to $45 billion could yield an economic benefit of $4,400–5,200 billion worldwide.
New investments
Peat was the most popular energy source for new energy investments in Finland in 2005–2015: of the new energy plants starting up in that period, peat accounts for 36% of the energy source and hard coal for 11%, 47% combined. The major CO2-emitting peat plants during 2005–2015 are (kt CO2): PVO 2,700; Jyväskylän Energia 561; Etelä-Pohjanmaan Voima Oy (EPV Energia) 374; Kuopion Energia 186; UPM Kymmene 135; and Vapo 69. EPV Energia is a partner in the TVO nuclear plants, and Jyväskylän Energia and Kuopion Energia are partners in the Fennovoima nuclear plant in Finland. According to the IEA country report, the Finnish subsidies for peat undermine the goal of reducing CO2 emissions and counteract other environmental policies as well as the European Union emissions trading scheme; the IEA recommends adhering to the timetable to phase out the peat subsidies in 2010. "To encourage sustained production of peat in the face of negative incentives from the European Union's emissions trading scheme for greenhouse gases, Finland has put in place a premium tariff scheme to subsidise peat. The premium tariff is designed to directly counter the effect of the European Union's emissions trading scheme." In April 2022 Neova said it would restart cutting peat for energy in Finland, due to the reduction in wood imports from Russia.
See also
Energy in Finland
== References == |
florida mangroves | The Florida mangroves ecoregion, of the mangrove forest biome, comprises an ecosystem along the coasts of the Florida peninsula and the Florida Keys. Four major species of mangrove populate the region: red mangrove, black mangrove, white mangrove, and buttonwood. The mangroves live in the coastal zones of the more tropical southern parts of Florida, as mangroves are particularly vulnerable to frost. Mangroves provide important habitat, serving both as fish nurseries and as brackish-water habitat for birds and other coastal species.
Though climate change is expected to extend the mangrove range further north, sea level rise, extreme weather and other changes related to climate change may endanger existing mangrove populations. Other threats include development and other human disruption.
Florida's mangrove species
The Florida mangroves ecoregion includes three true mangrove species:
Rhizophora mangle — red mangrove. Red mangroves are characterized by a dendritic network of aerial prop roots extending into the soil, which allows them to live in anaerobic conditions by providing gas exchange. They attain 82–125 feet in height in deltas and 26–33 feet along shorelines. The bark is gray on the outside with a red interior. These trees also bear small white, wind-pollinated flowers and 10–12-inch-long, pencil-shaped seeds.
Avicennia germinans — black mangrove. Black mangrove trees grow to heights of up to 133 feet, averaging 66 feet. They are characterized by vertically erect aerating branches (pneumatophores) extending up to 20 cm above the soil. The bark is dark and scaly, and the upper surface of the leaves is often covered with salt excreted by the plant. The tree has white, bilaterally symmetrical flowers pollinated by Hymenoptera; they are the source of mangrove honey. The germinated seed is similar in shape and size to a lima bean. Younger black mangrove trees are shade-intolerant but become more shade-tolerant as they mature.
Laguncularia racemosa — white mangrove. White mangrove trees grow up to 45 feet in height and tend to have a more erect form than the other species. They develop erect, blunt-tipped pneumatophores when growing in anaerobic conditions. The bark is white and relatively smooth, and the leaves are oval-shaped and flattened. Small yellowish flowers are borne on the terminal ends of the branches and may germinate into football-shaped propagules, although this may not occur in the northern part of the range.
Conocarpus erectus — buttonwood (one species that is variously classified as a mangrove or a mangrove associate)Buttonwoods grow 39 to 46 feet tall but do not produce a true propagule in Florida. Tiny brownish flowers are located at the terminal ends of the branches forming a seed cluster known as the button. These trees are able to grow in areas seldom inundated by tidal water. Two glands are located at the apex of the petiole (leaf stalk) and excrete excess salts and extrafloral nectar.
Zonation
All three mangrove species flower in the spring and early summer, and propagules fall from late summer through early autumn. These plants have differing adaptations to conditions along coasts, and are generally found in partially overlapping bands or zones roughly parallel to the shoreline. The red mangrove grows closest to open water; its multiple prop roots may help to stabilize the soil around its roots. Further inland grows the black mangrove, which lacks prop roots but has pneumatophores, which grow up from the roots to above the water level. The white mangrove grows further inland still, and may have prop roots and/or pneumatophores, depending on conditions where it is growing. The buttonwood grows in shallow brackish water, in Florida swamps, or on dry land (the furthest inland).
Reproductive strategy
Mangroves have a reproductive strategy that is unique among plants: like mammals, they are viviparous, bringing forth live young.
Instead of dormant seeds, they produce propagules that begin embryonic development while still attached to the tree and only release at the appropriate time into water. Once released from the tree they require various dispersal times or "obligate dispersal periods" (5–40 days depending upon the species) where the embryonic development continues. Once a favorable site is found there is an "obligate stranding period" before a tree emerges and begins to grow.
Distribution
Florida mangrove plant communities covered an estimated 430,000 to 540,000 acres (1,700 to 2,200 km2) in Florida in 1981. Ninety percent of the Florida mangroves are in southern Florida, in Collier, Lee, Miami-Dade and Monroe Counties.
Approximately 280,000 acres (1,100 km2) of mangrove forests are in the hands of the Federal, State and local governments, and of private, non-profit organizations. Most of those acres are in Everglades National Park. Mangroves cover a wide band all along the southern end of the Florida peninsula facing on Florida Bay, from Key Largo across to close to Flamingo, then inland behind the beaches and marl prairies of Cape Sable and all around Whitewater Bay. From Whitewater Bay, a broad band of mangroves extends up the Gulf coast to Marco Island, including the Ten Thousand Islands.
Mangroves also extend throughout the Florida Keys, although coverage has been reduced due to development. Florida Bay is dotted with small islands, which are often no more than mud flats or shoals more or less covered by mangroves. Biscayne Bay also has extensive mangroves, but the northern part of the Bay has been largely cleared of mangroves to make way for urban development. Mangrove coverage is limited elsewhere, with the largest areas in the Indian River Lagoon on the east coast, and the Caloosahatchee River, Pine Island Sound and Charlotte Harbor estuaries and Tampa Bay on the west coast.
Preferred climate
Mangroves are tropical plants that are killed by freezing temperatures. They can range about halfway up the coast of the Florida peninsula thanks to the mild winter climate and the moderating effect of the warm waters of the Gulf of Mexico on the west coast and of the Gulf Stream and Atlantic Ocean on the east coast. The Florida mangrove community is found as far north as Cedar Key on the Gulf coast of Florida, and as far north as the Ponce de Leon Inlet on the Atlantic coast. Black mangroves can regrow from their roots after being killed back by a freeze, and are found by themselves a little further north, to Jacksonville on the east coast and along the Florida Panhandle on the Gulf coast. Most of Florida is subtropical and thus not ideal for mangroves, so the trees tend to be shorter and the leaves smaller in northern and central Florida than in tropical regions. In frost-free deep south Florida and the Florida Keys, the tropical climate allows mangroves to grow larger.
Habitat destruction
Human activity has impacted the mangrove ecoregion in Florida. While the coverage of mangroves at the end of the 20th century is estimated to have decreased only 5% from a century earlier, some localities have seen severe reductions. The Lake Worth Lagoon lost 87% of its mangroves in the second half of the 20th century, leaving a remnant of just 276 acres (1.12 km2). Tampa Bay, home to the busy Port of Tampa, lost over 44% of its wetlands, including mangroves and salt marshes, during the 20th century. Three-quarters of the wetlands along the Indian River Lagoon, including mangroves, were impounded for mosquito control during the 20th century. As of 2001, natural water flow was being restored to some of the wetlands.
Associated fauna and flora
Fish
The Florida mangrove system is an important habitat for many species. It provides nursery grounds for young fish, crustaceans and mollusks, including species of sport and commercial importance. Many fish feed in the mangrove forests, including snook (Centropomus undecimalis), gray or mangrove snapper (Lutjanus griseus), schoolmaster snapper (Lutjanus apodus), tarpon, jack, sheepshead, red drum, hardhead silverside (Atherinomorus stipes), juvenile blue angelfish (Holocanthus bermudensis), juvenile porkfish (Anisotremus virginicus), lined seahorse (Hippocampus erectus), great barracuda (Sphryaena barracuda), scrawled cowfish (Lactophrys quadricornis) and permit (Trachinotus falcatus), as well as shrimp and clams. An estimated 75% of the game fish and 90% of the commercial fish species in south Florida depend on the mangrove system.
Birds
The branches of mangroves serve as roosts and rookeries for coastal and wading birds, such as the brown pelican (Pelecanus occidentalis), roseate spoonbill (Ajaia ajaja), magnificent frigatebird (Fregata magnificens), double-crested cormorant (Phalacrocorax auritus), belted kingfisher (Megaceryle alcyon), brown noddy (Anous stolidus), great white heron and Wurdemann's heron (color phases of the great blue heron, Ardea herodias), osprey (Pandion haliaetus), snowy egret (Egretta thula), green heron (Butorides striatus), reddish egret (Dichromanassa rufescens) and greater yellowlegs (Tringa melanoleuca).
Endangered species
Florida mangroves are also home to the following endangered species:
Smalltooth sawfish (Pristis pectinata)
American crocodile (Crocodylus acutus)
Hawksbill sea turtle (Eretmochelys imbricata)
Atlantic ridley sea turtle (Lepidochelys kempii)
Eastern indigo snake (Drymarchon corais)
Atlantic saltmarsh snake (Nerodia clarkii taeniata)
Southern bald eagle (Haliaeetus leucocephalus leucocephalus)
Peregrine falcon (Falco peregrinus)
Barbados yellow warbler (Dendroica petechia petechia)
Key deer (Odocoileus virginianus clavium)
West Indian manatee (Trichechus manatus)
Other fauna
Above the water mangroves also shelter and support snails, crabs, and spiders. Below the water's surface, often encrusted on the mangrove roots, are sponges, anemones, corals, oysters, tunicates, mussels, starfish, crabs, and Florida spiny lobsters (Panulirus argus).
Flora
The mangrove branches and trunks support various epiphytes. Below the water, spaces protected by splayed mangrove roots can shelter seagrasses.
Effects of climate change
Climate change is a complex issue with numerous variables. The exact severity (such as how much global temperatures will increase) is impossible to predict. The effects of climate change on a species are even more difficult to discern. Despite the intricacy, scientists have formulated several hypotheses of the effects of climate change on the mangroves of southern Florida. The overall hypothesis is that mangroves are vulnerable to climate change, which will affect this ecosystem via three main mechanisms: sea level rise, decreased cold weather events, and increased storm severity. A rise in sea level is expected to affect the range of mangroves, the decrease in cold weather events will allow the range of mangroves to shift further north, and the increase in the severity of storms is anticipated to change the species composition and morphology of the mangroves.
Sea level rise
Between 1870 and 2004, sea level rose approximately 8 inches in total, or about 1.46 mm/yr, and studies show that mangroves in southern Florida have expanded their territory 3.3 km inland since the 1940s. However, this inland expansion often comes at the expense of freshwater marsh and swamp habitats such as the Everglades; as climate change continues, this could negatively affect wildlife that depends on freshwater habitats rather than mangrove habitats. The figure at the right shows projections of mangrove distributions under low (15 cm), moderate (45 cm), and severe (95 cm) sea-rise scenarios by the year 2100. The IPCC Fifth Assessment Report, finalized in 2014, predicts 52–98 cm of sea level rise by 2100, and the report has often been criticized as underestimating the severity of climate change, making the moderate (45 cm) or severe (95 cm) scenarios even more likely. Although mangroves are currently keeping pace with sea level rise, at rates greater than 2.3 mm/yr there is potential for mangrove ecosystem failure. Such failure is perhaps inevitable for mangroves inhabiting low-lying islands, which will be inundated. Sea level rise is expected to accelerate in the future, and there is already some indication of this beginning to occur. However, there are examples from the past in which mangroves have both collapsed and survived at rates greater than 2.3 mm/yr. Mangroves on continental coasts, rather than low-lying islands, are less vulnerable and have greater opportunities to occupy new habitat.
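The rate figures above can be sanity-checked with a short unit conversion, using only the numbers already quoted (8 inches over 1870–2004, and the 2.3 mm/yr failure threshold):

```python
# Convert the cumulative rise into an average annual rate and compare
# it with the ~2.3 mm/yr threshold for potential mangrove failure.
INCH_MM = 25.4
total_rise_mm = 8 * INCH_MM        # ~8 inches between 1870 and 2004
years = 2004 - 1870
rate = total_rise_mm / years       # mm per year

print(f"average rate: {rate:.2f} mm/yr")   # ~1.52 mm/yr, near the cited 1.46
print("below failure threshold:", rate < 2.3)
```

The straight conversion gives roughly 1.5 mm/yr, close to the 1.46 mm/yr cited in the text and well below the 2.3 mm/yr threshold.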
Temperature shifts
Southern Florida's mangroves are tropical species limited to the tip of the Florida peninsula and the Florida Keys by their climate preference: the upper portion of Florida has a subtropical climate whose cold weather events, such as freezes, hinder mangrove growth there. Twenty-eight years of satellite imagery show that mangroves have shifted their range northward in Florida in response to less harsh winters and less frequent cold events. This northward shift is distinct from the inland migration driven by sea level rise, although both are caused by climate change.
Increased storm severity
With climate change, hurricanes in southern Florida are projected to become more severe, causing mangrove populations to be shorter, of smaller diameter, and to contain a higher proportion of the red mangrove species. Mangroves could be further threatened if the return time of major storms exceeds the time needed for reestablishment. In addition, mangroves have been shown to reduce the flow pressure of water surges associated with tsunamis, hurricanes and similar events, thereby protecting coastlines. The loss of mangroves could therefore be detrimental to coastal communities exposed to increased storm surges.
Ways to promote resilience
Due to the potential acceleration of sea level rise and increased storm severity under climate change, the mangroves of southern Florida may be in jeopardy. This has implications not only for the mangrove forests themselves but also for the freshwater habitats they encroach upon, and for the humans and other animals that depend on both these ecosystems for resources and protection. While local managers can do little to prevent large-scale changes such as sea rise and increased storm severity, according to the International Union for Conservation of Nature and The Nature Conservancy there are ten strategies land managers can apply to increase viability and promote resilience.
These are:
Apply risk-spreading strategies to address the uncertainties of climate change. (A range of mangrove habitats should be protected to capture different community types to ensure replenishment following disasters.)
Identify and protect critical areas that are naturally positioned to survive climate change.
Manage human stresses on mangroves (such as waste, sediment, and nutrient runoff from urban areas and human development).
Establish greenbelts and buffer zones to allow for mangrove migration in response to sea-level rise, and to reduce impacts from adjacent land-use practices.
Restore degraded areas that have demonstrated resistance or resilience to climate change.
Understand and preserve connectivity between mangroves and sources of freshwater and sediment, and between mangroves and their associated habitats like coral reefs and seagrasses (mangroves provide services to coral reef and seagrass systems so coupling them and preserving them together helps the other ecosystem succeed).
Establish baseline data and monitor the response of mangroves to climate change.
Implement adaptive strategies to compensate for changes in species ranges and environmental conditions (have flexible management plans).
Develop alternative livelihoods for mangrove dependent communities as a means to reduce mangrove destruction (charcoal production using coconut shells instead of mangroves, and mangrove honey production).
Build partnerships with a variety of stakeholders to generate the necessary finances and support to respond to the impacts of climate change.
See also
Ecological values of mangroves
List of mangrove ecoregions
Mangrove tree distribution
Coastal biogeomorphology
== References == |
methane clathrate | Methane clathrate (CH4·5.75H2O) or (8CH4·46H2O), also called methane hydrate, hydromethane, methane ice, fire ice, natural gas hydrate, or gas hydrate, is a solid clathrate compound (more specifically, a clathrate hydrate) in which a large amount of methane is trapped within a crystal structure of water, forming a solid similar to ice. Originally thought to occur only in the outer regions of the Solar System, where temperatures are low and water ice is common, significant deposits of methane clathrate have been found under sediments on the ocean floors of the Earth. Methane hydrate is formed when hydrogen-bonded water and methane gas come into contact at high pressures and low temperatures in oceans.
Methane clathrates are common constituents of the shallow marine geosphere and they occur in deep sedimentary structures and form outcrops on the ocean floor. Methane hydrates are believed to form by the precipitation or crystallisation of methane migrating from deep along geological faults. Precipitation occurs when the methane comes in contact with water within the sea bed subject to temperature and pressure. In 2008, research on Antarctic Vostok Station and EPICA Dome C ice cores revealed that methane clathrates were also present in deep Antarctic ice cores and record a history of atmospheric methane concentrations, dating to 800,000 years ago. The ice-core methane clathrate record is a primary source of data for global warming research, along with oxygen and carbon dioxide.
Methane clathrates used to be considered a potential source of abrupt climate change, following the clathrate gun hypothesis. In this scenario, heating causes catastrophic melting and breakdown of primarily undersea hydrates, leading to a massive release of methane and accelerating warming. Current research shows that hydrates react very slowly to warming, and that it is very difficult for methane to reach the atmosphere after dissociation. Some active seeps instead act as a minor carbon sink: with the majority of the methane dissolved underwater and feeding methanotroph communities, the area around a seep also becomes more suitable for phytoplankton. As a result, methane hydrates are no longer considered one of the tipping points in the climate system, and according to the IPCC Sixth Assessment Report, no "detectable" impact on global temperatures will occur through this mechanism in this century. Over several millennia, a more substantial 0.4–0.5 °C (0.72–0.90 °F) response may still be seen.
General
Methane hydrates were discovered in Russia in the 1960s, and studies for extracting gas from it emerged at the beginning of the 21st century.
Structure and composition
The nominal methane clathrate hydrate composition is (CH4)4(H2O)23, or 1 mole of methane for every 5.75 moles of water, corresponding to 13.4% methane by mass, although the actual composition depends on how many methane molecules fit into the various cage structures of the water lattice. The observed density is around 0.9 g/cm3, which means that methane hydrate will float to the surface of the sea or of a lake unless it is bound in place by being formed in or anchored to sediment. One litre of fully saturated methane clathrate solid therefore contains about 120 grams of methane (or around 169 litres of methane gas at 0 °C and 1 atm); equivalently, one cubic metre of methane clathrate releases about 160 cubic metres of gas. Methane forms a "structure-I" hydrate with two dodecahedral (12 vertices, thus 12 water molecules) and six tetradecahedral (14 water molecules) water cages per unit cell. (Because of the sharing of water molecules between cages, there are only 46 water molecules per unit cell.) This compares with a hydration number of 20 for methane in aqueous solution. A methane clathrate MAS NMR spectrum recorded at 275 K and 3.1 MPa shows a peak for each cage type and a separate peak for gas-phase methane. In 2003, a clay–methane hydrate intercalate was synthesized in which a methane hydrate complex was introduced at the interlayer of a sodium-rich montmorillonite clay. The upper temperature stability of this phase is similar to that of structure-I hydrate.
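The quoted figures (13.4% methane by mass, ~120 g and ~169 L of methane per litre of hydrate) follow directly from the CH4·5.75H2O stoichiometry and the ideal gas law. A short sketch, using standard molar masses:

```python
# Verify the composition figures from structure-I stoichiometry (CH4·5.75H2O):
# mass fraction of methane, grams of methane per litre of solid hydrate,
# and the gas volume released at 0 °C and 1 atm (ideal gas approximation).
M_CH4, M_H2O = 16.04, 18.015           # molar masses, g/mol
mass_fraction = M_CH4 / (M_CH4 + 5.75 * M_H2O)
print(f"methane by mass: {mass_fraction:.1%}")            # ~13.4%

hydrate_density = 0.9                  # g/cm3, as quoted in the text
g_methane_per_litre = 1000 * hydrate_density * mass_fraction
print(f"methane per litre of hydrate: {g_methane_per_litre:.0f} g")  # ~121 g

R, T, P = 0.082057, 273.15, 1.0        # L·atm/(mol·K), K, atm
moles = g_methane_per_litre / M_CH4
gas_volume_L = moles * R * T / P
print(f"gas volume at 0 °C, 1 atm: {gas_volume_L:.0f} L")  # ~169 L
```

The small spread between ~160 and ~169 m3 of gas per m3 of hydrate in the text reflects the assumed density and cage occupancy rather than any inconsistency.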
Natural deposits
Methane clathrates are restricted to the shallow lithosphere (i.e. < 2,000 m depth). Furthermore, necessary conditions are found only in either continental sedimentary rocks in polar regions where average surface temperatures are less than 0 °C; or in oceanic sediment at water depths greater than 300 m where the bottom water temperature is around 2 °C. In addition, deep fresh water lakes may host gas hydrates as well, e.g. the fresh water Lake Baikal, Siberia. Continental deposits have been located in Siberia and Alaska in sandstone and siltstone beds at less than 800 m depth. Oceanic deposits seem to be widespread in the continental shelf (see Fig.) and can occur within the sediments at depth or close to the sediment-water interface. They may cap even larger deposits of gaseous methane.
Oceanic
Methane hydrate can occur in various forms: massive, dispersed within pore spaces, nodules, veins/fractures/faults, and layered horizons. It is generally unstable at standard pressure and temperature conditions; upon dissociation, 1 m3 of methane hydrate yields about 164 m3 of methane and 0.87 m3 of fresh water. There are two distinct types of oceanic deposits. The most common is dominated (> 99%) by methane contained in a structure I clathrate and is generally found at depth in the sediment. Here, the methane is isotopically light (δ13C < −60‰), which indicates that it is derived from the microbial reduction of CO2. The clathrates in these deep deposits are thought to have formed in situ from the microbially produced methane, since the δ13C values of the clathrate and the surrounding dissolved methane are similar. However, it is also thought that freshwater used in the pressurization of oil and gas wells in permafrost and along continental shelves worldwide combines with natural methane to form clathrate at depth and pressure, since methane hydrates are more stable in freshwater than in saltwater. Local variations may be widespread, since the act of forming hydrate, which extracts pure water from saline formation waters, can lead to local and potentially significant increases in formation water salinity. Hydrates normally exclude the salt in the pore fluid from which they form; thus they exhibit high electric resistivity, like ice, and sediments containing hydrates have higher resistivity than sediments without gas hydrates (Judge [67]). These deposits are located within a mid-depth zone around 300–500 m thick in the sediments (the gas hydrate stability zone, or GHSZ), where they coexist with methane dissolved in the fresh (not salt) pore waters. Above this zone methane is present only in its dissolved form, at concentrations that decrease towards the sediment surface; below it, methane is gaseous.
At Blake Ridge on the Atlantic continental rise, the GHSZ started at 190 m depth and continued to 450 m, where it reached equilibrium with the gaseous phase. Measurements indicated that methane occupied 0–9% by volume in the GHSZ, and ~12% in the gaseous zone. In the less common second type, found near the sediment surface, some samples have a higher proportion of longer-chain hydrocarbons (< 99% methane) contained in a structure II clathrate. Carbon from this type of clathrate is isotopically heavier (δ13C is −29 to −57‰) and is thought to have migrated upwards from deep sediments, where the methane was formed by thermal decomposition of organic matter. Examples of this type of deposit have been found in the Gulf of Mexico and the Caspian Sea. Some deposits have characteristics intermediate between the microbially and thermally sourced types and are considered to have formed from a mixture of the two.
The methane in gas hydrates is dominantly generated by microbial consortia degrading organic matter in low oxygen environments, with the methane itself produced by methanogenic archaea. Organic matter in the uppermost few centimeters of sediments is first attacked by aerobic bacteria, generating CO2, which escapes from the sediments into the water column. Below this region of aerobic activity, anaerobic processes take over, including, successively with depth, the microbial reduction of nitrite/nitrate, metal oxides, and then sulfates are reduced to sulfides. Finally, methanogenesis becomes a dominant pathway for organic carbon remineralization.
If the sedimentation rate is low (about 1 cm/yr), the organic carbon content is low (about 1%), and oxygen is abundant, aerobic bacteria can use up all the organic matter in the sediments faster than oxygen is depleted, so lower-energy electron acceptors are not used. But where sedimentation rates and organic carbon content are high, which is typically the case on continental shelves and beneath western boundary current upwelling zones, the pore water in the sediments becomes anoxic at depths of only a few centimeters or less. In such organic-rich marine sediments, sulfate becomes the most important terminal electron acceptor because of its high concentration in seawater, but it too is depleted by a depth of centimeters to meters. Below this, methane is produced. This production of methane is a rather complicated process, requiring a highly reducing environment (Eh −350 to −450 mV) and a pH between 6 and 8, as well as a complex syntrophic consortium of different varieties of archaea and bacteria, although it is only the archaea that actually emit methane.
In some regions (e.g., the Gulf of Mexico and the Joetsu Basin), the methane in clathrates may be at least partially derived from the thermal degradation of organic matter (e.g., petroleum generation), with oil even forming an exotic component within the hydrate itself that can be recovered when the hydrate is dissociated. The methane in clathrates typically has a biogenic isotopic signature and a highly variable δ13C (−40 to −100‰), with an approximate average of about −65‰. Below the zone of solid clathrates, large volumes of methane may form bubbles of free gas in the sediments.
The presence of clathrates at a given site can often be determined by observation of a "bottom simulating reflector" (BSR), a seismic reflection at the interface between normal sediments and the clathrate stability zone, caused by the unequal densities of normal sediments and those laced with clathrates.
Gas hydrate pingos have been discovered in the Barents Sea, in the Arctic Ocean. Methane bubbles from these dome-like structures, with some of these gas flares extending close to the sea surface.
Reservoir size
The size of the oceanic methane clathrate reservoir is poorly known, and estimates of its size have decreased by roughly an order of magnitude per decade since it was first recognized, during the 1960s and 1970s, that clathrates could exist in the oceans. The highest estimates (e.g., 3×10^18 m³) were based on the assumption that fully dense clathrates could litter the entire floor of the deep ocean. Improvements in the understanding of clathrate chemistry and sedimentology have revealed that hydrates form in only a narrow range of depths (continental shelves), at only some locations in the range of depths where they could occur (10–30% of the gas hydrate stability zone), and typically at low concentrations (0.9–1.5% by volume) at the sites where they do occur. Recent estimates constrained by direct sampling suggest the global inventory occupies between 1×10^15 and 5×10^15 cubic metres (0.24 and 1.2 million cubic miles). This estimate, corresponding to 500–2,500 gigatonnes of carbon (Gt C), is smaller than the 5,000 Gt C estimated for all other geo-organic fuel reserves but substantially larger than the ~230 Gt C estimated for other natural gas sources. The permafrost reservoir has been estimated at about 400 Gt C in the Arctic, but no estimates have been made of possible Antarctic reservoirs. These are large amounts; in comparison, the total carbon in the atmosphere is around 800 gigatonnes (see Carbon: Occurrence).
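As a rough consistency check on the figures above, the volume-to-carbon conversion can be sketched as follows. This assumes the quoted inventory is a volume of methane gas at standard conditions and that the gas behaves ideally; both are interpretations on my part, not statements from the text.

```python
# Back-of-the-envelope conversion: methane gas volume (m^3 at STP) -> Gt C.
MOLAR_VOLUME = 0.0224      # m^3 per mol of ideal gas at STP (approximate)
CARBON_MOLAR_MASS = 0.012  # kg of carbon per mol (one C atom per CH4)

def methane_volume_to_gtc(volume_m3: float) -> float:
    """Gigatonnes of carbon contained in a volume of methane gas at STP."""
    moles = volume_m3 / MOLAR_VOLUME
    kg_carbon = moles * CARBON_MOLAR_MASS
    return kg_carbon / 1e12  # kg -> gigatonnes

print(round(methane_volume_to_gtc(1e15)))  # ~536 Gt C
print(round(methane_volume_to_gtc(5e15)))  # ~2679 Gt C
```

The results (roughly 540 and 2,700 Gt C) line up with the 500–2,500 Gt C range quoted in the text.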
These modern estimates are notably smaller than the 10,000 to 11,000 Gt C (2×10^16 m³) proposed by previous researchers as a reason to consider clathrates a geo-organic fuel resource (MacDonald 1990, Kvenvolden 1998). Lower abundances of clathrates do not rule out their economic potential, but the lower total volume and apparently low concentration at most sites do suggest that only a limited fraction of clathrate deposits may provide an economically viable resource.
Continental
Methane clathrates in continental rocks are trapped in beds of sandstone or siltstone at depths of less than 800 m. Sampling indicates they are formed from a mix of thermally and microbially derived gas from which the heavier hydrocarbons were later selectively removed. These occur in Alaska, Siberia, and Northern Canada.
In 2008, Canadian and Japanese researchers extracted a constant stream of natural gas from a test project at the Mallik gas hydrate site in the Mackenzie River delta. This was the second such drilling at Mallik: the first took place in 2002 and used heat to release methane. In the 2008 experiment, researchers were able to extract gas by lowering the pressure, without heating, requiring significantly less energy. The Mallik gas hydrate field was first discovered by Imperial Oil in 1971–1972.
Commercial use
Economic deposits of hydrate are termed natural gas hydrate (NGH); 1 m³ of hydrate stores about 164 m³ of methane and 0.8 m³ of water. Most NGH (95%) is found beneath the seafloor, where it exists in thermodynamic equilibrium. As of 2013, the sedimentary methane hydrate reservoir probably contained 2–10 times the then known reserves of conventional natural gas. This represents a potentially important future source of hydrocarbon fuel. However, at the majority of sites the deposits are thought to be too dispersed for economic extraction. Other problems facing commercial exploitation are the detection of viable reserves and the development of technology for extracting methane gas from the hydrate deposits.
In August 2006, China announced plans to spend 800 million yuan (US$100 million) over the following 10 years to study natural gas hydrates. A potentially economic reserve in the Gulf of Mexico may contain approximately 100 billion cubic metres (3.5×10^12 cu ft) of gas. Bjørn Kvamme and Arne Graue at the Institute for Physics and Technology at the University of Bergen have developed a method for injecting CO2 into hydrates and reversing the process, thereby extracting CH4 by direct exchange. The University of Bergen's method is being field tested by ConocoPhillips and the state-owned Japan Oil, Gas and Metals National Corporation (JOGMEC), and partially funded by the U.S. Department of Energy. The project had reached the injection phase and was analyzing the resulting data by March 12, 2012.
On March 12, 2013, JOGMEC researchers announced that they had successfully extracted natural gas from frozen methane hydrate. To extract the gas, specialized equipment was used to drill into and depressurize the hydrate deposits, causing the methane to separate from the ice. The gas was then collected and piped to the surface, where it was ignited to prove its presence. According to an industry spokesperson, "It [was] the world's first offshore experiment producing gas from methane hydrate". Previously, gas had been extracted from onshore deposits, but never from offshore deposits, which are much more common. The hydrate field from which the gas was extracted is located 50 kilometres (31 mi) from central Japan in the Nankai Trough, 300 metres (980 ft) under the sea. A spokesperson for JOGMEC remarked, "Japan could finally have an energy source to call its own". Marine geologist Mikio Satoh remarked, "Now we know that extraction is possible. The next step is to see how far Japan can get costs down to make the technology economically viable."
Japan estimates that there are at least 1.1 trillion cubic metres of methane trapped in the Nankai Trough, enough to meet the country's needs for more than ten years.
Both Japan and China announced in May 2017 a breakthrough for mining methane clathrates, when they extracted methane from hydrates in the South China Sea. China described the result as a breakthrough, and Praveen Linga of the Department of Chemical and Biomolecular Engineering at the National University of Singapore agreed: "Compared with the results we have seen from Japanese research, the Chinese scientists have managed to extract much more gas in their efforts". The industry consensus is that commercial-scale production remains years away.
Environmental concerns
Experts caution that environmental impacts are still being investigated and that methane—a greenhouse gas with around 25 times as much global warming potential over a 100-year period (GWP100) as carbon dioxide—could potentially escape into the atmosphere if something goes wrong. Furthermore, while cleaner than coal, burning natural gas also creates carbon dioxide emissions.
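The GWP100 factor quoted above lends itself to a one-line conversion. The release tonnage below is an invented example value, and the function name is mine; only the factor of 25 comes from the text.

```python
# CO2-equivalent warming effect of a hypothetical methane release,
# using the GWP100 factor of ~25 quoted in the text.
GWP100_CH4 = 25  # 100-year global warming potential of methane vs CO2

def co2_equivalent(methane_tonnes: float) -> float:
    """Tonnes of CO2 with an equivalent 100-year warming effect."""
    return methane_tonnes * GWP100_CH4

print(co2_equivalent(1000.0))  # 25000.0
```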
Hydrates in natural gas processing
Routine operations
Methane clathrates (hydrates) are also commonly formed during natural gas production operations, when liquid water is condensed in the presence of methane at high pressure. It is known that larger hydrocarbon molecules like ethane and propane can also form hydrates, although longer molecules (butanes, pentanes) cannot fit into the water cage structure and tend to destabilise the formation of hydrates.
Once formed, hydrates can block pipelines and processing equipment. They are generally removed by reducing the pressure, heating them, or dissolving them by chemical means (methanol is commonly used). Care must be taken to ensure that the removal of the hydrates is carefully controlled, because of the potential for the hydrate to undergo a phase transition from solid hydrate to release water and gaseous methane at a high rate when the pressure is reduced. The rapid release of methane gas in a closed system can result in a rapid increase in pressure.
It is generally preferable to prevent hydrates from forming or blocking equipment. This is commonly achieved by removing water, or by adding ethylene glycol (MEG) or methanol, which act to depress the temperature at which hydrates will form. In recent years, other forms of hydrate inhibitor have been developed, such as kinetic hydrate inhibitors (which increase the sub-cooling that hydrates require to form, at the expense of an increased hydrate formation rate) and anti-agglomerates, which do not prevent hydrates forming but do prevent them sticking together and blocking equipment.
Effect of hydrate phase transition during deep water drilling
When drilling in oil- and gas-bearing formations submerged in deep water, the reservoir gas may flow into the well bore and form gas hydrates owing to the low temperatures and high pressures found during deep water drilling. The gas hydrates may then flow upward with drilling mud or other discharged fluids. When the hydrates rise, the pressure in the annulus decreases and the hydrates dissociate into gas and water. The rapid gas expansion ejects fluid from the well, reducing the pressure further, which leads to more hydrate dissociation and further fluid ejection. The resulting violent expulsion of fluid from the annulus is one potential cause or contributor to the "kick". (Kicks, which can cause blowouts, typically do not involve hydrates: see Blowout: formation kick).
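The runaway expansion described above can be approximated with a simple hydrostatic model. This is my own sketch, not field data: it assumes an isothermal ideal gas (Boyle's law), a seawater-filled annulus, and illustrative depths and volumes.

```python
# Sketch of gas-slug expansion as it rises in a seawater-filled annulus.
SURFACE_PRESSURE = 101_325.0   # Pa, atmospheric
HYDROSTATIC_GRADIENT = 1025.0 * 9.81  # Pa per metre of seawater depth

def pressure_at_depth(depth_m: float) -> float:
    return SURFACE_PRESSURE + HYDROSTATIC_GRADIENT * depth_m

def expanded_volume(v0_m3: float, depth0_m: float, depth_m: float) -> float:
    """Boyle's law, V2 = V1 * P1 / P2, for an isothermal rising gas slug."""
    return v0_m3 * pressure_at_depth(depth0_m) / pressure_at_depth(depth_m)

# A 1 m^3 slug released at 2000 m has grown roughly 18-fold by 100 m depth,
# illustrating why dissociating hydrates can eject fluid from the well.
print(round(expanded_volume(1.0, 2000.0, 100.0), 1))
```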
Measures which reduce the risk of hydrate formation include:
High flow-rates, which limit the time for hydrate formation in a volume of fluid, thereby reducing the kick potential.
Careful measuring of line flow to detect incipient hydrate plugging.
Additional care in measuring when gas production rates are low and the possibility of hydrate formation is higher than at relatively high gas flow rates.
Monitoring of the well casing after it is "shut in" (isolated) may indicate hydrate formation. Following "shut in", the pressure rises while gas diffuses through the reservoir to the bore hole; the rate of pressure rise is reduced while hydrates are forming.
Additions of energy (e.g., the energy released by setting cement used in well completion) can raise the temperature and convert hydrates to gas, producing a "kick".
Blowout recovery
At sufficient depths, methane complexes directly with water to form methane hydrates, as was observed during the Deepwater Horizon oil spill in 2010. BP engineers developed and deployed a subsea oil recovery system over oil spilling from a deepwater oil well 5,000 feet (1,500 m) below sea level to capture escaping oil. This involved placing a 125-tonne (276,000 lb) dome over the largest of the well leaks and piping the oil to a storage vessel on the surface. This option had the potential to collect some 85% of the leaking oil but was previously untested at such depths. BP deployed the system on May 7–8, but it failed due to a buildup of methane clathrate inside the dome: with its low density of approximately 0.9 g/cm³, the methane hydrate accumulated in the dome, adding buoyancy and obstructing flow.
Methane clathrates and climate change
Natural gas hydrates for gas storage and transportation
Since methane clathrates are stable at a higher temperature than liquefied natural gas (LNG) (−20 °C vs −162 °C), there is some interest in converting natural gas into clathrates (solidified natural gas, or SNG) rather than liquefying it when transporting it by seagoing vessels. A significant advantage would be that the production of natural gas hydrate (NGH) from natural gas at the terminal would require a smaller refrigeration plant and less energy than LNG would. Offsetting this, for every 100 tonnes of methane transported, 750 tonnes of methane hydrate would have to be transported; since this would require a ship of 7.5 times greater displacement, or more ships, it is unlikely to prove economically feasible. Recently, methane hydrate has received considerable interest for large-scale stationary storage because of the very mild storage conditions achievable with the inclusion of tetrahydrofuran (THF) as a co-guest. With the inclusion of tetrahydrofuran, though there is a slight reduction in gas storage capacity, the hydrates have been demonstrated in a recent study to be stable for several months at −2 °C and atmospheric pressure. A recent study has also demonstrated that SNG can be formed directly with seawater instead of pure water in combination with THF.
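The 7.5× mass penalty quoted above follows directly from hydrate stoichiometry. The sketch below assumes a fully occupied structure I hydrate with the commonly cited composition CH4·5.75H2O; the hydration number is an assumption of this illustration, not a figure from the text.

```python
# Mass of hydrate carried per mass of methane, from CH4 . 5.75 H2O.
M_CH4 = 16.04             # g/mol, methane
M_H2O = 18.02             # g/mol, water
HYDRATION_NUMBER = 5.75   # water molecules per methane (full structure I)

mass_ratio = (M_CH4 + HYDRATION_NUMBER * M_H2O) / M_CH4
print(round(mass_ratio, 2))  # ~7.46, i.e. roughly 750 t hydrate per 100 t CH4
```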
See also
Future energy development
Long-term effects of global warming
The Swarm (Schätzing novel)
Unconventional (oil & gas) reservoir
Notes
References
External links
Are there deposits of methane under the sea? Will global warming release the methane to the atmosphere? Archived 2008-04-30 at the Wayback Machine (2007)
Methane seeps from Arctic sea bed (BBC)
Bubbles of warming, beneath the ice (LA Times 2009)
online calculator : hydrate formation conditions with different EOSs
Research
Centre for Arctic Gas Hydrate, Environment and Climate (CAGE)
Center for Hydrate Research
USGS Geological Research Activities with U.S. Minerals Management Service - Methane Gas Hydrates
Carbon Neutral Methane Energy Production from Hydrate Deposits (Columbia University)
Video
USGS Gas Hydrates Lab (2012)
Ancient Methane Explosions Created Ocean Craters (2017) |
united states senate environment and public works subcommittee on private sector and consumer solutions to global warming and wildlife protection | The United States Senate Environment and Public Works Subcommittee on Private Sector and Consumer Solutions to Global Warming and Wildlife Protection was one of six subcommittees of the Senate Committee on Environment and Public Works during the 110th Congress. The subcommittee's jurisdiction included:
Global warming
Fisheries and wildlife, including the Fish and Wildlife Service
Endangered Species Act (ESA)
National Wildlife Refuges
The subcommittee was formerly known as the Subcommittee on Fisheries, Wildlife, and Water, but was renamed during committee organization of the 110th Congress when global warming was added to its oversight responsibilities. It served as a counterpart to the new Subcommittee on Public Sector Solutions to Global Warming, Oversight, and Children's Health Protection.
The subcommittee was chaired by Senator Joe Lieberman of Connecticut, and the ranking member was Senator John Warner of Virginia.
Members, 110th Congress
See also
References
"Private Sector and Consumer Solutions to Global Warming and Wildlife Protection". United States Senate Environment and Public Works. Retrieved 17 December 2010. |
phenology | Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate, as well as habitat factors (such as elevation). Examples include the date of emergence of leaves and flowers, the first flight of butterflies, the first appearance of migratory birds, the date of leaf colouring and fall in deciduous trees, the dates of egg-laying of birds and amphibians, or the timing of the developmental cycles of temperate-zone honey bee colonies. In the scientific literature on ecology, the term is used more generally to indicate the time frame for any seasonal biological phenomena, including the dates of last appearance (e.g., the seasonal phenology of a species may be from April through September).
Because many such phenomena are very sensitive to small variations in climate, especially to temperature, phenological records can be a useful proxy for temperature in historical climatology, especially in the study of climate change and global warming. For example, viticultural records of grape harvests in Europe have been used to reconstruct a record of summer growing season temperatures going back more than 500 years.
In addition to providing a longer historical baseline than instrumental measurements, phenological observations provide high temporal resolution of ongoing changes related to global warming.
Etymology
The word is derived from the Greek φαίνω (phainō), "to show, to bring to light, make to appear" + λόγος (logos), amongst others "study, discourse, reasoning" and indicates that phenology has been principally concerned with the dates of first occurrence of biological events in their annual cycle.
The term was first used by Charles François Antoine Morren, a professor of botany at the University of Liège (Belgium). Morren was a student of Adolphe Quetelet. Quetelet made plant phenological observations at the Royal Observatory of Belgium in Brussels. He is considered "one of 19th century trendsetters in these matters." In 1839 he started his first observations and created a network over Belgium and Europe that reached a total of about 80 stations in the period 1840–1870.
Morren participated in 1842 and 1843 in Quetelet's 'Observations of Periodical Phenomena' (Observations des Phénomènes périodiques), and at first suggested calling the observations concerning botanical phenomena 'anthochronological observations'. That term had already been used in 1840 by Carl Joseph Kreutzer.
But on 16 December 1849, Morren used the term 'phenology' for the first time in a public lecture at the Royal Academy of Science, Letters and Fine Arts of Belgium in Brussels, to describe "the specific science which has the goal to know the manifestation of life ruled by the time". It would take four more years before Morren first published "phenological memories".
That the term was not really common in the decades to follow may be shown by an article in The Zoologist of 1899. The article describes an ornithological meeting in Sarajevo, where 'questions of Phaenology' were discussed. A footnote by the Editor, William Lucas Distant, says: “This word is seldom used, and we have been informed by a very high authority that it may be defined as "Observational Biology," and as applied to birds, as it is here, may be taken to mean the study or science of observations on the appearance of birds.”
Records
Historical
Observations of phenological events have provided indications of the progress of the natural calendar since ancient agricultural times. Many cultures have traditional phenological proverbs and sayings which indicate a time for action: "When the sloe tree is white as a sheet, sow your barley whether it be dry or wet" or attempt to forecast future climate: "If oak's before ash, you're in for a splash. If ash before oak, you're in for a soak". But the indications can be pretty unreliable, as an alternative version of the rhyme shows: "If the oak is out before the ash, 'Twill be a summer of wet and splash; If the ash is out before the oak, 'Twill be a summer of fire and smoke." Theoretically, though, these are not mutually exclusive, as one forecasts immediate conditions and one forecasts future conditions.
The North American Bird Phenology Program at USGS Patuxent Wildlife Research Center (PWRC) is in possession of a collection of millions of bird arrival and departure date records for over 870 species across North America, dating between 1880 and 1970. This program, originally started by Wells W. Cooke, involved over 3,000 observers including many notable naturalists of the time. The program ran for 90 years and came to a close in 1970 when other programs starting up at PWRC took precedence. The program was again started in 2009 to digitize the collection of records and now with the help of citizens worldwide, each record is being transcribed into a database which will be publicly accessible for use.
The English naturalists Gilbert White and William Markwick reported the seasonal events of more than 400 plant and animal species over a 25-year period between 1768 and 1793, Gilbert White in Selborne, Hampshire, and William Markwick in Battle, Sussex. The data, reported in White's Natural History and Antiquities of Selborne, give only the earliest and latest dates for each event over the 25 years, so annual changes cannot be determined.
In Japan and China the time of blossoming of cherry and peach trees is associated with ancient festivals and some of these dates can be traced back to the eighth century. Such historical records may, in principle, be capable of providing estimates of climate at dates before instrumental records became available. For example, records of the harvest dates of the pinot noir grape in Burgundy have been used in an attempt to reconstruct spring–summer temperatures from 1370 to 2003; the reconstructed values during 1787–2000 have a correlation with Paris instrumental data of about 0.75.
Modern
Great Britain
Robert Marsham, the founding father of modern phenological recording, was a wealthy landowner who kept systematic records of "Indications of spring" on his estate at Stratton Strawless, Norfolk, from 1736. These took the form of dates of the first occurrence of events such as flowering, bud burst, or the emergence or flight of an insect. Generations of Marsham's family maintained consistent records of the same events or "phenophases" over unprecedentedly long periods of time, eventually ending with the death of Mary Marsham in 1958, so that trends can be observed and related to long-term climate records. The data show significant variation in dates, which broadly correspond with warm and cold years. Between 1850 and 1950 a long-term trend of gradual climate warming is observable, and during this same period the Marsham record of oak-leafing dates tended to become earlier.
After 1960 the rate of warming accelerated, and this is mirrored by increasing earliness of oak leafing, recorded in the data collected by Jean Combes in Surrey. Over the past 250 years, the first leafing date of oak appears to have advanced by about 8 days, corresponding to overall warming on the order of 1.5 °C in the same period.
Towards the end of the 19th century the recording of the appearance and development of plants and animals became a national pastime, and between 1891 and 1948 the Royal Meteorological Society (RMS) organised a programme of phenological recording across the British Isles. Up to 600 observers submitted returns in some years, with numbers averaging a few hundred. During this period 11 main plant phenophases were consistently recorded over the 58 years from 1891 to 1948, and a further 14 phenophases were recorded for the 20 years between 1929 and 1948. The returns were summarised each year in the Quarterly Journal of the RMS as The Phenological Reports. Jeffree (1960) summarised the 58 years of data, which show that flowering dates could be as many as 21 days early and as many as 34 days late, with extreme earliness greatest in summer-flowering species, and extreme lateness in spring-flowering species. In all 25 species, the timings of all phenological events are significantly related to temperature, indicating that phenological events are likely to get earlier as climate warms.
The Phenological Reports ended suddenly in 1948 after 58 years, and Britain remained without a national recording scheme for almost 50 years, just at a time when climate change was becoming evident. During this period, individual dedicated observers made important contributions. The naturalist and author Richard Fitter recorded the first flowering date (FFD) of 557 species of British flowering plants in Oxfordshire between about 1954 and 1990. Writing in Science in 2002, Richard Fitter and his son Alistair Fitter found that "the average FFD of 385 British plant species has advanced by 4.5 days during the past decade compared with the previous four decades." They note that FFD is sensitive to temperature, as is generally agreed, that "150 to 200 species may be flowering on average 15 days earlier in Britain now than in the very recent past", and that these earlier FFDs will have "profound ecosystem and evolutionary consequences". In Scotland, David Grisenthwaite meticulously recorded the dates on which he mowed his lawn from 1984. His first cut of the year was 13 days earlier in 2004 than in 1984, and his last cut was 17 days later, providing evidence for an earlier onset of spring and a warmer climate in general.
National recording was resumed by Tim Sparks in 1998 and, from 2000, has been led by the citizen science project Nature's Calendar, run by the Woodland Trust and the Centre for Ecology and Hydrology. The latest research shows that oak bud burst has advanced more than 11 days since the 19th century and that resident and migrant birds are unable to keep up with this change.
Continental Europe
In Europe, phenological networks are operated in several countries; for example, Germany's national meteorological service operates a very dense network with approximately 1,200 observers, the majority of them on a voluntary basis. The Pan European Phenology (PEP) project is a database that collects phenological data from European countries; currently, 32 European meteorological services and project partners from across Europe have joined and supplied data.
In Geneva, Switzerland, the opening of the first leaf of an official chestnut tree (a horse chestnut) has been observed and recorded since 1818, forming the oldest set of records of phenological events in Switzerland. This task is conducted by the secretary of the Grand Council of Geneva (the local parliament), and the opening of the first leaf is publicly announced as indicating the beginning of spring. Data show a trend during the 20th century towards an earlier and earlier opening.
Other countries
There is a USA National Phenology Network in which both professional scientists and lay recorders participate.
Many other countries, such as Canada (Alberta Plantwatch and Saskatchewan PlantWatch), China and Australia, also have phenological programs.
In eastern North America, almanacs are traditionally used for information on action phenology (in agriculture), taking into account the astronomical positions at the time.
William Felker has studied phenology in Ohio, US, since 1973 and now publishes "Poor Will's Almanack", a phenological almanac for farmers (not to be confused with a late 18th-century almanac by the same name).
In the Amazon rainforests of South America, the timing of leaf production and abscission has been linked to rhythms in gross primary production at several sites. Early in their lifespan, leaves reach a peak in their capacity for photosynthesis, and in tropical evergreen forests of some regions of the Amazon basin (particularly regions with long dry seasons), many trees produce more young leaves in the dry season, seasonally increasing the photosynthetic capacity of the forest.
Airborne sensors
Recent technological advances in studying the earth from space have resulted in a new field of phenological research that is concerned with observing the phenology of whole ecosystems and stands of vegetation on a global scale using proxy approaches. These methods complement the traditional phenological methods which recorded the first occurrences of individual species and phenophases.
The most successful of these approaches is based on tracking the temporal change of a vegetation index such as the Normalized Difference Vegetation Index (NDVI). NDVI makes use of vegetation's typically low reflectance in the red (red energy is mostly absorbed by growing plants for photosynthesis) and strong reflectance in the near infrared (infrared energy is mostly reflected by plants due to their cellular structure). Due to its robustness and simplicity, NDVI has become one of the most popular remote-sensing-based products. Typically, a vegetation index is constructed so that the attenuated reflected sunlight energy (1% to 30% of incident sunlight) is amplified by taking a ratio of the red and NIR reflectances according to the following equation:
NDVI = (NIR − red) / (NIR + red)
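The NDVI formula is straightforward to apply per pixel. The reflectance values below are illustrative, chosen only to show the contrast between vegetated and bare surfaces.

```python
# NDVI = (NIR - red) / (NIR + red), applied to example reflectances in [0, 1].
def ndvi(nir: float, red: float) -> float:
    return (nir - red) / (nir + red)

print(round(ndvi(0.50, 0.08), 2))  # 0.72: dense green vegetation, high NDVI
print(round(ndvi(0.30, 0.25), 2))  # 0.09: bare soil, low NDVI
```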
The evolution of the vegetation index through time exhibits a strong correlation with the typical green-vegetation growth stages (emergence, vigor/growth, maturity, and harvest/senescence). These temporal curves are analyzed to extract useful parameters about the vegetation growing season (start of season, end of season, length of growing season, etc.). Other growing-season parameters could potentially be extracted, and global maps of any of these parameters could then be constructed and used in all sorts of climate-change studies.
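One common way to extract such parameters is a threshold crossing on the seasonal NDVI curve. This is a sketch of the general idea, not any specific operational algorithm; the half-amplitude threshold and the NDVI series are assumptions of the example.

```python
# Start/end of growing season from an NDVI time series, using the midpoint
# between the seasonal minimum and maximum as the threshold.
def season_bounds(ndvi_series: list[float]) -> tuple[int, int]:
    """Indices (e.g. composite periods) of the start and end of season."""
    lo, hi = min(ndvi_series), max(ndvi_series)
    threshold = lo + 0.5 * (hi - lo)
    above = [i for i, v in enumerate(ndvi_series) if v >= threshold]
    return above[0], above[-1]

# Invented monthly NDVI values for one annual cycle.
series = [0.18, 0.20, 0.25, 0.42, 0.60, 0.68, 0.65, 0.50, 0.33, 0.22, 0.19, 0.18]
start, end = season_bounds(series)
print(start, end, end - start + 1)  # start index, end index, season length
```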
A noteworthy example of the use of remote-sensing-based phenology is the work of Ranga Myneni of Boston University, which showed an apparent increase in vegetation productivity that most likely resulted from the increase in temperature and lengthening of the growing season in the boreal forest. Another example, based on the MODIS enhanced vegetation index (EVI) and reported by Alfredo Huete at the University of Arizona and colleagues, showed that the Amazon rainforest, as opposed to the long-held view of a monotonous growing season or growth only during the wet rainy season, does in fact exhibit growth spurts during the dry season.
However, these phenological parameters are only an approximation of the true biological growth stages. This is mainly due to the limitations of current space-based remote sensing, especially the spatial resolution, and the nature of the vegetation index: a pixel in an image does not contain a pure target (like a tree or a shrub) but a mixture of whatever intersected the sensor's field of view.
Phenological mismatch
Most species, including both plants and animals, interact with one another within ecosystems and habitats, known as biological interactions. These interactions (whether it be plant-plant, animal-animal, predator-prey or plant-animal interactions) can be vital to the success and survival of populations and therefore species.
Many species experience changes in life-cycle development, migration or some other process or behavior at different times in the season than previous patterns would predict, owing to warming temperatures. Phenological mismatches, in which interacting species change the timing of regularly repeated phases in their life cycles at different rates, disrupt the timing of interactions and thereby harm them. Mismatches can occur in many different biological interactions, including between species in one trophic level (intratrophic interactions, e.g. plant-plant), between different trophic levels (intertrophic interactions, e.g. plant-animal), or through creating competition (intraguild interactions). For example, if a plant species blooms earlier than in previous years, but the pollinators that feed on and pollinate its flowers do not arrive or emerge earlier as well, then a phenological mismatch has occurred; the plant population declines because there are no pollinators to aid in its reproductive success. Another example is the interaction between plant species in which the presence of one species aids the pollination of another by attracting pollinators. If these plant species develop at mismatched times, this interaction is negatively affected, and the plant species that relies on the other is harmed.
Phenological mismatches mean the loss of many biological interactions, so the ecosystem functions that depend on them are also at risk of being degraded or lost altogether. Mismatches affect food webs, reproductive success, resource availability, and population and community dynamics in future generations, and therefore evolutionary processes and overall biodiversity.
See also
Citizen science
Nature Detectives
Season creep
Growing degree-day
Biological life cycle
References
Sources
Demarée, Gaston R; Rutishauser, This (2011). "From "Periodical Observations" to "Anthochronology" and "Phenology" – the scientific debate between Adolphe Quetelet and Charles Morren on the origin of the word "Phenology"" (PDF). International Journal of Biometeorology. 55 (6): 753–761. Bibcode:2011IJBm...55..753D. doi:10.1007/s00484-011-0442-5. PMID 21713602. S2CID 1486224. Archived from the original (PDF) on 2020-12-04. Retrieved 2018-11-20.
External links
North American Bird Phenology Program Citizen science program to digitize bird phenology records
Project Budburst Citizen Science for Plant Phenology in the USA
USA National Phenology Network Citizen science and research network observations on phenology in the USA
AMC's Mountain Watch Archived 2016-04-14 at the Wayback Machine Citizen science and phenology monitoring in the Appalachian mountains
Pan European Phenology Project PEP725 European open access database with plant phenology data sets for science, research and education
UK Nature's Calendar UK Phenology network
DWD phenology website Information on the plant phenological network operated by Germany's national meteorological service (DWD)
Nature's Calendar Ireland Archived 2021-01-28 at the Wayback Machine Spring Watch & Autumn Watch
Naturewatch: A Canadian Phenology project
Spring Alive Project Phenological survey on birds for children
Moj Popek Citizen Science for Plant Phenology in Slovenia
Observatoire des Saisons French Phenology network
Phenology Video produced by Wisconsin Public Television
Austrian phenological network run by ZAMG |
Environmental issues with coral reefs

Human activities have a substantial impact on coral reefs, contributing to their worldwide decline.[1] Damaging activities include coral mining, pollution (organic and non-organic), overfishing, blast fishing, and the excavation of canals and access points to islands and bays. Additional threats include disease, destructive fishing practices, and the warming of oceans.[2] Furthermore, the ocean's function as a carbon dioxide sink, alterations in the atmosphere, ultraviolet light, ocean acidification, viral infections, dust storms transporting agents to distant reefs, pollutants, and algal blooms all exert influence on coral reefs. Importantly, the jeopardy faced by coral reefs extends far beyond coastal regions. Climate change, notably global warming, elevates ocean temperatures and triggers coral bleaching, a potentially lethal phenomenon for coral ecosystems.
Scientists estimate that over the next 20 years, about 70 to 90% of all coral reefs will disappear, with the primary causes being warming ocean waters, ocean acidification, and pollution. In 2008, a worldwide study estimated that 19% of the existing area of coral reefs had already been lost. Only 46% of the world's reefs could currently be regarded as in good health, and about 60% of the world's reefs may be at risk due to destructive, human-related activities. The threat to the health of reefs is particularly strong in Southeast Asia, where 80% of reefs are endangered. By the 2030s, 90% of reefs are expected to be at risk from both human activities and climate change; by 2050, it is predicted that all coral reefs will be in danger.
Issues
Competition
In the Caribbean Sea and tropical Pacific Ocean, direct contact between coral and common seaweeds causes bleaching and death of coral tissue via allelopathic competition. Lipid-soluble extracts of the seaweeds that harmed coral tissues also produced rapid bleaching. At these sites, bleaching and mortality were limited to areas of direct contact with seaweed or their extracts, and the seaweed then expanded to occupy the dead coral's habitat. However, as of 2009, only 4% of coral reefs worldwide had more than 50% algal coverage, which means there is no recent global trend towards algal dominance over coral reefs.

Competitive seaweed and other algae thrive in nutrient-rich waters in the absence of sufficient herbivorous predators. Herbivores include fish such as parrotfish, surgeonfishes, tangs and unicornfishes, as well as the urchin Diadema antillarum.
Predation
Overfishing, particularly selective overfishing, can unbalance coral ecosystems by encouraging the excessive growth of coral predators. Predators that eat living coral, such as the crown-of-thorns starfish, are called corallivores. Coral reefs are built from stony coral, which evolved with large amounts of the wax cetyl palmitate in their tissues. Most predators find this wax indigestible. The crown-of-thorns starfish is a large (up to one meter) starfish protected by long, venomous spines. Its enzyme system dissolves the wax in stony corals and allows the starfish to feed on the living animal. Starfish face predators of their own, such as the giant triton sea snail. However, the giant triton is valued for its shell and has been overfished. As a result, crown-of-thorns starfish populations can periodically grow unchecked, devastating reefs.
Fishing practices
Although some marine aquarium fish species can reproduce in aquaria (such as Pomacentridae), most (95%) are collected from coral reefs. Intense harvesting, especially in maritime Southeast Asia (including Indonesia and the Philippines), damages the reefs. This is aggravated by destructive fishing practices, such as cyanide and blast fishing. Most (80–90%) aquarium fish from the Philippines are captured with sodium cyanide. This toxic chemical is dissolved in sea water and released into areas where fish shelter. It narcotizes the fish, which are then easily captured. However, most fish collected with cyanide die a few months later from liver damage, and many non-marketable specimens die in the process. It is estimated that 4,000 or more Filipino fish collectors have used over 1,000,000 kilograms (2,200,000 lb) of cyanide on Philippine reefs alone, about 150,000 kg per year. A major catalyst of cyanide fishing is poverty within fishing communities: in countries like the Philippines that regularly employ cyanide, more than thirty percent of the population lives below the poverty line.

Dynamite fishing is another destructive method for gathering fish. Sticks of dynamite, grenades, or homemade explosives are detonated in the water. This method kills the fish within the main blast area, along with many unwanted reef animals. The blast also kills the corals in the area, eliminating the reef's structure and destroying habitat for the remaining fish and other animals important for reef health. Muro-ami is the destructive practice of covering reefs with nets and dropping large stones onto the reef to produce a flight response among the fish; the stones break and kill the coral. Muro-ami was generally outlawed in the 1980s.

Fishing gear damages reefs via direct physical contact with the reef structure and substrate. Such gear is typically made of synthetic materials that do not deteriorate in the ocean, causing a lasting effect on the ecosystem and reefs.
Gill nets, fish traps, and anchors break branching coral and cause coral death through entanglement. When fishermen drop lines by coral reefs, the lines entangle the coral. The fisher cuts the line and abandons it, leaving it attached to the reef. The discarded lines abrade coral polyps and upper tissue layers. Corals are able to recover from small lesions, but larger and recurrent damage complicates recovery.
Bottom-dragging gear such as beach seines can damage corals by abrasion and fracturing. A beach seine is a long net, about 150 meters (490 ft), with a mesh size of 3 centimeters (1.2 in) and a weighted line to hold the net down while it is dragged across the substrate; it is one of the most destructive types of fishing gear on Kenya's reefs.

Bottom trawling in deep oceans destroys cold-water and deep-sea corals. Historically, industrial fishers avoided coral because their nets would get caught on the reefs. In the 1980s, "rock-hopper" trawls attached large tires and rollers to allow the nets to roll over rough surfaces. Fifty-five percent of Alaskan cold-water coral that was damaged by one pass from a bottom trawl had not recovered a year later. Northeast Atlantic reefs bear scars up to 4 kilometers (2.5 mi) long. In Southern Australia, 90 percent of the surfaces on coral seamounts are now bare rock. Even in the Great Barrier Reef World Heritage Area, seafloor trawling for prawns and scallops is causing localized extinction of some coral species.
"With increased human population and improved storage and transport systems, the scale of human impacts on reefs has grown exponentially. For example, markets for fish and other natural resources have become global, supplying demand for reef resources."
Marine pollution
Reefs in close proximity to human populations are subject to poor water quality from land- and marine-based sources. In 2006 studies suggested that approximately 80 percent of ocean pollution originates from activities on land. Pollution arrives from land via runoff, the wind and "injection" (deliberate introduction, e.g., drainpipes).
Runoff brings with it sediment from erosion and land-clearing, nutrients and pesticides from agriculture, wastewater, industrial effluent and miscellaneous material such as petroleum residue and trash that storms wash away. Some pollutants consume oxygen and lead to eutrophication, killing coral and other reef inhabitants.

An increasing fraction of the global population lives in coastal areas. Without appropriate precautions, development (e.g., buildings and paved roads) decreases the land's ability to absorb rainfall and other water, increasing the fraction that enters the ocean as runoff.

Pollution can also introduce pathogens. For example, Aspergillus sydowii has been associated with a disease in sea fans, and Serratia marcescens has been linked to the coral disease white pox. Reefs near human populations face local stresses, including poor water quality from land-based sources of pollution. Copper, a common industrial pollutant, has been shown to interfere with the life history and development of coral polyps.
In addition to runoff, wind blows material into the ocean. This material may be local or from other regions. For example, dust from the Sahara moves to the Caribbean and Florida. Dust also blows from the Gobi and Taklamakan deserts across Korea, Japan, and the Northern Pacific to the Hawaiian Islands. Since 1970, dust deposits have grown due to drought periods in Africa. Dust transport to the Caribbean and Florida varies from year to year, with greater flux during positive phases of the North Atlantic Oscillation. The USGS links dust events to reduced health of coral reefs across the Caribbean and Florida, primarily since the 1970s. Dust from the 1883 eruption of Krakatoa in Indonesia appeared in the annular bands of the reef-building coral Montastraea annularis from the Florida Reef Tract.

Sediment smothers corals and interferes with their ability to feed and reproduce. Pesticides can interfere with coral reproduction and growth. Some studies present evidence that chemicals in sunscreens contribute to coral bleaching by lowering the resistance of zooxanthellae to viruses, though these studies had significant methodological flaws and did not attempt to replicate the complex environment found in coral reefs.
Nutrient pollution
Nutrient pollution, particularly nitrogen and phosphorus, can cause eutrophication, upsetting the balance of the reef by enhancing algal growth and crowding out corals. Nutrient-rich water enables blooms of fleshy algae and phytoplankton to thrive off coasts, and these blooms can create hypoxic conditions by using all available oxygen. Biologically available nitrogen (nitrate plus ammonia) needs to be below 1.0 micromole per liter (less than 0.014 parts per million of nitrogen), and biologically available phosphorus (orthophosphate plus dissolved organic phosphorus) needs to be below 0.1 micromole per liter (less than 0.003 parts per million of phosphorus). In addition, the concentration of chlorophyll (in the microscopic plants called phytoplankton) needs to be below 0.5 parts per billion. Algal and phytoplankton blooms also obscure sunlight, killing both fish and coral. High nitrate levels are specifically toxic to corals, while phosphates slow down skeletal growth.
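The parts-per-million figures quoted for these thresholds follow directly from multiplying the molar concentrations by the standard atomic masses of nitrogen and phosphorus. A minimal sketch of the conversion (the function name and atomic-mass table are illustrative, not from the source):

```python
# Illustrative conversion of the reef nutrient thresholds from umol/L to ppm.
# Standard atomic masses in g/mol; the helper function is hypothetical.
ATOMIC_MASS = {"N": 14.007, "P": 30.974}

def umol_per_liter_to_ppm(umol_per_liter: float, element: str) -> float:
    """Convert umol/L to ppm (mg/L), valid for dilute aqueous solutions."""
    micrograms_per_liter = umol_per_liter * ATOMIC_MASS[element]
    return micrograms_per_liter / 1000.0  # ug/L -> mg/L, i.e. ppm in water

# Thresholds from the text: 1.0 umol/L nitrogen, 0.1 umol/L phosphorus.
print(round(umol_per_liter_to_ppm(1.0, "N"), 3))  # 0.014 ppm N
print(round(umol_per_liter_to_ppm(0.1, "P"), 4))  # 0.0031 ppm P (~0.003)
```

Both results match the values given in the text once rounded to the stated precision.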
Excess nutrients can intensify existing disease, potentially doubling the spread of Aspergillosis, a fungal infection that kills soft corals such as sea fans, and increasing by fifty percent the spread of yellow band disease, a bacterial infection that kills reef-building hard corals.
Air pollution
A study released in April 2013 showed that air pollution can also stunt the growth of coral reefs; researchers from Australia, Panama and the UK used coral records (between 1880 and 2000) from the western Caribbean to show the threat of factors such as coal burning and volcanic eruptions. The researchers state that the study is the first to elucidate the relationship between air pollution and coral reefs, while former chair of the Great Barrier Reef Marine Park Authority Ian McPhail called the report "fascinating" upon the public release of its findings.
Marine debris
Marine debris is defined as any persistent solid material that is manufactured or processed and directly or indirectly, intentionally or unintentionally, disposed of or abandoned into the marine environment or the Great Lakes. Debris may arrive directly from a ship or indirectly when washed out to sea via rivers, streams, and storm drains. Human-made items tend to be the most harmful, such as plastics (from bags to balloons, hard hats to fishing line), glass, metal, rubber (millions of waste tires), and even entire vessels.

Plastic debris can kill and harm multiple reef species. Corals and coral reefs are at higher risk because they are immobile: when water quality declines or their habitat otherwise changes, corals cannot move to a different place; they must adapt or they will not survive. There are two classes of plastics, macroplastics and microplastics, and both can cause damage in a number of ways. Macroplastics such as derelict (abandoned) fishing nets and other gear, often called "ghost nets", can still catch fish and other marine life, kill those organisms, and break or damage reefs. Microplastics are plastic fragments typically less than or equal to 5 mm in length, and they have primarily been found to damage coral reefs through corals ingesting the fragments.

[Figure: video frame sequence of the capture and ingestion of a microplastic particle by a polyp of Astroides calycularis, from Savinelli et al. (2020).]

Some researchers have found that ingestion of microplastics harms coral, and subsequently coral reefs, because ingesting the fragments reduces coral food intake and coral fitness, since corals waste time and energy handling the plastic particles. Even remote reef systems suffer the effects of marine debris, especially plastic pollution. Reefs in the Northwestern Hawaiian Islands are particularly prone to the accumulation of marine debris because of their central location in the North Pacific Gyre. There are, however, solutions to protect corals and coral reefs against the harmful effects of plastic pollution. Since little to no research exists on specific ways to help corals recover from plastic exposure, the best solution is to keep plastics from entering the marine environment at all. This can be accomplished in a number of ways, some of which are already being enacted.
For example, there are measures to ban microplastics from products like cosmetics and toothpaste, as well as measures demanding that products containing microplastics be labeled as such, so as to reduce their consumption. Additionally, newer and better detection methods for microplastics are needed, and they must be installed at wastewater treatment facilities to prevent these particles from entering the marine environment and damaging marine life, especially coral reefs. Many people are recognizing the problem of plastic pollution and other marine debris and have taken steps to mitigate it; for example, from 2000 to 2006, NOAA and partners removed over 500 tons of marine debris from the reefs in the Northwestern Hawaiian Islands.

Cigarette butts also damage aquatic life. To reduce cigarette butt litter, proposed solutions include banning cigarette filters and implementing a deposit system for e-cigarette pods.
Dredging
Dredging operations are sometimes completed by cutting a path through a coral reef, directly destroying the reef structure and killing any organisms that live on it. Operations that directly destroy coral are often intended to deepen or otherwise enlarge shipping channels or canals; because removal of coral requires a permit in many areas, it is usually more cost-effective and simpler to avoid coral reefs where possible.
Dredging also releases plumes of suspended sediment, which can settle on coral reefs, damaging them by starving them of food and sunlight. Continued exposure to dredging spoil has been shown to increase rates of diseases such as white syndrome, bleaching and sediment necrosis among others. A study conducted in the Montebello and Barrow Islands showed that the number of coral colonies with signs of poor health more than doubled in transects with high exposure to dredging sediment plumes.
Sunscreen
Sunscreen can enter the ocean indirectly, through wastewater systems when it is washed off, or directly, when it comes off swimmers and divers in the ocean. Some 14,000 tons of sunscreen end up in the ocean each year, with 4,000 to 6,000 tons entering reef areas annually. An estimated 90% of snorkeling and diving tourism is concentrated on 10% of the world's coral reefs, meaning that popular reefs are especially vulnerable to sunscreen exposure. Certain formulations of sunscreen are a serious danger to coral health. The common sunscreen ingredient oxybenzone causes coral bleaching and affects other marine fauna. In addition to oxybenzone, other sunscreen ingredients, known as chemical UV filters, can also harm corals, coral reefs and other marine life: Benzophenone-1, Benzophenone-8, OD-PABA, 4-Methylbenzylidene camphor, 3-Benzylidene camphor, nano-Titanium dioxide, nano-Zinc oxide, Octinoxate, and Octocrylene.

In Akumal, Mexico, visitors are warned not to use sunscreen and are kept out of some areas to prevent damage to the coral. In several other tourist destinations, authorities recommend sunscreens prepared with the naturally occurring chemicals titanium dioxide or zinc oxide, or suggest using clothing rather than chemicals to screen the skin from the sun. The city of Miami Beach, Florida rejected calls for a ban on sunscreen in 2019, citing lack of evidence. In 2020, Palau enacted a ban on sunscreen and skincare products containing 10 chemicals including oxybenzone, and the US state of Hawaii enacted a similar ban which came into effect in 2021. Since sun exposure causes 90% of premature skin aging and can cause skin cancer, care should be taken to protect both the marine environment and the skin; it is possible to do both.
Climate change
Rising sea levels due to climate change require corals to grow upward to stay close enough to the surface to continue photosynthesis. Water temperature changes or disease can induce coral bleaching, as happened during the 1998 and 2004 El Niño years, in which sea surface temperatures rose well above normal, bleaching and killing many reefs. Bleaching may be caused by different triggers, including high sea surface temperature (SST), pollution, or disease. High SST coupled with high irradiance (light intensity) triggers the loss of zooxanthellae, the symbiotic single-celled algae whose dinoflagellate pigmentation gives coral its color; their expulsion turns the coral white and can kill it. Zooxanthellae provide up to 90% of their hosts' energy supply. Healthy reefs can often recover from bleaching if water temperatures cool. However, recovery may not be possible if CO2 levels rise to 500 ppm, because concentrations of carbonate ions may then be too low. In summary, ocean warming is the primary cause of mass coral bleaching and mortality (very high confidence), which, together with ocean acidification, deteriorates the balance between coral reef construction and erosion (high confidence).

Warming seawater may also encourage the emerging problem of coral disease. Weakened by warm water, coral is much more prone to diseases including black band disease, white band disease and skeletal eroding band. If global temperatures increase by 2 °C during the twenty-first century, coral may not be able to adapt quickly enough to survive.

Warming seawater is also expected to cause fish populations to migrate to compensate for the change.
This puts coral reefs and their associated species at risk of invasion and may cause their extinction if they are unable to compete with the invading populations.

A 2010 report by the Institute of Physics predicts that unless the national targets set by the Copenhagen Accord are amended to eliminate loopholes, global temperatures could rise by 4.2 °C by 2100 and bring an end to coral reefs. Even with a temperature rise of just 2 °C, currently very likely to happen within the next 50 years (so by 2068 A.D.), there would be a more than 99% chance that tropical corals would be eradicated.

Warm-water coral reef ecosystems house one-quarter of marine biodiversity and provide services in the form of food, income and shoreline protection to coastal communities around the world. These ecosystems are threatened by climate and non-climate drivers, especially ocean warming, marine heatwaves, ocean acidification, sea level rise, tropical cyclones, fisheries/overharvesting, land-based pollution, disease spread and destructive shoreline practices. Warm-water coral reefs face near-term threats to their survival, but research on observed and projected impacts is very advanced. Anthropogenic climate change has exposed ocean and coastal ecosystems to conditions that are unprecedented over millennia (high confidence), and this has greatly impacted life in the ocean and along its coasts (very high confidence).
Ocean acidification
Ocean acidification results from increases in atmospheric carbon dioxide, of which the oceans absorb around one-third. The dissolved gas reacts with the water to form carbonic acid, and thus acidifies the ocean. This decreasing pH is another issue for coral reefs.

Ocean surface pH is estimated to have decreased from about 8.25 to 8.14 since the beginning of the industrial era, and a further drop of 0.3–0.4 units is expected. The drop so far corresponds to a roughly 30% increase in the concentration of hydrogen ions. Before the industrial age, conditions for calcium carbonate production were typically stable in surface waters, since the carbonate ion was at supersaturated concentrations. However, as the ionic concentration falls, carbonate becomes under-saturated, making calcium carbonate structures vulnerable to dissolution. Corals experience reduced calcification or enhanced dissolution when exposed to elevated CO2, which weakens their skeletons or prevents them from forming at all. Ocean acidification may also have an effect of 'gender discrimination', as spawning female corals are significantly more susceptible to its negative effects than spawning male corals. Bamboo coral is a deep-water coral which produces growth rings similar to trees. The growth rings illustrate growth-rate changes as deep-sea conditions change, including changes due to ocean acidification. Specimens as old as 4,000 years have given scientists "4,000 years worth of information about what has been going on in the deep ocean interior".
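Because pH is a base-10 logarithmic scale, the hydrogen-ion figures above can be checked directly from the pH values; a quick sketch (the function is illustrative, not from the source):

```python
# Illustrative check of the hydrogen-ion arithmetic behind the pH figures.
# pH = -log10([H+]), so a pH drop of d multiplies [H+] by 10**d.

def h_ion_increase(ph_before: float, ph_after: float) -> float:
    """Fractional increase in [H+] when pH falls from ph_before to ph_after."""
    return 10 ** (ph_before - ph_after) - 1.0

# Drop from 8.25 to 8.14 observed since the industrial era:
print(round(h_ion_increase(8.25, 8.14) * 100))  # ~29%, the ~30% cited in the text

# The further expected drop of 0.3 units would roughly double [H+]:
print(round(10 ** 0.3, 2))  # ~2.0
```

The 0.11-unit drop gives a factor of 10^0.11 ≈ 1.29, matching the quoted 30% increase, and the projected additional 0.3-unit drop would roughly double the hydrogen-ion concentration again.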
Rising carbon dioxide levels could confuse brain signaling in fish. In 2012, researchers reported their results after studying the behavior of baby clownfish and damselfish for several years in water with elevated levels of dissolved carbon dioxide, in line with what may exist by the end of the century. They found that the higher carbon dioxide disrupted a key brain receptor in the fish, interfering with neurotransmitter functions. The damaged central nervous systems affected fish behavior, diminishing their sensory capacity to a point "likely to impair their chances of survival". The fish were less able to locate reefs by smell or "detect the warning smell of a predator fish". Nor could they hear the sounds made by other reef fish, compromising their ability to locate safe reefs and avoid dangerous ones. They also lost their usual tendencies to turn to the left or right, damaging their ability to school with other fish.
Although previous experiments found several detrimental effects of projected end-of-21st-century ocean acidification on coral fish behavior, a 2020 replication study found that "end-of-century ocean acidification levels have negligible effects on [three] important behaviors of coral reef fishes" and, with "data simulations, [showed] that the large effect sizes and small within-group variances that have been reported in several previous studies are highly improbable". In 2021, allegations emerged that some of the previous studies were fraudulent. Furthermore, effect sizes of studies assessing ocean acidification effects on fish behavior have declined dramatically over a decade of research on this topic, with effects appearing negligible since 2015.
Ocean deoxygenation
Mass mortality events associated with low oxygen have increased severely, with the majority occurring in the last two decades. Rising water temperature leads to increased oxygen demand and to ocean deoxygenation, which together produce large coral reef dead zones. For many coral reefs, the response to this hypoxia depends strongly on the magnitude and duration of the deoxygenation: symptoms range from reduced photosynthesis and calcification to bleaching. Hypoxia can also have indirect effects, such as greater algal abundance and the spread of coral diseases. While coral is unable to tolerate such low levels of oxygen, algae are quite tolerant; in interaction zones between algae and coral, increased hypoxia therefore causes more coral death and a greater spread of algae. The growth of mass coral dead zones is reinforced by the spread of coral diseases, which spread easily where there are high concentrations of sulfide and hypoxic conditions. Within this loop of hypoxia and coral reef mortality, the fish and other marine life that inhabit coral reefs change their behavior in response to the hypoxia: some fish move upwards to find more oxygenated water, some enter a phase of metabolic and ventilatory depression, and invertebrates migrate out of their homes to the surface of the substratum or move to the tips of arborescent coral colonies.

Around 6 million people, the majority of whom live in developing countries, depend on coral reef fisheries, so mass die-offs from extreme hypoxic events can have severe impacts on reef fish populations. Coral reef ecosystems offer a variety of essential ecosystem services, including shoreline protection, nitrogen fixation, waste assimilation, and tourism opportunities. The continued decline of oxygen on coral reefs is concerning because corals take many years, even decades, to repair and regrow.
Disease
Disease is a serious threat to many coral species. Coral diseases may consist of bacterial, viral, fungal, or parasitic infections. Stressors such as climate change and pollution can make coral more vulnerable to disease. Examples of coral diseases include Vibrio infections, white syndrome, white band disease, and rapid wasting disease, among many others. These diseases have different effects on corals, ranging from damaging and killing individual colonies to wiping out entire reefs.

In the Caribbean, white band disease is one of the primary causes of the death of over eighty percent of staghorn and elkhorn coral (Reef Resilience). It is a disease that can quickly destroy miles of coral reef.
A disease such as white plague can spread over a coral colony by half an inch a day. By the time the disease has fully taken over the colony, it leaves behind a dead skeleton. Dead standing coral structures are what most people see after disease has taken over a reef.
Recently, the Florida Reef Tract in the United States has been plagued by a stony coral tissue loss disease. The disease was first identified in 2014 and as of 2018 has been reported in every part of the reef except the lower Florida Keys and the Dry Tortugas. The cause of the disease is unknown but is thought to be caused by bacteria and be transmitted through direct contact and water circulation. This disease event is unique due to its large geographic range, extended duration, rapid progression, high rates of mortality and the number of species affected.
Recreational diving
During the 20th century, recreational scuba diving was considered to have generally low environmental impact and was consequently one of the activities permitted in most marine protected areas. Since the 1970s, diving has changed from an elite activity to a more accessible recreation, marketed to a very wide demographic. To some extent, better equipment has been substituted for more rigorous training, and the reduction in perceived risk has led several training agencies to shorten minimum training requirements. Training has concentrated on acceptable risk to the diver and paid less attention to the environment. The increase in the popularity of diving and in tourist access to sensitive ecological systems has led to the recognition that the activity can have significant environmental consequences.

Scuba diving has grown in popularity during the 21st century, as shown by the number of certifications issued worldwide, which increased to about 23 million by 2016, at about one million per year. Scuba diving tourism is a growth industry, and environmental sustainability must be considered, as the expanding impact of divers can adversely affect the marine environment in several ways, with the impact depending on the specific environment. Tropical coral reefs are more easily damaged by poor diving skills than some temperate reefs, where the environment is more robust due to rougher sea conditions and fewer fragile, slow-growing organisms. The same pleasant sea conditions that allow the development of relatively delicate and highly diverse ecologies also attract the greatest number of tourists, including divers who dive infrequently, exclusively on vacation, and never fully develop the skills to dive in an environmentally friendly way. Low-impact diving training has been shown to be effective in reducing diver contact to more sustainable levels.
Experience appears to be the most important factor in explaining divers' underwater behaviour, followed by their attitude towards diving and the environment, and personality type.
Other issues
Within the last 20 years, once-prolific seagrass meadows and mangrove forests, which absorb massive amounts of nutrients and sediment, have been destroyed. The loss of wetlands, mangrove habitats and seagrass meadows affects the water quality of inshore reefs.

Coral mining is another threat. Both small-scale harvesting by villagers and industrial-scale mining by companies are serious threats. Mining is usually done to produce construction material, which can be as much as 50% cheaper than other rock, such as quarried stone. The rocks are ground and mixed with other materials, such as cement, to make concrete. Ancient coral used for construction is known as coral rag. Building directly on the reef also takes its toll, altering water circulation and the tides which bring nutrients to the reef. The pressing reason for building on reefs is simply lack of space, and some areas with heavily mined coral reefs have still not been able to recover. Coral collecting is a further issue: corals deemed especially beautiful are often collected and used to make jewelry, home decorations and other items. Breaking off coral branches is unhealthy for the reefs, so tourists and those who purchase such items contribute to the ongoing devastation of coral reefs.
Boats and ships require access points into bays and islands to load and unload cargo and people. For this, parts of reefs are often chopped away to clear a path. Negative consequences can include altered water circulation and altered tidal patterns which can disrupt the reef's nutrient supply, sometimes destroying a great part of the reef. Fishing vessels and other large boats occasionally run aground on a reef, causing two types of damage. Collision damage occurs when a coral reef is crushed and split by a vessel's hull into multiple fragments. Scarring occurs when boat propellers tear off the live coral and expose the skeleton; the physical damage can be noticed as striations. Mooring also causes damage, which can be reduced by using mooring buoys; buoys can attach to the seafloor with concrete blocks as weights or by penetrating the seafloor, which further reduces damage. Reef docks can also be used to transfer goods from large, seagoing vessels to small, flat-bottomed vessels.
Coral in Taiwan is being threatened by human population growth. Since 2007, several local environmental groups have conducted research and found that many coral populations are being affected by untreated sewage and by an influx of tourists who take corals as souvenirs without fully understanding the destructive impact on the reef ecosystem. Researchers reported to the Taiwanese government that many coral populations off the southeast coast of Taiwan have turned black. Potentially, this could lead to loss of food supply, medicinal sources and tourism due to the breakdown of the food chain.
Oil
Causes and Effects of Oil Spills
The causes of oil spills can be separated into two categories: natural and anthropogenic.

Natural causes include oil that leaks from the ocean floor into the water, erosion of the seafloor, and even climate change. About 181 million gallons seep naturally into the ocean each year, though the amount varies.
Anthropogenic causes involve human activities and account for most of the oil that enters the ocean, through drilling rigs, pipelines, refineries, and wars. Anthropogenic spills are more harmful than natural spills, leaking about 210 million gallons of petroleum each year. They also cause abrupt changes to ecosystems, with long-term effects and even longer remediation.

When oil spills occur, the effects can be felt in an area for decades and can cause massive damage to aquatic life. For aquatic plants, an oil spill can affect how much light and oxygen is available for photosynthesis.
Two other examples of the many ways oil harms wildlife are oil toxicity and fouling. Oil toxicity affects wildlife when the toxic compounds in oil enter the body, damaging internal organs and eventually causing death. Fouling occurs when oil physically coats an animal or plant.

Oil Impacts on Coral Reef Communities
Oil pollution is hazardous to living marine habitats due to its toxic constituents. Oil spills occur through natural seepage and during activities such as transportation and handling, and they harm marine and coastal wildlife. Organisms exposed to oil spills can suffer skin irritation, decreased immunity and gastrointestinal damage.

Oil floating above a coral reef has little effect on the coral below; the problem begins when the oil starts to sink to the ocean floor. Even then, the physical effect of oil-sediment particles has been found to be less harmful than direct contact between coral and toxic oil.

When oil comes into contact with corals, not only the reef system is affected but also fish, crabs and many other marine invertebrates. Just a few drops of oil can cause coral reef fish to make poor decisions, impairing their judgment in ways that can be dangerous both to the fish and to the reef where they make their home. Oil exposure can negatively affect their growth, survival and settlement behaviour, and increases predation. Larval fish exposed to oil have been found to develop heart issues and physical irregularities later in life.
Oil Impacts on Coral Life and Symbiotic Relationships
Evidence of the damaging effects of oil spills on coral reef structures can be seen at a reef site a few kilometers southwest of the Macondo well. Corals at this site, covered in crude oil chemicals and brown flocculent particles, were found dying just seven months after the Deepwater Horizon blowout.
Gorgonian octocorals (soft coral communities) are highly susceptible to damage from oil spills. This is due to the structure and function of their polyps, which are specialized for filtering tiny particles from the water.

Corals have a complex relationship with many different prokaryotic organisms, including probiotic microorganisms that protect the corals from harmful environmental pollutants. However, research has shown that oil spills damage these organisms and weaken their ability to protect reef structures in the presence of oil pollution.

Oil Clean-up Methods
Booms are floating barricades placed around a spreading oil slick to restrict the movement of floating oil. Booms are often used alongside skimmers, sponges and oil-absorbent ropes that collect oil from the water. In-situ burning and chemical dispersion can also be used during a spill. In-situ burning refers to burning oil that has been gathered in one location within a fire-resistant containment boom; however, the combustion does not fully remove the oil but instead breaks it down into different chemicals, which can negatively affect marine reefs.

Chemical dispersants consist of emulsifiers and solvents that break oil into small droplets and are the most common form of oil removal; however, they can reduce corals' resilience to environmental stressors and can physically harm coral species on exposure. Dispersants harm early life stages of coral and reduce settlement on reef systems, and most have since been banned, although one formulation, the Corexit 9427, is still in use.

Microbial biosurfactants can be used as an eco-friendly way to reduce damage to reef ecosystems, though their effect has limitations. This method is still being studied and is not yet an established clean-up technique.
Threatened species
The global standard for recording threatened marine species is the IUCN Red List of Threatened Species, which is the foundation for marine conservation priorities worldwide. A species is listed in the threatened category if it is considered critically endangered, endangered, or vulnerable; other categories are near threatened and data deficient. By 2008, the IUCN had assessed all 845 known reef-building coral species, marking 27% as threatened, 20% as near threatened, and 17% as data deficient.

The Coral Triangle (Indo-Malay-Philippine archipelago) region has both the highest number of reef-building coral species in the threatened category and the highest coral species diversity. The loss of coral reef ecosystems will have devastating effects on many marine species, as well as on people who depend on reef resources for their livelihoods.
Issues by region
Australia
The Great Barrier Reef is the world's largest coral reef system. The reef is located in the Coral Sea and a large part of the reef is protected by the Great Barrier Reef Marine Park. Particular environmental pressures include surface runoff, salinity fluctuations, climate change, cyclic crown-of-thorns outbreaks, overfishing, and spills or improper ballast discharge. According to the 2014 report of the Government of Australia's Great Barrier Reef Marine Park Authority (GBRMPA), climate change is the most significant environmental threat to the Great Barrier Reef. As of 2018, 50% of the coral on the Great Barrier Reef has been lost.
Southeast Asia
Southeast Asian coral reefs are at risk from damaging fishing practices (such as cyanide and blast fishing), overfishing, sedimentation, pollution and bleaching. Activities including education, regulation and the establishment of marine protected areas help protect these reefs.
Indonesia
Indonesia is home to one-third of the world's coral reefs, with coral covering nearly 85,000 square kilometres (33,000 sq mi) and hosting one-quarter of the world's fish species. Indonesia's coral reefs lie at the heart of the Coral Triangle and have fallen victim to destructive fishing, tourism and bleaching. Data from LIPI in 1998 found that only 7 percent was in excellent condition, 24 percent in good condition and approximately 69 percent in poor-to-fair condition. According to one source, Indonesia will lose 70 percent of its coral reefs by 2050 if restoration action does not occur.
Philippines
In 2007, Reef Check, the world's largest reef conservation organization, stated that only 5% of the Philippines' 27,000 square kilometres (10,000 sq mi) of coral reef were in "excellent condition": Tubbataha Reef Marine Park in Palawan, Apo Island in Negros Oriental, Apo Reef in Puerto Galera, Mindoro, and the Verde Island Passage off Batangas. The Philippines' coral reef area is Asia's second largest.
Taiwan
Coral reefs in Taiwan are being threatened by human population growth. Many corals are affected by untreated sewage and by souvenir-hunting tourists who do not realize that the practice destroys habitat and spreads disease. Many corals off Taiwan's southeast coast have turned black from disease.
Caribbean
Coral disease was first recognized as a threat to Caribbean reefs in 1972, when black band disease was discovered. Since then diseases have occurred with increasing frequency. An estimated 50% of Caribbean coral cover has disappeared since the 1960s. According to a United Nations Environment Programme report, Caribbean coral reefs might face extirpation within the next 20 years due to population expansion along the coastlines, overfishing, pollution of coastal areas, global warming, and invasive species. In 2005, the Caribbean lost about 50% of its reef in one year due to coral bleaching, caused by warm water that travelled south from Puerto Rico and the Virgin Islands.
Jamaica
Jamaica is the third largest Caribbean island. The Caribbean's coral reefs will cease to exist within 20 years if a conservation effort is not made. In 2005, 34 percent of Jamaica's coral reefs were bleached due to rising sea temperatures. Jamaica's coral reefs are also threatened by overfishing, pollution, natural disasters, and reef mining. In 2009, researchers concluded that many of the corals were recovering, though very slowly.
United States
Southeastern Florida's reef tract is 300 miles long. Florida's coral reefs are currently undergoing an unprecedented outbreak of stony coral tissue loss disease, which covers a large geographic range and affects many species of coral. In January 2019, science divers confirmed that the outbreak extends south and west of Key West. In December 2018, disease was spotted at Maryland Shoals, near the Saddlebunch Keys, and by mid-January five more sites between American Shoal and Eastern Dry Rocks were confirmed diseased.

Puerto Rico is home to over 5,000 square kilometers of shallow coral reef ecosystems; its coral reefs and associated ecosystems have an average economic value of nearly $1.1 billion per year. The U.S. Virgin Islands' coral reefs and associated ecosystems have an average economic value of $187 million per year.
Pacific
United States
Hawaii's coral reefs (e.g. French Frigate Shoals) are a major factor in Hawaii's $800-million-a-year marine tourism industry and are being negatively affected by coral bleaching and increased sea surface temperatures, which in turn lead to coral reef diseases. The first large-scale coral bleaching occurred in 1996, and in 2004 it was found that sea surface temperatures had been steadily increasing; if this pattern continues, bleaching events will occur more frequently and severely.
See also
Marine cloud brightening
Further reading
Barber, Charles V. and Vaughan R. Pratt. 1998. Poison and Profit: Cyanide Fishing in the Indo-Pacific. Environment, Heldref Publications.
Martin, Glen. 2002. "The depths of destruction Dynamite fishing ravages Philippines' precious coral reefs". San Francisco Chronicle, 30 May 2002
Rinkevich, Baruch (1 April 2014). "Rebuilding coral reefs: does active reef restoration lead to sustainable reefs?". Current Opinion in Environmental Sustainability. 7: 28–36. Bibcode:2014COES....7...28R. doi:10.1016/j.cosust.2013.11.018.
External links
Hoegh-Guldberg, Ove (1999). "Climate Change: Coral Reefs on the Edge". Global Change Institute, University of Queensland. Archived from the original on 2010-06-14.
NOAA Report: The State of Coral Reef Ecosystems of the United States and Pacific Freely Associated States: 2008
A special report on the plight of the planet's coral reefs—and how you can help—from Mother Jones magazine
wildlife observation | Wildlife observation is the practice of noting the occurrence or abundance of animal species at a specific location and time, either for research purposes or recreation. Common examples of this type of activity are bird watching and whale watching.
The process of scientific wildlife observation includes reporting what (diagnosis of the species), where (geographical location), when (date and time), who (details about the observer), and why (reason for the observation, or explanations for the occurrence). Wildlife observation can be performed on live animals, most notably through face-to-face observation and live cameras, or on dead ones, the primary example being reports of where roadkill has occurred. This outlines the basic information needed to collect data for a wildlife observation, which can also contribute to scientific investigations of distribution, habitat relations, trends, and movement of wildlife species.
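The what/where/when/who/why fields above map naturally onto a simple record structure. A minimal sketch in Python (the class and field names are illustrative, not a standard schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WildlifeObservation:
    species: str           # what: diagnosis of the species
    latitude: float        # where: geographical location
    longitude: float
    observed_at: datetime  # when: date and time
    observer: str          # who: details about the observer
    reason: str = ""       # why: reason for, or explanation of, the occurrence

# Example record for a single sighting.
obs = WildlifeObservation(
    species="Haliaeetus leucocephalus",
    latitude=38.89, longitude=-77.04,
    observed_at=datetime(2023, 5, 14, 7, 30),
    observer="volunteer-042",
    reason="routine survey",
)
```

A collection of such records is enough to support the distribution, trend, and movement analyses mentioned above.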
Wildlife observation allows for the study of organisms with minimal disturbance to their ecosystem depending on the type of method or equipment used. The use of equipment such as unmanned aerial vehicles (UAVs), more commonly known as drones, may disturb and cause negative impacts on wildlife. Specialized equipment can be used to collect more accurate data.
History
Wildlife observation is believed to trace its origins to the rule of Charles II of England, when it was first instituted in 1675 at the Royal Observatory in present-day Greenwich, part of London. In modern times, it has been practiced as the monitoring of wildlife species in areas of vast wilderness.
Importance
Through wildlife observation, many important details can be discovered about the environment. For instance, if a fisher in Taiwan notices that a species of fish they frequently catch is becoming rarer and rarer, there might be a substantial issue in the waters where they fish. It could be that a new predator has changed the food chain, that there is a source of pollution, or perhaps an even larger problem. Regardless of the reason, observing animals can help identify potential issues before they become severe problems.
Additionally, those who observe animals are also actively participating in the conservation of animal life. The two often go hand-in-hand: through observation, individuals discover what issues animals around the world currently face and whether there are ways to counter them. With more observation, there is a better chance of preventing species from becoming extinct.
Research
Before getting started observing wildlife and helping the environment, it is important to research the animal one chooses to observe. Skipping this crucial step would make it difficult to determine whether anything is out of the ordinary. Before observing, it is wise to find out basic information about the animal, such as:
What the animal eats - Is the animal a carnivore, herbivore, or omnivore?
What animals prey on the animal?
Where does the animal live?
Is the animal dangerous?
Is it endangered?
Does the animal travel in packs or alone?
What are the animal's sleeping habits?
Projects and programs devoted to wildlife observation
Projects
There are a variety of projects and websites devoted to wildlife observations. One of the most common types of project is for bird observations (for example, eBird). For those who enjoy bird watching, there are several ways to contribute to this type of wildlife observation: the National Wildlife Refuge System has volunteer opportunities and citizen science projects, and those limited in time can purchase a Federal Duck Stamp, which donates money to the wildlife refuge lands. In the past few years, websites dedicated to reporting wildlife across broad taxonomic ranges have become available. For example, the California Roadkill Observation System provides a mechanism for citizen-scientists in California to report wildlife species killed by vehicles, and the Maine Audubon Wildlife Road Watch allows reporting of observations of both dead and live animals along roads. A more recent addition to wildlife observation tools are websites that facilitate uploading and management of images from remote wildlife cameras. For example, the Smithsonian Institution supports the eMammal and Smithsonian Wild programs, which provide a mechanism for volunteer deployment of wildlife cameras around the world. Similarly, the Wildlife Observer Network hosts over a dozen wildlife-camera projects from around the world, providing tools and a database to manage photographs and camera networks.
Monitoring programs
Monitoring programs for wildlife provide new and easier ways for citizen scientists and research scientists alike to monitor animal species. One such monitoring device is the automated recorder. Automated recorders are a reliable way to monitor species such as birds, bats, and amphibians, as they provide the ability to save and independently identify specific animal calls. The recorder analyzes the sounds to identify which species are present and how many individuals there are. Automated recorders have been found to produce more, and higher-quality, data than traditional point-count recording, while also providing a permanent record of the census which can be continually reviewed for potential bias. This monitoring device can improve wildlife observation and potentially save more animals by allowing continued tracking of populations, continued censusing of individuals within a species, and faster population size estimates.
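The tallying step such a recorder performs can be illustrated with a short sketch: given a list of classified calls (timestamp, species), count the detections per species. The detection data here are invented for illustration.

```python
from collections import Counter

# Hypothetical output of an automated recorder's call classifier.
detections = [
    ("2023-06-01 04:12", "Wood Thrush"),
    ("2023-06-01 04:15", "Wood Thrush"),
    ("2023-06-01 04:20", "Spring Peeper"),
    ("2023-06-01 04:31", "Wood Thrush"),
]

# Tally how many calls were detected for each species.
counts = Counter(species for _, species in detections)
print(counts)  # Counter({'Wood Thrush': 3, 'Spring Peeper': 1})
```

Because every call is stored with its timestamp, the same record can be re-counted later, which is what makes the permanent, reviewable census possible.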
Live watching and observation
Birdwatching
One of the most popular forms of wildlife observation, birdwatching, is typically performed as a recreational pleasure. Those looking to birdwatch typically travel into a forest or other wooded area with a pair of binoculars in hand to aid the process. Birdwatching has become all the more important with the amount of deforestation that has been occurring in the world. Birds are arguably the most important factor in the balance of environmental systems: "They pollinate plants, disperse seeds, scavenge carcasses and recycle nutrients back into the earth." A decrease in the total number of birds would cause destruction to much of the environmental system. The plants and trees around the world would die at an alarming rate which would, in turn, set off a chain reaction that would cause many other animals to die due to the environment change and habitat loss.
One way birdwatching affects the environment as a whole is that, through consistent birdwatching, an observer can notice when they are seeing less of a certain species of bird. When this happens there is typically a reason, whether an increase in pollution in the area or an increase in the population of predators. A watcher who notices a change in what they typically see can notify the city or park and allow officials to investigate the cause further. Through this action, birdwatchers help preserve the future for both animal and human life.
Subsequently, taking children birdwatching allows the next generation to understand the importance of animal observation. If children learn at a young age how the environmental system works and that all life is intertwined, the world will be in much better hands. These children will be the ones to pioneer conservation movements and attempt to protect habitats for all animals.
Livestreams
Live streams of animal exhibits at various zoos and aquariums across the United States have also become extremely popular. The Tennessee Aquarium has a webcam that lets online viewers look into the happenings of its Secret Reef exhibit, which features reef fish, sharks, and a rescued green sea turtle. Perhaps the most popular animal cams in the United States come, naturally, from the largest zoo in the country: the San Diego Zoo, which features eight live cams on its website: a panda, elephant, ape, penguin, polar bear, tiger, condor, and koala. The purpose of the live streams is to help educate the public about the behaviors of several different animals and to entertain those who might not be able to travel to a zoo. Other notable zoos with webcams include the National Zoo, Woodland Park Zoo, Houston Zoo, and Atlanta Zoo.
Additionally, the Smithsonian National Museum of Natural History has opened a butterfly and plant observation pavilion. Visitors walk into a large tent and experience a one-of-a-kind situation in which hundreds of rare butterflies from all across the world are inches from their faces.
Collecting data
As with most subjects, one of the most effective ways to observe live animals is through data collection. This can be done through a livestream or in the field, though it is most useful when the data are collected on animals currently in the wild. The ways data can be collected are endless; which data are most useful depends on the purpose of the observation.
For example, if someone is interested in how deer interact with other animals in a certain location, it would be beneficial for them to take notes and record all of the animals that are native to the area where the deer are located. From there, they can describe any scenarios in which the deer had a positive or negative interaction with the other species of animals. In this instance, it would not really be helpful for the observer to collect data pertaining to the types of food the deer eat because the study is only focusing on the interaction amongst animals.
Another example of useful wildlife data collection is keeping track of the total number of individuals of a certain species within a forest. Naturally, a definitive count is impossible, but an accurate approximation can help determine whether there has been a sudden increase or decrease in the population. An increase could be due to a change in the species' migration habits, while a decrease could be due to an external factor such as pollution or the introduction of a new predator.
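One standard way to turn field counts into such an approximation, not described in the text above but widely used by ecologists, is the Lincoln–Petersen mark–recapture estimator: mark M animals, later capture C animals of which R carry marks, and estimate the population as N ≈ M·C/R. A minimal sketch:

```python
def lincoln_petersen(marked: int, captured: int, recaptured: int) -> float:
    """Estimate population size as N = (M * C) / R from a mark-recapture survey."""
    if recaptured == 0:
        raise ValueError("need at least one recaptured animal to estimate N")
    return marked * captured / recaptured

# Example: 50 deer are marked; a later survey captures 40 deer, 10 of them marked.
print(lincoln_petersen(50, 40, 10))  # 200.0 -> roughly 200 deer in the forest
```

The estimate assumes a closed population and equal catchability, so it yields exactly the kind of rough approximation described above rather than a definitive count.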
Deceased wildlife observation
Online systems and mobile apps
Many states have begun to set up websites and systems for the public, with the main purpose of notifying other individuals about road-killed wildlife. If enough people fill out the forms on these websites, the government is notified that animal lives have been lost there and can take steps to prevent it. Typically, the step taken is posting a wildlife crossing sign, which in turn lets the public know where animals commonly cross. Maine and California have been the pioneers of this movement, and the process has become particularly important on heavily traveled roads, as no one would like to endanger the animals or themselves.

Currently, there is an app (available on both iPhone and Android devices) made specifically for identifying road-kill, called "Mobile Mapper." The app is a partner of the HerpMapper website, which uses the user-recorded observations for research and conservation purposes.

On average, repairing a car damaged by a deer or other medium-to-large animal costs $2,000. Although accidents involving animals cannot be completely prevented, placing more signs about animal crossing zones would cause drivers to drive more carefully and therefore have fewer accidents. Economically, this means that more families would save money that could be used in other ways to contribute to society as a whole.
Issues leading to the extinction of animals
Climate change
Climate change is one of the most heavily discussed topics around the world today, both politically and scientifically. The climate that Earth is currently experiencing has been steadily changing over time due to both natural causes and human exploitation. Climate change has the potential to be detrimental to wildlife across the world, whether that be through rising sea levels, changes in temperatures through the years, or deforestation. These are just a few of the examples of the contributing factors to climate change.
Climate change is not something citizens can entirely prevent, even if they wanted to. Natural causes such as volcanic activity and variations in the Earth's orbit around the Sun are strong contributing factors to the phenomenon. There are, however, measures that can slow climate change. The primary one is for society to reduce the amount of greenhouse gases in the atmosphere, which can be done by improving energy efficiency in buildings, stopping deforestation so more carbon dioxide can be removed from the atmosphere, and mode switching.
Rising sea levels
One of the more notable effects climate change has on the environment is the rising of sea levels around the world. Over the past 100 years, the sea level has risen approximately 1.8 millimeters each year. The steady rise in sea levels can be attributed to the steadily increasing temperatures the Earth faces each year which causes the ice caps and glaciers to melt. This increase in sea level is detrimental to the coastal ecosystems that exist around the world.
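At the quoted rate, a simple linear extrapolation gives the cumulative rise over a chosen horizon. This is deliberately simplified: the modern rate is higher and accelerating, so the sketch only illustrates the arithmetic.

```python
RATE_MM_PER_YEAR = 1.8  # approximate historical average from the text

def projected_rise_mm(years: float, rate: float = RATE_MM_PER_YEAR) -> float:
    """Cumulative sea-level rise in millimeters, assuming a constant annual rate."""
    return years * rate

# Roughly 180 mm (18 cm) over a century at the historical rate.
print(projected_rise_mm(100))
```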
The increase in sea level causes flooding of coastal wetlands, where certain animals cannot survive saltwater inundation. The increase in the total amount of saltwater present in these wetlands could prove problematic for many species: while some may simply migrate to other areas, smaller ecosystems within the wetlands could be destroyed, which once again disrupts the animal food chain.

Polar bears are specifically affected by rising sea levels. Living near the Arctic region, polar bears find their food on ice caps and sheets of ice. As these sheets continue to become fewer, it is predicted that polar bears will have a difficult time sustaining life and that by the year 2050, there could be fewer than 20,000 on Earth.
Coral reefs are the primary ecosystem that would be affected through a continuing increase in the sea level: "The coral reef ecosystem is adapted to thrive within certain temperature and sea level range. Corals live in a symbiotic relationship with photosynthetic zooxanthellae. Zooxanthellae need the sunlight in order to produce the nutrients necessary for the coral. Sea level rise may cause a decrease in solar radiation at the sea surface level, affecting the ability of photosynthetic zooxanthellae to produce nutrients for the coral, whereas, a sudden exposure of the coral reef to the atmosphere due to a low tide event may induce coral bleaching."
The loss of coral would have a subsequent effect on the total number of fish in these ecosystems. In the Indo-Pacific coral reefs alone, between 4,000 and 5,000 different species of fish have a relationship with coral species. Specifically, the many species of butterflyfish that feed on coral within these reefs would be affected if the coral could not survive an increase in sea level. Returning to the food chain, this would in turn affect the species of snappers, eels, and sharks that use butterflyfish as a primary food source: if snappers cannot find any butterflyfish to eat because the butterflyfish are dying from the lack of coral, the snapper population will decrease as well.
Rising sea levels could thus be catastrophic for coastal ecosystems.
Pollution
Pollution is another crucial threat to animal life, and human life, across the world. Every form of pollution affects wildlife, whether through the air, water, or ground. While sometimes the origin and form of pollution are visible and easy to determine, at other times the cause of animal deaths can be a mystery. Through constant and consistent observation and habitat analysis, humans can help prevent the loss of animal life by recognizing the early signs of pollution before the problem grows too large.
Ocean and water pollution
Pollution can enter bodies of water in many ways: through toxic runoff from pesticides and fertilizers, containers of oil and other hazardous materials falling off ships, or simply debris left behind by humans. Whatever its form, the effects of water pollution on animal life can be drastic. For example, the BP oil spill of 2010 affected over 82,000 birds, 6,000 sea turtles, approximately 26,000 marine animals, and hundreds of thousands of fish.
While this spill was exceptional in scale, it illustrates how crucial observation can be to animal life. By observing that a certain species of sea turtle was affected by the spill, for example, zoologists and their teams could determine the effects that the loss of that sea turtle would have.
A smaller example: a fisherman visits a lake he frequents and notices two or three dead fish on the surface. Knowing this does not usually happen, he reports the occurrence to local city officials and park rangers, who discover that a farmer has been using a new pesticide that runs off into the lake. By simply observing what is common and what is not, the effects of some water pollution can be stopped before they become too severe.
Air pollution
Air pollution is commonly associated with the image of billowing clouds of smoke rising from a large factory. While such fumes and smoke are indeed a prominent form of air pollution, they are not the only one: air pollution also comes from vehicle emissions, smoking, and other sources. Nor does air pollution affect only birds, as one might assume; it affects mammals, birds, reptiles, and any other organism that requires oxygen to live. When air pollution is highly dangerous, the observation process is often simple: there will be an abundance of dead animals in the vicinity of the pollution.
The primary concern with air pollution is how widespread it can become in a short period of time. Acid rain is one of the largest forms of pollution today. The issue with acid rain is that it affects every living organism it comes into contact with, whether trees in a forest, water in an ocean or lake, or the skin of humans and animals. Typically, acid rain results from sulfur dioxide and nitrogen oxides emitted by factories. If it is not controlled in a timely manner, its dangerous composition can lead to loss of life.
Deforestation
Deforestation has become one of the most prevalent environmental issues. As the human population continues to grow and space runs short, forests are frequently the first areas cleared to make more room. According to National Geographic, forests still cover approximately 30 percent of the land on Earth, but large portions are cleared each year.
Deforestation has numerous side effects. Most notably, the clearing of an entire forest, as sometimes happens, destroys the habitat of hundreds of animal species, and 70 percent of the animals residing in the forest will die as a result. Additionally, deforestation reduces total canopy cover, which leads to more extreme temperature swings at ground level because there are no branches and leaves to intercept the sun's rays.
The surest way to prevent these losses of animal life would be to stop cutting down trees and forests altogether. Since that is unlikely, there is another solution: the partial removal of forests. Clearing only portions of a forest keeps the environment of the forest as a whole intact, allowing the animals to adapt to their surroundings. Additionally, it is recommended that for every tree cut down, another be planted elsewhere in the forest.
Economic effects of animal observation
Costs of observation
Typically, the costs of animal observation are minuscule. As previously stated, animal observation can be done on a small or large scale, depending on the observer's goal. For example, it can be performed in the backyard of a house or at a local state park at no charge; all that is needed is a notepad, phone, or other device for recording data and observations. On a larger scale, animal observation could be performed at an animal reserve, where the costs are those of keeping the animals happy inside the reserve.
While it is impossible to pinpoint exactly how much the zoos across the world spend on live streaming, it is estimated to be in the $1,000 range for every camera that is set up.
Costs that observation prevents
Referring back to the example from the "Deceased Wildlife Observation" section, it becomes apparent how animal observation can save families and the government money. With the average cost of repairing a car damaged by a large animal being $2,000, families and the government could save money by making the public aware that they should proceed with caution in areas where animals have been hit.
Additionally, of the $4.3 billion spent each year on water purity, approximately $44 million goes toward protecting aquatic species from nutrient pollution. While it is encouraging that the government is willing to spend money to help save animals' lives, the effects of pollution sometimes take hold before they can be stopped entirely. An estimated one million seabirds and one hundred thousand aquatic mammals and fish are killed by water pollution each year, which has economic effects both direct and indirect.
Directly, the loss of aquatic mammals and fish reduces food sales; the EPA recently estimated that the effects of pollution cost the fishing industry tens of millions of dollars in sales. Indirectly, the loss of birds causes humans to spend more on pest control because the food chain is disrupted. The small rodents and insects that some birds prey upon are no longer being killed, so more of these pests find their way into homes, prompting more calls to exterminators and setting off a chain reaction: the exterminators must then use insecticides, whose runoff can harm the ground and local water systems, instead of the pests being controlled naturally by the animal food chain.
See also
Animal migration tracking
Human bycatch
Wildlife photography
References
External links
Media related to Wildlife observation at Wikimedia Commons |
termination shock (novel) | Termination Shock is a science fiction novel by American writer Neal Stephenson, published in 2021. The book is set in a near future in which climate change has significantly altered human society, and follows an attempt at solar geoengineering. The novel focuses on the geopolitical and social consequences of this rogue fix for climate change, themes common in the growing climate fiction genre.
Plot
The book is about a solar geoengineering project conceived by a Texas oil-industry billionaire named T.R. Schmidt. Schmidt builds a launcher on the Texas-Mexico border to fire sulfur into the air, a form of stratospheric aerosol injection intended to cool the planet by reflecting sunlight back into space. This technique replicates the effects of volcanic eruptions that inject sulfates into the atmosphere and produce global cooling, such as the 1991 Mount Pinatubo eruption. Schmidt's plan has uneven effects, helping low-lying areas such as the Netherlands, Venice, and the Maldives, but threatening the Punjab with drought.
The main characters are Frederika Mathilde Louisa Saskia, the Queen of the Netherlands and granddaughter of Queen Beatrix; Rufus Grant, a part-Comanche exterminator of feral hogs; and Deep "Laks" Singh, a Punjabi-Canadian Sikh. Saskia and Grant become entangled in Schmidt's plan, while Singh travels to the Line of Actual Control on the China-India border, where Chinese and Indian volunteers fight each other using non-lethal martial arts. Singh becomes a world-famous hero after several dramatic and well-promoted victories, but is felled by Chinese directed-energy weapons.
Meanwhile, the Chinese government observes Schmidt's geoengineering efforts and engages in advanced psychological warfare, cyberwarfare, and deadly tsunami bombs to shift European governments towards a pro-geoengineering stance. This also results in Saskia abdicating her throne in favor of her daughter, after which she joins the growing consortium of smaller pro-geoengineering nations as the "Queen of the Netherworld."
In the climax of the book, Singh is sent with a team of drones on a covert mission by India, whose monsoons were delayed by Schmidt's geoengineering campaign, to destroy the Texas launcher. He is thwarted by Rufus, who takes action largely to protect Saskia, who is hiding in the launcher's subterranean shaft.
The book's title refers to the idea that once a solar geoengineering scheme begins, abruptly stopping it would result in rapid warming, called a termination shock.
Reception
Omar El Akkad, reviewing Termination Shock for The New York Times, wrote that it was "...at once wildly imaginative and grounded...both a response to a deeply broken reality, and an attempt to alter it." Reason magazine noted that the book concerns many small-scale adaptations to climate change, and interpreted it as "...a novelistic attempt to break down the challenges of climate change and address them clearly and concretely rather than as a mass of unsolvable civilizational mega-challenges." The Chicago Review of Books found it to be a "compelling read" but "only obliquely an example of Stephenson’s great gifts for speculative fiction", while Publishers Weekly called it "fiercely intelligent, weird, darkly witty, and boldly speculative." The Sunday Times reviewed it negatively, saying that "Stephenson’s vision of our climate future doesn’t rise above the level of slightly smug, nerdy fun."
Francis Fukuyama compared it favorably to The Ministry for the Future, a 2020 work of climate fiction by Kim Stanley Robinson.
References
External links
Termination Shock Bibliography |