In United States history, the Redeemers were a white political coalition in the Southern United States during the Reconstruction era that followed the Civil War. Redeemers were the southern wing of the Bourbon Democrats, the conservative, pro-business faction in the Democratic Party, who pursued a policy of Redemption, seeking to oust the Radical Republican coalition of freedmen, "carpetbaggers", and "scalawags". They generally were led by the rich landowners, businessmen and professionals, and dominated Southern politics in most areas from the 1870s to 1910.
During Reconstruction, the South was under occupation by federal forces and Southern state governments were dominated by Republicans. Republicans nationally pressed for the granting of political rights to the newly freed slaves as the key to their becoming full citizens. The Thirteenth Amendment (banning slavery), Fourteenth Amendment (guaranteeing the civil rights of former slaves and ensuring equal protection of the laws), and Fifteenth Amendment (prohibiting the denial of the right to vote on grounds of race, color, or previous condition of servitude) enshrined such political rights in the Constitution.
Numerous educated blacks moved to the South to work for Reconstruction, and some blacks attained positions of political power under these conditions. However, the Reconstruction governments were unpopular with many white Southerners, who were not willing to accept defeat and continued to try to prevent black political activity by any means. While the elite planter class often supported insurgencies, violence against freedmen and other Republicans was often carried out by other whites; insurgency took the form of the secret Ku Klux Klan in the first years after the war.
In the 1870s, secret paramilitary organizations, such as the White League in Louisiana and the Red Shirts in Mississippi and North Carolina, suppressed the opposition, using violence and threats to undermine the Republican vote. By the presidential election of 1876, only three Southern states – Louisiana, South Carolina, and Florida – were "unredeemed", or not yet taken over by white Democrats. The disputed presidential election between Rutherford B. Hayes (the Republican governor of Ohio) and Samuel J. Tilden (the Democratic governor of New York) was allegedly resolved by the Compromise of 1877, also known as the Corrupt Bargain or the Bargain of 1877. In this compromise, it was claimed, Hayes became president in exchange for numerous favors to the South, one of which was the removal of federal troops from the remaining "unredeemed" Southern states; this was, however, a policy Hayes had endorsed during his campaign. With the removal of these forces, Reconstruction came to an end.
In the 1870s, southern Democrats began to muster more political power as former Confederates began to vote again. The movement gathered energy up until the Compromise of 1877, in a process known as Redemption. White Democratic Southerners saw themselves as redeeming the South by regaining power. They appealed to scalawags (white Southerners who had supported the Republican Party after the Civil War and during Reconstruction).
More importantly, a second wave of violence spread across the Deep South following the suppression of the Ku Klux Klan. In 1868 white terrorists tried to prevent Republicans from winning the fall election in Louisiana. Over a few days, they killed some two hundred freedmen in St. Landry Parish. Other violence erupted: from April to October, there were 1,081 political murders in Louisiana, most of the victims freedmen. Violence was part of campaigns prior to the election of 1872 in several states. In 1874 and 1875, more formal paramilitary groups affiliated with the Democratic Party, including the White League and Red Shirts, conducted intimidation, terrorism and violence against black voters and their allies to reduce Republican voting and turn officeholders out. They worked openly for specific political ends, and often solicited coverage of their activities by the press. Every election from 1868 on was surrounded by intimidation and violence, and usually marked by fraud as well.
In the aftermath of the disputed gubernatorial election of 1872 in Louisiana, for instance, the competing governors each certified slates of local officers. This situation contributed to the Colfax Massacre of 1873, in which white Democratic militia killed more than 100 Republican blacks in a confrontation over control of parish offices. Three whites died in the violence.
In 1874 remnants of white militias formed the White League, a Democratic paramilitary group started first in Grant Parish in the Red River area of Louisiana, with chapters arising across the state, especially in rural areas. In August the White League turned out six Republican officeholders in Coushatta, Louisiana, and told them to leave the state. Before they could make their way out, they and five to twenty black witnesses were assassinated. In September, thousands of armed white militia, supporters of the Democratic gubernatorial candidate John McEnery, fought against New Orleans police and state militia in what was called the "Battle of Liberty Place". They took over the state government offices in New Orleans and occupied the capitol and armory. They turned Republican governor William Pitt Kellogg out of office, and retreated only in the face of the arrival of federal troops sent by President Ulysses S. Grant.
Similarly, in Mississippi, the Red Shirts formed as a prominent paramilitary group that enforced Democratic voting by intimidation and murder. Chapters of paramilitary Red Shirts arose and were active in North Carolina and South Carolina as well. They disrupted Republican meetings, killed leaders and officeholders, intimidated voters at the polls, or kept them away altogether.
The Redeemers' program emphasized opposition to the Republican governments, which they considered to be corrupt and a violation of true republican principles. They also worked to reestablish white supremacy. The crippling national economic problems and reliance on cotton meant that the South was struggling financially. Redeemers denounced taxes higher than what they had known before the war. At that time, however, the states had few functions, and planters maintained private institutions only. Redeemers wanted to reduce state debts. Once in power, they typically cut government spending; shortened legislative sessions; lowered politicians' salaries; scaled back public aid to railroads and corporations; and reduced support for the new systems of public education and some welfare institutions.
As Democrats took over state legislatures, they worked to change voter registration rules to strip most blacks and many poor whites of their ability to vote. Blacks continued to vote in significant numbers well into the 1880s, with many winning local offices. Black Congressmen continued to be elected, albeit in ever smaller numbers, until the 1890s. George Henry White, the last Southern black of the post-Reconstruction period to serve in Congress, retired in 1901, leaving Congress completely white.
In the 1890s, the Democrats faced challenges from the Agrarian Revolt, when their control of the South was threatened by the Farmers' Alliance, the effects of bimetallism and the newly created People's Party. On the national level, William Jennings Bryan defeated the Bourbons and took control of the Democratic Party.
Democrats worked hard to prevent such populist coalitions. In the former Confederate South, from 1890 to 1908, starting with Mississippi, legislatures of ten of the eleven states passed disfranchising constitutions, which had new provisions for poll taxes, literacy tests, residency requirements and other devices that effectively disfranchised nearly all blacks and tens of thousands of poor whites. Hundreds of thousands of people were removed from voter registration rolls soon after these provisions were implemented.
In Alabama, for instance, fourteen Black Belt counties had 79,311 voters on the rolls in 1900; by June 1, 1903, after the new constitution was passed, registration had dropped to just 1,081. Statewide, Alabama in 1900 had 181,315 blacks eligible to vote; by 1903 only 2,980 were registered, although at least 74,000 were literate. From 1900 to 1903, the number of white registered voters also fell by more than 40,000, even though the white population was growing. By 1941, more poor whites than blacks had been disfranchised in Alabama, mostly due to the effects of the cumulative poll tax: an estimated 600,000 whites and 500,000 blacks had been disfranchised.
African Americans and poor whites were shut out of the political process and disfranchised. Southern legislatures passed Jim Crow laws imposing segregation in public facilities and places. The discrimination, segregation and disfranchisement lasted well into the later decades of the 20th century. They were shut out of all offices at the local, state, as well as federal levels, as those who could not vote could not run for office or serve on juries.
Congress had actively intervened for more than 20 years in Southern elections that the House Elections Committee judged to be flawed, but after 1896 it backed off. Many Northern legislators were outraged about the disfranchisement of blacks, and some proposed reducing Southern representation in Congress. They never managed to accomplish that, as Southern representatives formed a strong, one-party voting bloc for decades. Although educated African Americans mounted legal challenges (many secretly funded by educator Booker T. Washington and his northern allies), the Supreme Court upheld Mississippi's and Alabama's provisions in its rulings in Williams v. Mississippi (1898) and Giles v. Harris (1903).
People in the movement chose the term "Redemption" from Christian theology. Historian Daniel W. Stowell concludes that white Southerners appropriated the term to describe the political transformation they desired, that is, the end of Reconstruction. This term helped unify numerous white voters, and encompassed efforts to purge southern society of its sins and to remove Republican political leaders.
It also represented the birth of a new southern society, rather than a return to its antebellum predecessor. Historian Gaines M. Foster explains how the South became known as the "Bible Belt" by connecting this characterization with changing attitudes caused by slavery's demise. Freed from preoccupation with federal intervention over slavery, and even citing it as precedent, white southerners joined northerners in the national crusade to legislate morality. Viewed by some as a "bulwark of morality", the largely Protestant South took on a Bible Belt identity long before H. L. Mencken coined the term.
The "redeemed" South
When Reconstruction died, so did all hope for national enforcement of the constitutional amendments that the U.S. Congress had passed in the wake of the Civil War. As the last federal troops left the ex-Confederacy, two old foes of American politics reappeared at the heart of the Southern polity – the twin inflammatory issues of states' rights and race. It was precisely on the ground of these two issues that the Civil War had broken out, and in 1877, sixteen years after the secession crisis, the South reaffirmed control over them.
"The slave went free; stood a brief moment in the sun; then moved back again toward slavery", wrote W. E. B. Du Bois. The black community in the South was brought back under the yoke of the Southern Democrats, who had been politically undermined during Reconstruction. Whites in the South were committed to reestablish its own sociopolitical structure with the goal of a new social order enforcing racial subordination and labor control. While the Republicans succeeded in maintaining some power in part of the Upper South, such as Tennessee, in the Deep South there was a return to "home rule".
In the aftermath of the Compromise of 1877, Southern Democrats held the South's black community under increasingly tight control. Politically, blacks were gradually evicted from public office, as the few that remained saw the sway they held over local politics considerably decreased. Socially, the situation was worse, as the Southern Democrats tightened their grip on the labor force. Vagrancy and "anti-enticement" laws were reinstituted. It became illegal to be jobless, or to leave a job before the contract expired. Economically, the blacks were stripped of independence, as new laws gave white planters the control over credit lines and property. Effectively, the black community was placed under a three-fold subjugation that was reminiscent of slavery.
In the years immediately following Reconstruction, most blacks and former abolitionists held that Reconstruction lost the struggle for civil rights for black people because of violence against blacks and against white Republicans. Frederick Douglass and Reconstruction Congressman John R. Lynch cited the withdrawal of federal troops from the South as a primary reason for the loss of voting rights and other civil rights by African Americans after 1877.
By the turn of the 20th century, white historians, led by the Dunning School, saw Reconstruction as a failure because of its political and financial corruption, its failure to heal the hatreds of the war, and its control by self-serving northern politicians, such as the people around President Grant. Historian Claude Bowers said that the worst part of what he called "the Tragic Era" was the extension of voting rights to freedmen, a policy he claimed led to misgovernment and corruption. The freedmen, the Dunning School historians argued, were not at fault because they were manipulated by corrupt white carpetbaggers interested only in raiding the state treasury and staying in power. They agreed the South had to be "redeemed" by foes of corruption. Reconstruction, in short, violated the values of "republicanism" and they classified all Republicans as "extremists". This interpretation of events was the hallmark of the Dunning School which dominated most history textbooks from 1900 to the 1960s.
Beginning in the 1930s, historians such as C. Vann Woodward and Howard K. Beale attacked the "redemptionist" interpretation of Reconstruction, calling themselves "revisionists" and claiming that the real issues were economic: the Northern Radicals were tools of the railroads, the Republicans in the South were manipulated to do their bidding, and the Redeemers were likewise tools of the railroads and themselves corrupt.
In 1935, W. E. B. Du Bois published a Marxist analysis in his Black Reconstruction: An Essay toward a History of the Part which Black Folk Played in the Attempt to Reconstruct Democracy in America, 1860–1880. His book emphasized the role of African Americans during Reconstruction, noted their collaboration with whites, their lack of majority in most legislatures, and also the achievements of Reconstruction: establishing universal public education, improving prisons, establishing orphanages and other charitable institutions, and trying to improve state funding for the welfare of all citizens. He also noted that despite complaints, most Southern states kept the constitutions of Reconstruction for many years, some for a quarter of a century.
By the 1960s, neo-abolitionist historians led by Kenneth Stampp and Eric Foner focused on the struggle of freedmen. While acknowledging corruption in the Reconstruction era, they held that the Dunning School had over-emphasized it while ignoring the worst violations of republican principles: the denial to African Americans of their civil rights, including the right to vote.
Supreme Court challenges
Although African Americans mounted legal challenges, the U.S. Supreme Court upheld Mississippi's and Alabama's provisions in its rulings in Williams v. Mississippi (1898), Giles v. Harris (1903), and Giles v. Teasley (1904). Booker T. Washington secretly helped fund and arrange representation for such legal challenges, raising money from northern patrons who also supported Tuskegee University.
When white primaries were ruled unconstitutional by the Supreme Court in Smith v. Allwright (1944), civil rights organizations rushed to register African-American voters. By 1947 the All-Citizens Registration Committee (ACRC) of Atlanta managed to get 125,000 voters registered in Georgia, raising black participation to 18.8% of those eligible. This was a major increase from the 20,000 on the rolls who had managed to get through administrative barriers in 1940. Georgia, among other Southern states, passed new legislation (1958) to once again repress black voter registration.
It was not until African-American leaders gained passage of the Civil Rights Act of 1957, the Civil Rights Act of 1964, and the Voting Rights Act of 1965 that the American citizens who were first granted suffrage by the Fifteenth Amendment after the Civil War finally regained the ability to exercise their right to vote.
- Jim Crow laws
- Disfranchisement after the Reconstruction Era
- Phoenix Election Riot, in South Carolina
- Wes Allison, "Election 2000 much like Election 1876", St. Petersburg Times, November 17, 2000.
- Charles Lane, The Day Freedom Died, Henry Holt & Co., 2009, pp. 18–19.
- Glenn Feldman, The Disfranchisement Myth: Poor Whites and Suffrage Restriction in Alabama, Athens: University of Georgia Press, 2004, p. 136.
- "Committee at Odds on Reapportionment", The New York Times, December 21, 1900; accessed March 10, 2008.
- Richard H. Pildes, "Democracy, Anti-Democracy, and the Canon", Constitutional Commentary, Vol. 17, 2000, pp. 12 and 21, accessed March 10, 2008.
- Blum and Poole (2005).
- Eric Foner, "A Short History of Reconstruction: 1863–1877", New York: Harper & Row Publishers, 1990, p. 249
- Foner, "A Short History of Reconstruction" (1990), p. 250.
- Chandler Davidson and Bernard Grofman, Quiet Revolution in the South: The Impact of the Voting Rights Act, Princeton: Princeton University Press, 1994, p. 70.
- Ayers, Edward L. The Promise of the New South: Life after Reconstruction (1993).
- Baggett, James Alex. The Scalawags: Southern Dissenters in the Civil War and Reconstruction (2003), a statistical study of 732 Scalawags and 666 Redeemers.
- Blum, Edward J., and W. Scott Poole, eds. Vale of Tears: New Essays on Religion and Reconstruction. Mercer University Press, 2005. ISBN 0-86554-987-7.
- Du Bois, W. E. Burghardt. Black Reconstruction in America 1860–1880 (1935), explores the role of African Americans during Reconstruction
- Foner, Eric. Reconstruction: America's Unfinished Revolution, 1863–1877 (2002).
- Garner, James Wilford. Reconstruction in Mississippi (1901), a classic Dunning School text.
- Gillette, William. Retreat from Reconstruction, 1869–1879 (1979).
- Going, Allen J. "Alabama Bourbonism and Populism Revisited." Alabama Review 1983 36 (2): 83–109. ISSN 0002-4341.
- Hart, Roger L. Redeemers, Bourbons, and Populists: Tennessee, 1870–1896. LSU Press, 1975.
- Jones, Robert R. "James L. Kemper and the Virginia Redeemers Face the Race Question: A Reconsideration". Journal of Southern History, 1972 38 (3): 393–414. ISSN 0022-4642.
- King, Ronald F. "A Most Corrupt Election: Louisiana in 1876." Studies in American Political Development, 2001 15(2): 123–137. ISSN 0898-588x.
- King, Ronald F. "Counting the Votes: South Carolina's Stolen Election of 1876." Journal of Interdisciplinary History 2001 32 (2): 169–191. ISSN 0022-1953.
- Moore, James Tice. "Redeemers Reconsidered: Change and Continuity in the Democratic South, 1870–1900" in the Journal of Southern History, Vol. 44, No. 3 (August 1978), pp. 357–378.
- Moore, James Tice. "Origins of the Solid South: Redeemer Democrats and the Popular Will, 1870–1900." Southern Studies, 1983 22 (3): 285–301. ISSN 0735-8342.
- Perman, Michael. The Road to Redemption: Southern Politics, 1869-1879. Chapel Hill, North Carolina: University of North Carolina Press, 1984. ISBN 0-8078-4141-2.
- Perman, Michael. "Counter Reconstruction: The Role of Violence in Southern Redemption", in Eric Anderson and Alfred A. Moss, Jr, eds. The Facts of Reconstruction (1991) pp. 121–140.
- Pildes, Richard H. "Democracy, Anti-Democracy, and the Canon", Constitutional Commentary, 17, (2000).
- Polakoff, Keith I. The Politics of Inertia: The Election of 1876 and the End of Reconstruction (1973).
- Rabinowitz, Howard N. Race Relations in the Urban South, 1865–1890 (1977).
- Richardson, Heather Cox. The Death of Reconstruction (2001).
- Wallenstein, Peter. From Slave South to New South: Public Policy in Nineteenth-Century Georgia (1987).
- Wiggins, Sarah Woolfolk. The Scalawag in Alabama Politics, 1865–1881 (1991).
- Williamson, Edward C. Florida Politics in the Gilded Age, 1877–1893 (1976).
- Woodward, C. Vann. Origins of the New South, 1877–1913 (1951); emphasizes economic conflict between rich and poor.
- Fleming, Walter L. Documentary History of Reconstruction: Political, Military, Social, Religious, Educational, and Industrial (1906), several hundred primary documents from all viewpoints
- Hyman, Harold M., ed. The Radical Republicans and Reconstruction, 1861–1870 (1967), collection of longer speeches by Radical leaders
- Lynch, John R. The Facts of Reconstruction (1913). Online text by an African American member of the United States Congress during the Reconstruction era.
The Servicemen's Readjustment Act of 1944 (P.L. 78-346, 58 Stat. 284), known informally as the G.I. Bill, was a law that provided a range of benefits for returning World War II veterans (commonly referred to as G.I.s). Benefits included low-cost mortgages, low-interest loans to start a business, cash payments of tuition and living expenses to attend university, high school or vocational education, as well as one year of unemployment compensation. It was available to every veteran who had been on active duty during the war years for at least 120 days and had not been dishonorably discharged; combat was not required. By 1956, roughly 2.2 million veterans had used the G.I. Bill education benefits in order to attend colleges or universities, and an additional 5.6 million used these benefits for some kind of training program.
Historians and economists judge the G.I. Bill a major political and economic success—especially in contrast to the treatments of World War I veterans—and a major contribution to America's stock of human capital that sped long-term economic growth.
Canada operated a similar program for its World War II veterans, with an economic impact similar to the American case. Since the original U.S. 1944 law, the term has come to include other veteran benefit programs created to assist veterans of subsequent wars as well as peacetime service.
On June 22, 1944, President Roosevelt signed into law the Servicemen's Readjustment Act of 1944, commonly known as the G.I. Bill of Rights.
During the war, politicians wanted to avoid the postwar confusion over veterans' benefits that had become a political football in the 1920s and 1930s. President Franklin D. Roosevelt wanted a postwar assistance program to ease the transition from wartime, but he wanted it based on need, serving poor people generally rather than veterans alone. The veterans' organizations mobilized support in Congress that rejected FDR's approach and provided benefits only to veterans of military service, including men and women. Ortiz says their efforts "entrenched the VFW and the Legion as the twin pillars of the American veterans' lobby for decades."
Harry W. Colmery, a former national commander of the American Legion and former Republican National Chairman, is credited with writing the first draft of the G.I. Bill. He reportedly jotted down his ideas on stationery and a napkin at the Mayflower Hotel in Washington, D.C. U.S. Senator Ernest McFarland, D-Arizona, was actively involved in the bill's passage and is known, with Warren Atherton, as one of the "fathers of the G.I. Bill." Edith Nourse Rogers, R-Mass., who helped write and co-sponsored the legislation, may then be termed the "mother of the G.I. Bill". Like Colmery's, her contribution to writing and passing this legislation has been obscured by time.
The bill was introduced in the House on January 10, 1944, and in the Senate the following day; both chambers approved their own versions of the bill.
The bill that President Roosevelt initially proposed had a means test: only poor veterans would be aided. The G.I. Bill was created to prevent a repetition of the Bonus March of 1932, when World War I veterans had marched on Washington to demand early payment of the service bonuses they had been promised.
An important provision of the G.I. Bill was low interest, zero down payment home loans for servicemen, with more favorable terms for new construction compared to existing housing. This encouraged millions of American families to move out of urban apartments and into suburban homes.
Another provision was known as the 52–20 clause. This enabled all former servicemen to receive $20 once a week for 52 weeks a year while they were looking for work. Less than 20 percent of the money set aside for the 52–20 Club was distributed. Rather, most returning servicemen quickly found jobs or pursued higher education.
After World War II
A look at the available statistics reveals that these later bills had an important influence on the lives of returning veterans, higher education, and the economy. A greater percentage of Vietnam veterans used G.I. Bill education benefits (72 percent) than World War II veterans (51 percent) or Korean War veterans (43 percent).
Moreover, because of the ongoing military draft from 1940 to 1973, as many as one third of the population (when both veterans and their dependents are taken into account) were eligible for the expanded veterans' benefits.
The success of the 1944 G.I. Bill prompted the government to offer similar measures to later generations of veterans. The Veterans' Adjustment Act of 1952, signed into law on July 16, 1952, offered benefits to veterans of the Korean War who had served for more than 90 days and had received an "other than dishonorable discharge." Korean War veterans were not members of the "52–20 Club" like World War II vets; instead, they were entitled to unemployment compensation of $26 a week for up to 26 weeks, beginning at the end of a waiting period determined by the amount and disbursement dates of their mustering-out pay. These payments were subsidized by the federal government but administered by the various states. One improvement in the unemployment compensation for Korean War veterans was that they could receive both state and federal benefits, the federal benefits beginning once state benefits were exhausted.
One significant difference between the 1944 G.I. Bill and the 1952 Act was that tuition fees were no longer paid directly to the chosen institution of higher education. Instead, veterans received a fixed monthly sum of $110, which they used to pay for their tuition, fees, books, and living expenses. The decision to end direct tuition payments to schools came after a 1950 House select committee uncovered incidents of overcharging of tuition rates by some institutions under the original G.I. Bill in an attempt to defraud the government.
Although the monthly stipend proved sufficient for most Korean War veterans, the decision would have negative repercussions for later veterans. By the end of the program on January 31, 1965, approximately 2.4 million of 5.5 million eligible veterans had used their benefits: roughly 1.2 million for higher education, over 860,000 for other education purposes, and 318,000 for occupational training. Over 1.5 million Korean War veterans obtained home loans.
Whereas the G.I. Bills of 1944 and 1952 were given to compensate veterans for wartime service, the Veterans Readjustment Benefits Act of 1966 (P.L. 89-358) changed the nature of military service in America by extending benefits to veterans who served during times of war and peace. At first there was some opposition to the concept of a peacetime G.I. Bill. President Dwight Eisenhower had rejected such a measure in 1959 after the Bradley commission concluded that military service should be “an obligation of citizenship, not a basis for government benefits.” President Lyndon B. Johnson believed that many of his “Great Society” social programs negated the need for sweeping veterans benefits. But, prompted by unanimous support given the bill by Congress, Johnson signed it into law on March 3, 1966.
Almost immediately, critics within the veterans' community and on Capitol Hill charged that the bill did not go far enough. At first, single veterans who had served more than 180 days and had received an "other than dishonorable discharge" received only $100 a month from which they had to pay for tuition and all of their expenses. Most found this amount sufficient to pay only for books and minor fees, not enough to live on or attend college full-time. In particular, veterans of the Vietnam War disliked the fact that the bill did not provide them with the same educational opportunities as their World War II predecessors. Consequently, during the early years of the program, only about 25% of Vietnam veterans used their education benefits.
In the next decade, efforts were made to increase veterans' benefits. Congress succeeded, often in the face of fierce objections from the fiscally conservative Nixon and Ford administrations, in raising benefit levels. In 1967, a single veteran's benefits were raised to $130 a month; in 1970 they rose to $175; under the Readjustment Assistance Act of 1972 the monthly allowance rose to $220; it reached $270 in 1974, $292 in 1976, and $311 a month in 1977.
As the funding levels increased, the numbers of veterans entering higher education rose correspondingly. In 1976, ten years after the first veterans became eligible, the highest number of Vietnam-era veterans were enrolled in colleges and universities. By the end of the program, proportionally more Vietnam-era veterans (6.8 million out of 10.3 million eligible) had used their benefits for higher education than any previous generation of veterans.
The United States military moved to an all-volunteer force in 1973, and veterans continued to receive benefits, in part as an inducement to enlist, under the Veterans Educational Assistance Program (VEAP) and the Montgomery G.I. Bill (MGIB). From December 1976 through 1987, veterans received assistance under the VEAP. The VEAP departed from previous programs by requiring participants to make a contribution to their education benefits, which the Veterans Administration then matched at a rate of 2 to 1. Enlisted personnel could contribute up to $100 a month, to a maximum of $2700. Benefits could be claimed for up to 36 months.
To be eligible for VEAP, a veteran had to serve for more than 180 days and receive an “other than dishonorable discharge.” Nearly 700,000 veterans used their benefits for education and training under this program.
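As a rough illustration of the matching arithmetic described above, here is a minimal sketch in Python (assuming, as the text states, a 2-to-1 government match on the member's total contribution; the function name and structure are hypothetical):

```python
def veap_fund(contribution: float) -> float:
    """Total VEAP education pool: the member's contribution plus the
    government's 2-to-1 match, per the description above."""
    if not 0 <= contribution <= 2700:  # $2,700 program maximum
        raise ValueError("contribution outside program limits")
    return contribution * 3  # member's $1 plus the government's matched $2

# A maximum $2,700 contribution yields an $8,100 benefit pool.
print(veap_fund(2700))
```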
In 1985, a bill sponsored by Democratic Congressman Gillespie V. "Sonny" Montgomery expanded the G.I. Bill. The MGIB replaced the VEAP for those who served after July 1, 1985. This was an entirely voluntary program in which participants could choose to forfeit $100 per month from their first year of pay. In return, eligible veterans received a tuition allowance and a monthly stipend for up to 36 months of eligible training or education.
Although the G.I. Bill did not specifically advocate discrimination, it was interpreted differently for blacks than for whites. Historian Ira Katznelson argued that "the law was deliberately designed to accommodate Jim Crow". Because the programs were directed by local, white officials, many veterans did not benefit. Of the 67,000 mortgages insured by the G.I. Bill, fewer than 100 were taken out by non-whites.
By 1946, only one fifth of the 100,000 blacks who had applied for educational benefits had registered in college. Furthermore, historically black colleges and universities (HBCUs) came under increased pressure as rising enrollments and strained resources forced them to turn away an estimated 20,000 veterans. HBCUs were already the poorest colleges and, in the eyes of most whites, served only to keep blacks out of white colleges. HBCU resources were stretched even thinner when veterans' demands necessitated a shift in the curriculum away from the traditional "preach and teach" course of study the HBCUs had offered.
The United States Department of Veterans Affairs (VA), because of its strong affiliation to the all-white American Legion and VFW (Veterans of Foreign Wars), also became a formidable foe to many blacks in search of an education because it had the power to deny or grant the claims of black G.I.s. Additionally, banks and mortgage agencies refused loans to blacks, making the G.I. Bill even less effective for blacks.
Congress did not include merchant marine veterans in the original G.I. Bill, even though they are considered military personnel in times of war in accordance with the Merchant Marine Act of 1936. As President Roosevelt signed the G.I. Bill in June 1944 he said: "I trust Congress will soon provide similar opportunities to members of the merchant marine who have risked their lives time and time again during war for the welfare of their country." Now that the youngest veterans are in their 80s, there are efforts to recognize their contributions by giving some benefits to the remaining survivors. In 2007, three different bills related to this issue were introduced in Congress, one of which passed the House of Representatives only.
All veteran education programs are found in law in Title 38 of the United States Code. Each specific program is found in its own Chapter in Title 38.
Unlike scholarship programs, the MGIB requires a financial commitment from the service member. However, if the benefit is not used, the service member cannot recoup whatever money was paid into the system.
In some states, the National Guard does offer true scholarship benefits, regardless of past or current MGIB participation.
In 1984, former Mississippi Congressman Gillespie V. "Sonny" Montgomery revamped the G.I. Bill. From 1984 until 2008, this version of the law was called "The Montgomery G.I. Bill". The Montgomery GI Bill — Active Duty (MGIB) states that active duty members forfeit $100 per month for 12 months; if they use the benefits, they receive (as of 2012) $1,564 monthly as a full-time student (tiered at lower rates for less-than-full-time study) for a maximum of 36 months of education benefits. This benefit may be used for degree and certificate programs, flight training, apprenticeship/on-the-job training and correspondence courses if the veteran is enrolled full-time. Part-time veteran students receive less per month, but for a proportionately longer period: for every month the veteran receives benefits at the half-time rate, only half a month of entitlement is charged. Veterans from the reserve have different eligibility requirements and different rules on receiving benefits (see Ch. 1606, Ch. 1607 and Ch. 33). MGIB may also be used while on active duty, in which case it reimburses only the cost of tuition and fees. Each service has additional educational benefit programs for active duty members; most delay using MGIB benefits until after separation, discharge or retirement.
The "Buy-Up" option, also known as the "kicker", allows active duty members to forfeit up to $600 more toward their MGIB. For every dollar the service member contributes, the federal government contributes $8. Those who forfeit the maximum ($600) will receive, upon approval, an additional $150 per month for 36 months, or a total of $5400. This allows the veteran to receive $4,800 in additional funds ($5400 total minus the $600 contribution to receive it), but not until after leaving active duty. The additional contribution must be made while still on active duty. It is available for G.I. Bill recipients using either Ch. 30 or Ch. 1607, but cannot be extended beyond 36 months if a combination of G.I. Bill programs are used.
MGIB benefits may be used up to 10 years from the date of last discharge or release from active duty. The 10-year period can be extended by the amount of time a service member was prevented from training during that period because of a disability or because he/she was held by a foreign government or power.
The 10-year period can also be extended if one reenters active duty for 90 days or more after becoming eligible. The extension ends 10 years from the date of separation from the later period. Periods of active duty of less than 90 days qualify for extensions only if one was separated for one of the following:
- A service-connected disability
- A medical condition existing before active duty
Those eligible based on two years of active duty and four years in the Selected Reserve (also known as "call to service") have 10 years from their release from active duty, or 10 years from the completion of the four-year Selected Reserve obligation, to use MGIB benefits.
At this time, service members cannot recoup any monies paid into the MGIB program should it not be utilized.
Service members may use the G.I. Bill in conjunction with Military Tuition Assistance (MilTA) to help cover payments above the MilTA cap. Doing so reduces the total benefit available once the member leaves service.
- College, business, technical or vocational courses
- Correspondence courses
- Apprenticeship/job training
- Flight training (usually limited to 60% for Ch. 30, see Ch. 33 for more flight information)
Under this bill, benefits may be used to pursue an undergraduate or graduate degree at a college or university, a cooperative training program, or an accredited independent study program leading to a degree.
"Chapter 31" is a vocational rehabilitation program that serves eligible active duty servicemembers and veterans with service-connected disabilities. This program promotes the development of suitable, gainful employment by providing vocational and personal adjustment counseling, training assistance, a monthly subsistence allowance during active training, and employment assistance after training. Independent living services may also be provided to advance vocational potential for eventual job seekers, or to enhance the independence of eligible participants who are presently unable to work.
In order to receive an evaluation for Chapter 31 vocational rehabilitation and/or independent living services, those qualifying as a "servicemember" must have a memorandum service-connected disability rating of 20% or greater and apply for vocational rehabilitation services. Those qualifying as "veterans" must have received, or eventually receive, an honorable or other-than-dishonorable discharge, have a VA service-connected disability rating of 10% or more, and apply for services. Law provides for a 12-year basic period of eligibility in which services may be used, which begins on the latter of separation from active military duty or the date the veteran was first notified of a service-connected disability rating. In general, participants have 48 months of program entitlement to complete an individual vocational rehabilitation plan. Participants deemed to have a "serious employment handicap" will generally be granted exemption from the 12-year eligibility period and may receive additional months of entitlement as necessary to complete approved plans.
The Veterans Educational Assistance Program (VEAP) is available for those who first entered active duty between January 1, 1977, and June 30, 1985, and elected to make contributions from their military pay to participate in this education benefit program. Participants' contributions are matched on a $2 for $1 basis by the Government. This benefit may be used for degree and certificate programs, flight training, apprenticeship/on-the-job training and correspondence courses.
Chapter 33 (Post-9/11 G.I. Bill)
Congress, in the summer of 2008, approved an expansion of benefits beyond the existing G.I. Bill program for military veterans serving since September 11, 2001, originally proposed by Senator Jim Webb. Beginning in August 2009, recipients became eligible for greatly expanded benefits, including the full cost of any public college in their state. The new bill also provides a housing allowance and a $1,000-a-year stipend for books, among other benefits.
The VA announced in September 2008 that it would manage the new benefit itself instead of hiring an outside contractor, after protests by veterans' organizations and the American Federation of Government Employees. Veterans Affairs Secretary James B. Peake stated that although it was "unfortunate that we will not have the technical expertise from the private sector," the VA "can and will deliver the benefits program on time."
Pending changes to the post-9/11 G.I. Bill
In December 2010 Congress passed the Post-9/11 Veterans Education Assistance Improvements Act of 2010. The new law, often referred to as G.I. Bill 2.0, expands eligibility for members of the National Guard to include time served on Title 32 or in the full-time Active Guard and Reserve (AGR). It does not, however, cover members of the Coast Guard Reserve who have served under Title 14 orders performing duties comparable to those performed by National Guard personnel under Title 32 orders.
Among other changes, the new law revises "break pay" between enrollment periods: the housing allowance is prorated to cover only the days on which the veteran is actually enrolled. For example, if a full-time veteran's maximum BAH rate is $1,500 per month and one enrollment period ends 13 days into August while the next begins with 10 days of the month remaining, the veteran receives (13/30) × $1,500 = $650 for the end of the first period and (10/30) × $1,500 = $500 for the beginning of the second, a total of $1,150 for August instead of $1,500. This has a significant impact on December–January BAH payments, since most colleges have two- to four-week breaks.
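A minimal sketch of this proration rule, assuming (as in the example above) that months are treated as 30 days and the rate applies only to enrolled days (the function name is hypothetical):

```python
def prorated_bah(monthly_rate: float, enrolled_days: int) -> float:
    """Prorate a monthly housing allowance by days actually enrolled,
    using the 30-day month from the worked example above."""
    return monthly_rate * enrolled_days / 30

# End of the first enrollment period: 13 enrolled days.
print(prorated_bah(1500, 13))  # 650.0
# Start of the second enrollment period: 10 enrolled days.
print(prorated_bah(1500, 10))  # 500.0
# The veteran receives $1,150 for the month instead of the full $1,500.
```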
Another change enables active-duty servicemembers and their G.I. Bill-eligible spouses to receive the annual $1,000 book stipend (pro-rated for their rate of pursuit), adds several vocational, certification and OJT options, and removes the state-by-state tuition caps for veterans enrolled at publicly funded colleges and universities.
Changes to Ch. 33 also include a new $17,500 annual cap on tuition and fees coverage for veterans attending private colleges and foreign colleges and universities.
The Survivors' and Dependents' Educational Assistance Program (DEA) provides education and training opportunities to eligible dependents of veterans who are permanently and totally disabled due to a service-related condition, or who died while on active duty or as a result of a service-related condition. The program offers up to 45 months of education benefits, which may be used for degree and certificate programs, apprenticeship, and on-the-job training. Spouses may also take correspondence courses.
The Montgomery G.I. Bill — Selected Reserve (MGIB-SR) program may be available to members of the Selected Reserve, including all military branch reserve components as well as the Army National Guard and Air National Guard. This benefit may be used for degree and certificate programs, flight training, apprenticeship/on-the-job training and correspondence courses.
The Reserve Educational Assistance Program (REAP) is available to all reservists who, after September 11, 2001, complete 90 days or more of active duty service "in support of contingency operations." This benefit provides reservists returning from active duty with up to 80% of the active duty (Chapter 30) G.I. Bill benefits as long as they remain active participants in the reserves.
MGIB comparison chart
| Program | Time Limit (Eligibility) | Months of Benefits (Full Time) |
|---|---|---|
| Active Duty MGIB (Chapter 30) | 10 yrs from last discharge from active duty | 36 months |
| Active Duty Top-up (Chapter 30) | While on active duty only | 36 months |
| Post-9/11 G.I. Bill (Chapter 33) | 15 yrs from last discharge from active duty | 36 months |
| Voc Rehab (Chapter 31) | 12 yrs from discharge or notification of service-connected disability, whichever is later; in cases of "extreme disability" the 12-year timeline can be waived | 48 months |
| VEAP (Chapter 32) | Entered service for the first time between January 1, 1977, and June 30, 1985; opened a contribution account before April 1, 1987; voluntarily contributed from $25 to $2700 | 1 to 36 months depending on the number of monthly contributions |
| DEA (Chapter 35) | | up to 45 months |
| Selected Reserve (Chapter 1606) | While in the Selected Reserve | 36 months |
| Selected Reserve REAP (Chapter 1607) | While in the Selected Reserve; if separated from the Ready Reserve for a disability that was not the result of willful misconduct, for 10 yrs after the date of entitlement | 36 months |
| Tuition Assistance | Ends on the day you leave the Selected Reserve; this includes voluntary entry into the IRR | Contingent as long as you serve as a drilling Reservist |
| Student Loan Repayment Program | Ends on the day you leave the Selected Reserve; this includes voluntary entry into the IRR | Contingent as long as you serve as a drilling Reservist |
- African Americans and the G.I. Bill
- GI Bill Tuition Fairness Act of 2013 (H.R. 357; 113th Congress) - proposed amendments related to in-state versus out-of-state tuition
- Post-9/11 Veterans Educational Assistance Act of 2008
- Glenn C. Altschuler and Stuart M. Blumin, The GI Bill: a new deal for veterans (2009) p 118
- Olson, 1973; see also Bound and Turner 2002
- Stanley, 2003
- Frydl, 2009
- Suzanne Mettler, Soldiers to citizens: The GI Bill and the making of the greatest generation (2005)
- Lemieux, Thomas; Card, David (2001). "Education, earnings, and the ‘Canadian GI Bill’". Canadian Journal of Economics/Revue canadienne d'économique 34 (2): 313–344. doi:10.1111/0008-4085.00077.
- "The George Washington Uni Profile". DCMilitaryEd.com. Retrieved 2014-01-09.
- David Ortiz, Beyond the Bonus March and GI Bill: how veteran politics shaped the New Deal era (2013) p xiii
- Ortiz, Beyond the Bonus March and GI Bill: how veteran politics shaped the New Deal era (2009) p xiii
- The GI BILL's History: Born Of Controversy: The GI Bill Of Rights
- James E. McMillan (2006). Ernest W. McFarland: Majority Leader of the United States Senate, Governor and Chief Justice of the State of Arizona : a biography. Sharlot Hall Museum Press. p. 113. ISBN 978-0-927579-23-0.
- THE CONGRESSIONAL RESEARCH SERVICE (2004), A CHRONOLOGY OF HOUSING LEGISLATION AND SELECTED EXECUTIVE ACTIONS, 1892-2003, U.S. Government Printing Office
- Jackson, Kenneth T. (1985). Crabgrass Frontier: The Suburbanization of the United States. New York: Oxford University Press. p. 206.
- See The Historical Development of Veterans' Benefits in the United States: A Report on Veterans' Benefits in the United States by the President's Commission on Veterans' Pensions, 84th Congress, 2d Session, House Committee Print 244, Staff Report No. 1, May 9, 1956, pp. 160-161. Also see "The New GI Bill: Who Gets What," Changing Times (May 1953), 22 and Congress and the Nation, 1945-1964: A Review of Government and Politics in the Postwar Years, Washington, D.C.: Congressional Quarterly Service, 1965, 1348.
- Lyndon B. Johnson, "Remarks Upon Signing the 'Cold War GI Bill'" (1966) at The American Presidency Project
- Kotz, Nick (28 August 2005). "Review: 'When Affirmative Action Was White': Uncivil Rights". New York Times. Retrieved 2 August 2015.
- Katznelson, Ira (2006). When affirmative action was white : an untold history of racial inequality in twentieth-century America ([Norton pbk ed.] ed.). New York: W.W. Norton. ISBN 978-0393328516.
- Herbold, Hilary (Winter 1994). "Never a Level Playing Field: Blacks and the GI Bill". The Journal of Blacks in Higher Education (6): 107. doi:10.2307/2962479.
- Herbold, Hilary (Winter 1994). "Never a Level Playing Field: Blacks and the GI Bill". The Journal of Blacks in Higher Education (6): 104–108. doi:10.2307/2962479.
- Howard Johnson, "The Negro Veteran Fights for Freedom!" Political Affairs, May 1947, p. 430.
- Belated Thank You to the Merchant Mariners of World War II Act of 2007
- GI-BILL History
- Buy-Up Program
- Davenport, Christian, "Expanded GI Bill Too Late For Some", Washington Post, October 21, 2008, p. 1.
- More Details on GI Bill 2.0
- Montgomery G.I. Bill Guidelines for Active Duty (MGIB)
- Montgomery G.I. Bill - Active Duty - (U.S. Department of Veterans Affairs)
- Top-up Tuition Assistance - Military Veteran Education Benefits - G.I. Bill Veteran Resources
- Tuition Assistance Top-up - (U.S. Department of Veterans Affairs)
- VEAP - Military Veteran Education Benefits - G.I. Bill Veteran Resources
- Veterans Educational Assistance Program (VEAP) - (U.S. Department of Veterans Affairs)
- Survivors' and Dependents' Educational Assistance Program - (U.S. Department of Veterans Affairs)
- Montgomery G.I. Bill Guidelines for Selected Reserve (MGIB-SR)
- MGIB-SR General Information - (U.S. Department of Veterans Affairs)
- Payment Rates
- Bennett, Michael J. When Dreams Came True: The G.I. Bill and the Making of Modern America (New York: Brassey’s Inc., 1996)
- Bound, John, and Sarah Turner. "Going to War and Going to College: Did World War II and the G.I. Bill Increase Educational Attainment for Returning Veterans?" Journal of Labor Economics Vol. 20, No. 4 (October 2002), pp. 784–815 in JSTOR
- Boulton, Mark. Failing our Veterans: The G.I. Bill and the Vietnam Generation (NYU Press, 2014)
- Keene, Jennifer D. Doughboys, the Great War and the Remaking of America (Johns Hopkins University Press, 2001)
- Frydl, Kathleen. The G.I. Bill (Cambridge University Press, 2009)
- Humes, Edward (2006). Over Here: How the G.I. Bill Transformed the American Dream. Harcourt. ISBN 0-15-100710-1.
- Mettler, Suzanne. Soldiers to Citizens: The G.I. Bill and the Making of the Greatest Generation (Oxford University Press, 2005). online; excerpt
- Olson, Keith. "The G. I. Bill and Higher Education: Success and Surprise," American Quarterly Vol. 25, No. 5 (December 1973) 596-610. in JSTORin JSTOR
- Olson, Keith, The G.I. Bill, The Veterans, and The Colleges (Lexington: University Press of Kentucky, 1974)
- Ross, David B. Preparing for Ulysses: Politics and Veterans During World War II (Columbia University Press, 1969).
- Stanley, Marcus (2003). "College Education and the Midcentury GI Bills". The Quarterly Journal of Economics 118 (2): 671–708. doi:10.1162/003355303321675482.
- Van Ells, Mark D. To Hear Only Thunder Again: America's World War II Veterans Come Home. Lanham, MD: Lexington Books, 2001.
- The American Legion's MyGIBill.org
- The Department of Veteran Affairs' GI Bill website
- Central Committee for Conscientious Objectors analysis of the MGIB
- Education Fact Sheet for Guard & Reserve Members
- Education Benefits Available by States
- Web-Enable Education Benefits System
- GI Bill top up program
A primary election is an election that narrows the field of candidates before an election for office. Primary elections are one means by which a political party or a political alliance nominates candidates for an upcoming general election or by-election.
Other methods of selecting candidates include caucuses, conventions, and nomination meetings. Historically, Canadian political parties chose their candidates through nominating conventions held by constituency riding associations. Canadian party leaders are elected at leadership conventions, although some parties have abandoned this practice in favor of one member, one vote systems.
Where primary elections are organized by parties, not the administration, two types of primaries can generally be distinguished:
- Closed primary (also called an internal primary or party primary), in which only party members can vote.
- Open primary, in which all voters can take part and may cast votes on a ballot of any party. The party may require them to express support for the party's values and pay a small contribution to the costs of the primary.
In the United States, other types can be differentiated:
- Closed primary. People may vote in a party's primary only if they are registered members of that party prior to election day. Independents cannot participate. Note that because some political parties name themselves independent, the terms "non-partisan" or "unaffiliated" often replace "independent" when referring to those who are not affiliated with a political party. Fourteen states — Connecticut, Delaware, Florida, Kentucky, Maine, Nebraska, Nevada, New Jersey, New Mexico, New York, Oklahoma, Oregon, Pennsylvania, and South Dakota — have closed primaries.
- Semi-closed. As in closed primaries, registered party members can vote only in their own party's primary. Semi-closed systems, however, allow unaffiliated voters to participate as well. Depending on the state, independents either make their choice of party primary privately, inside the voting booth, or publicly, by registering with any party on Election Day. Twelve states — Alaska, Arizona, Colorado, Iowa, Kansas, Massachusetts, New Hampshire, North Carolina, Rhode Island, Utah, West Virginia, and Wyoming — have semi-closed primaries that allow voters to register or change party preference on election day.
- Open primary. A registered voter may vote in any party primary regardless of his own party affiliation. When voters do not register with a party before the primary, it is called a pick-a-party primary because the voter can select which party's primary he or she wishes to vote in on election day. Because of the open nature of this system, a practice known as raiding may occur. Raiding consists of voters of one party crossing over and voting in the primary of another party, effectively allowing a party to help choose its opposition's candidate. The theory is that opposing party members vote for the weakest candidate of the opposite party in order to give their own party the advantage in the general election. An example of this can be seen in the 1998 Vermont senatorial primary with the nomination of Fred Tuttle as the Republican candidate in the general election.
- Semi-open. A registered voter need not publicly declare which political party's primary that they will vote in before entering the voting booth. When voters identify themselves to the election officials, they must request a party's specific ballot. Only one ballot is cast by each voter. In many states with semi-open primaries, election officials or poll workers from their respective parties record each voter's choice of party and provide access to this information. The primary difference between a semi-open and open primary system is the use of a party-specific ballot. In a semi-open primary, a public declaration in front of the election judges is made and a party-specific ballot given to the voter to cast. Certain states that use the open-primary format may print a single ballot and the voter must choose on the ballot itself which political party's candidates they will select for a contested office.
- Blanket primary. A primary in which the ballot is not restricted to candidates from one party.
- Nonpartisan blanket primary. A primary in which the ballot is not restricted to candidates from one party, where the top two candidates advance to the general election regardless of party affiliation. Louisiana has famously operated under this system, which has been nicknamed the "jungle primary." California has used a nonpartisan blanket primary since 2012 after passing Proposition 14 in 2010, and the state of Washington has used a nonpartisan blanket primary since 2008.
Primaries in the United States
The United States is one of few countries to select candidates through popular vote in a primary election system; most countries rely on party leaders to vet candidates, as was previously the case in the U.S. In modern politics, primary elections have been described as a significant vehicle for moving decision-making from political insiders to the voters, though this is disputed by some political science research. Candidates for federal, state, and local general elections are selected in primary elections organized by the public administration and open to the general voting public, which nominate each party's official candidates; state voters begin the electoral process for governors and legislators, as well as for many local officials from city councilors to county commissioners, through these primaries. The candidate who wins the primary and then succeeds in the general election takes public office.
Primaries can be used in nonpartisan elections to reduce the set of candidates that go on to the general election (a qualifying primary). (In the U.S., many city, county and school board elections are non-partisan.) Generally, if a candidate receives more than 50% of the vote in the primary, he or she is automatically elected, without having to run again in the general election. If no candidate receives a majority, twice as many candidates pass the primary as can win in the general election, so in a single-seat election the top two primary candidates advance to the following general election.
When a qualifying primary is applied to a partisan election, it becomes what is generally known as a blanket or Louisiana primary: typically, if no candidate wins a majority in the primary, the two candidates receiving the highest pluralities, regardless of party affiliation, go on to a general election that is in effect a run-off. This often has the effect of eliminating minor parties from the general election, and frequently the general election becomes a single-party election. Unlike a plurality voting system, a run-off system meets the Condorcet loser criterion in that the candidate that ultimately wins would not have been beaten in a two-way race with every one of the other candidates.
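The majority-or-top-two rule just described is easy to state in code. The sketch below is a minimal illustration, assuming plain candidate-name ballots and ignoring exact ties; the function name and data layout are illustrative, not drawn from any election statute.

```python
from collections import Counter

def qualifying_primary(votes):
    """Decide a single-seat qualifying primary: an outright majority wins;
    otherwise the top two vote-getters advance to the general election."""
    tally = Counter(votes)
    total = sum(tally.values())
    ranked = tally.most_common()
    leader, leader_votes = ranked[0]
    if 2 * leader_votes > total:  # strictly more than 50% of the vote
        return ("elected", leader)
    return ("runoff", [name for name, _ in ranked[:2]])

# No candidate clears 50%, so A and B advance regardless of party:
print(qualifying_primary(["A"] * 45 + ["B"] * 35 + ["C"] * 20))
# ('runoff', ['A', 'B'])
```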
Because many Washington residents were disappointed over the loss of their blanket primary, which the Washington State Grange helped institute in 1935, the Grange filed Initiative 872 in 2004 to establish a blanket primary for partisan races, thereby allowing voters to once again cross party lines in the primary election. The two candidates with the most votes then advance to the general election, regardless of their party affiliation. Supporters claimed it would bring back voter choice; opponents said it would exclude third parties and independents from general election ballots, could result in Democratic- or Republican-only races in certain districts, and would in fact reduce voter choice. The initiative was put to a public vote in November 2004 and passed. On July 15, 2005, the initiative was found unconstitutional by the U.S. District Court for the Western District of Washington. The U.S. Supreme Court heard the Grange's appeal of the case in October 2007. In March 2008, the Supreme Court upheld the constitutionality of the Grange-sponsored Top 2 primary, citing a lack of compelling evidence to overturn the voter-approved initiative.
In elections using voting systems where strategic nomination is a concern, primaries can be very important in preventing "clone" candidates that split their constituency's vote because of their similarities. Primaries allow political parties to select and unite behind one candidate. However, tactical voting is sometimes a concern in non-partisan primaries as members of the opposite party can strategically vote for the weaker candidate in order to face an easier general election.
In California, under Proposition 14 (the Top Two Candidates Open Primary Act), a voter-approved referendum, all candidates in a primary for every office except U.S. President and county central committee appear on a single ballot regardless of party, voters may vote for any candidate, and the top two vote-getters overall move on to the general election regardless of party. As a result, two Republicans or two Democrats can compete against each other in a general election if those candidates receive the most primary-election support.
As a result of a federal court decision in Idaho Republican Party v. Ysursa, the 2011 Idaho Legislature passed House Bill 351 implementing a closed primary system.
In the United States, Iowa and New Hampshire have drawn attention every four years because they hold the first caucus and primary election, respectively, and often give a candidate the momentum to win their party's nomination.
A criticism of the current presidential primary election schedule is that it gives undue weight to the few states with early primaries, as those states often build momentum for leading candidates and rule out trailing candidates long before the rest of the country has even had a chance to weigh in, leaving the last states with virtually no actual input on the process. The counterargument to this criticism, however, is that, by subjecting candidates to the scrutiny of a few early states, the parties can weed out candidates who are unfit for office.
The Democratic National Committee (DNC) proposed a new schedule and a new rule set for the 2008 Presidential primary elections. Among the changes: the primary election cycle would start nearly a year earlier than in previous cycles, states from the West and the South would be included in the earlier part of the schedule, and candidates who run in primary elections not held in accordance with the DNC's proposed schedule (as the DNC does not have any direct control over each state's official election schedules) would be penalized by being stripped of delegates won in offending states. The New York Times called the move, "the biggest shift in the way Democrats have nominated their presidential candidates in 30 years."
Of note regarding the DNC's proposed 2008 Presidential primary election schedule is that it contrasted with the Republican National Committee's (RNC) rules regarding Presidential primary elections. "No presidential primary, caucus, convention, or other meeting may be held for the purpose of voting for a presidential candidate and/or selecting delegates or alternate delegates to the national convention, prior to the first Tuesday of February in the year in which the national convention is held." In 2020, this date is February 4.
Candidates for U.S. President who seek their party's nomination participate in primary elections run by state governments, or caucuses run by the political parties. Unlike an election where the only participation is casting a ballot, a caucus is a gathering or "meeting of party members designed to select candidates and propose policies." Both primaries and caucuses are used in the Presidential nomination process, beginning in January or February and culminating in the late-summer political party conventions. Candidates may earn convention delegates from each state primary or caucus. Sitting presidents generally do not face serious competition from their party.
Although the closed/semi-closed/semi-open/open classification commonly used by scholars studying primary systems does not fully capture the nuanced differences seen from state to state, it is still very useful and has real-world implications for the electorate, election officials, and the candidates themselves.
As far as the electorate is concerned, the extent of participation allowed to weak partisans and independents depends almost solely on which of the aforementioned categories best describes their state's primary system. Open and semi-open systems favor this type of voter, since under these models they can choose which primary to vote in at each election. In closed primary systems, true independents are, for all practical purposes, shut out of the process.
This classification further affects the relationship between primary elections and election commissioners and officials. The more open the system, the greater the chance of raiding, or voters voting in the other party's primary in hopes of getting a weaker opponent chosen to run against a strong candidate in the general election. Raiding has proven stressful to the relationships between political parties, who feel cheated by the system, and election officials, who try to make the system run as smoothly as possible.
Perhaps the most dramatic effect this classification system has on the primary process is its influence on the candidates themselves. Whether a system is open or closed dictates the way candidates run their campaigns. In a closed system, from the time a candidate qualifies to the day of the primary, he must cater to strong partisans, who tend to lean to the extreme ends of the ideological spectrum. In the general election, on the other hand, the candidate must move more towards the center in hopes of capturing a plurality.
Daniel Hannan, a British politician and Member of the European Parliament, claimed, "Open primaries are the best idea in contemporary politics. They shift power from party hierarchs to voters, from Whips to backbenchers and from ministers to Parliament. They serve to make legislatures more diverse and legislators more independent."
Primaries in Europe
In Europe, primaries are not organized by the public administration but by parties themselves. Legislation is mostly silent on primaries. The main reason for this is that the voting methods used to form governments, whether proportional representation or two-round systems, lessen the need for an open primary.
Governments are not involved in the process; however, parties may need their cooperation, notably in the case of an open primary, e.g. to obtain the electoral roll, or to cover the territory with a sufficient number of polling stations.
Whereas closed primaries are rather common in many European countries, only a few political parties in Europe have so far opted for open primaries. Parties generally organise primaries to nominate the party leader (leadership election). The underlying reason is that most European countries are parliamentary democracies. National governments are derived from the majority in the Parliament, which means that the head of the government is generally the leader of the winning party. France is one exception to this rule.
Closed primaries happen in many European countries, while open primaries have so far only occurred in the socialist and social-democratic parties in Greece and Italy; France's Socialist Party organised the first open primary in France in October 2011.
One of the more recent developments is the organization of primaries at the European level. The European parties that have organized primaries so far are the European Green Party (EGP) and the Party of European Socialists (PES).
In Italy, the first open primaries took place on 16 October 2005. The vote led to the designation of Romano Prodi as leader of the Olive Tree coalition, which gathered several centrist and left-wing parties, for the legislative elections of 9 and 10 April 2006. Several parties of the coalition later merged into a single major centre-left party, the Democratic Party, which uses primary elections to choose its candidate for the premiership.
In France, elections follow a two-round system. In the first round, all candidates who have qualified (for example, by obtaining a minimal number of signatures of support from elected officials) are on the ballot. In practice, each candidate usually represents a political party, large or small. In the second round, held two weeks later, the top two candidates run against each other, with the candidates from losing parties usually endorsing one of the two finalists.
The means by which the candidate of an established political party is selected has evolved. Until 2012, none of the six Presidents elected by direct vote had faced a competitive internal nomination.
- In 2007, Sarkozy, President of the UMP, organized an approval "primary" without any opponent. He won with 98% of the vote and delivered his candidacy speech thereafter.
- On the left, however, the Socialist Party, whose candidate François Mitterrand held the Presidency for 14 years, has been plagued by internal divisions since his departure from politics. Rather than forming a new party, as is the habit on the right, the party started to elect its nominee internally.
- A first try came in 1995: Lionel Jospin won the nomination three months before the election but lost the run-off to Chirac. In 2002, although the candidacy of then-PM Jospin was undisputed in his party, each of the five left-wing parties in the government he led fielded its own candidate, paving the way for the defeat of all five.
- The idea gained ground in the run-up to the 2007 race, once the referendum on a European constitution was over. That vote revealed strong ideological divisions within the left-wing spectrum and within the Socialist Party itself, ruling out a primary spanning the whole left wing that would back a single presidential candidate. Given that no majority supported either a leader or a split, the party ran a registration campaign, offering membership for only 20 euros, and organized a closed primary, which Ségolène Royal won. She qualified for the national run-off, which she lost to Sarkozy.
- In 2011, the Socialist Party decided to organise the first ever open primary in France to pick the joint nominee of the Socialist Party and the Radical Party of the Left for the 2012 presidential election. Inspired by the 2008 U.S. primaries, it was seen as a way to reinvigorate the party. The idea was first proposed by Terra Nova, an independent left-leaning think tank, in a 2008 report; it was also criticized for going against the nature of the regime. The open primary was not state-organized: the party took charge of all the electoral procedures, planning to set up 10,000 polling stations. All citizens on the electoral rolls, members of the Socialist Party and the Radical Party of the Left, and members of the parties' youth organisations (MJS and JRG), including minors aged 15 to 18, were entitled to vote in exchange for a one-euro contribution to the costs. More than 3 million people participated in this first open primary, which was considered a success, and former party leader François Hollande was designated the Socialist and Radical candidate for the 2012 presidential election.
- Other parties organize membership primaries to choose their nominee, such as Europe Ecologie – Les Verts (EE-LV) (2006, 2011), and the French Communist Party in 2011.
- At the local level, membership primaries are the rule for the Socialist Party's candidates, but these are usually not competitive. In order to tame potential feuds in his party and prepare the ground for a long campaign, Sarkozy pushed for a closed primary in 2006 to designate the UMP candidate for the 2008 election of the Mayor of Paris. Françoise de Panafieu won the nomination in a four-way race but did not clinch the mayorship two years later.
In the United Kingdom, the Conservative Party used open primaries to select two candidates for the 2010 general election. Further open primaries were used to select some Conservative candidates for the 2015 general election, and there are hopes that other parties may nominate future candidates in this way.
- Only three parties organised an open primary: France (PS), Greece (ΠΑΣΟΚ), Italy (PD)
- Closed primary happened in nine parties: Belgium (sp.a, PS), Cyprus (ΕΔΕΚ), Denmark (SD), France (PS) until 2011, Ireland (LP), Netherlands (PvdA), Portugal (PS), United Kingdom (Labour)
The case of UK's Labour party leadership election is specific, as three electoral colleges, each accounting for one third of the votes, participate in this primary election: Labour members of Parliament and of the European Parliament, party members and members of affiliated organisations such as trade unions.
- The designation of the party leader was made by the party's congress in the eighteen remaining parties: Austria (SPÖ), Bulgaria (БСП), Czech Republic (ČSSD), Estonia (SDE), Finland (SDP), Germany (SPD), Hungary (MSZP), Latvia (LSDSP), Lithuania (SDPL), Luxembourg (LSAP), Malta (LP), Poland (SLD, UP), Romania (PSD), Slovakia (SMER-SD), Slovenia (SD), Spain (PSOE), Sweden (SAP), United-Kingdom / Northern Ireland (SDLP)
Indeed, the Lisbon treaty, which entered into force in December 2009, lays down that the outcome of elections to the European Parliament must be taken into account in selecting the President of the Commission; the Commission is in some respects the executive branch of the EU and so its president can be regarded as the EU prime minister. Parties are therefore encouraged to designate their candidates for Commission president ahead of the next election in 2014, in order to allow voters to vote with a full knowledge of the facts. Many movements are now asking for primaries to designate these candidates.
- Already in April 2004, a former British conservative MEP, Tom Spencer, advocated for American-style primaries in the European People's Party: "A series of primary elections would be held at two-week intervals in February and March 2009. The primaries would start in the five smallest countries and continue every two weeks until the big five voted in late March. To avoid swamping by the parties from the big countries, one could divide the number of votes cast for each candidate in each country by that country's voting weight in the Council of Ministers. Candidates for the post of president would have to declare by 1 January 2009."
- In July 2013 the European Green Party (EGP) announced that it would run the first ever European-wide open primary in preparation for the European elections in 2014. It was open to all citizens of the EU over the age of 16 who "supported green values". They elected two transnational candidates who were to be the face of the common campaign of the European green parties united in the EGP, and who were also their candidates for European Commission president.
- Following the defeat of the Party of European Socialists during the European elections of June 2009, the PES Congress that took place in Prague in December 2009 made the decision that PES would designate its own candidate before the 2014 European elections. A Campaign for a PES primary was then launched by PES supporters in June 2010, and it managed to convince the PES Council meeting in Warsaw in December 2010 to set up a Working Group "Candidate 2014" in charge of proposing a procedure and timetable for a "democratic" and "transparent" designation process "bringing on board all our parties and all levels within the parties".
The European think-tank Notre Europe has also floated the idea that European political parties should designate their candidate for Vice-president / High Representative of the Union for Foreign Affairs. This would lead European parties to have "presidential tickets" on the American model.
Finally, the European Parliament has envisaged introducing a requirement for internal democracy in the regulation on the statute of European political parties. European parties would therefore have to involve individual members in major decisions such as designating the presidential candidate.
Primaries in Canada
As in Europe, primary elections in Canada are not organized by the public administration but by parties themselves. Political parties participate in federal elections to the House of Commons, in legislative elections in all ten provinces, and in Yukon. (The legislatures and elections in the Northwest Territories and Nunavut are non-partisan.)
Typically, in the months before an anticipated general election, local riding associations of political parties in each electoral district will schedule and announce a Nomination Meeting (similar to a nominating caucus in the United States). Would-be candidates will then file nomination papers with the association, and usually will devote time to soliciting existing party members and signing up new party members who will also support them at the nomination meeting. At the meeting, typically each candidate will speak, and then members in attendance will vote. The voting system most often used is an exhaustive ballot system; if no candidate has over 50% of the votes, the candidate with the lowest number of votes will be dropped and another ballot will be held. Also, other candidates who recognize that they will probably not win may withdraw between ballots, and may "throw their support" to (encourage their own supporters to vote for) another candidate. After the nomination meeting, the candidate and the association will obtain approval from party headquarters, and file the candidate's official nomination papers and necessary fees and deposits with Elections Canada or the provincial/territorial election commissions as appropriate.
At times, party headquarters may overturn an association's chosen candidate; for example, if any scandalous information about the candidate comes to light after the nomination. A party headquarters may also "parachute" a prominent candidate into an easy-to-win riding, removing the need to have a nomination meeting. These situations only come up infrequently, as they tend to cause disillusionment among a party's supporters.
Canadian political parties also organize their own elections of party leaders. Not only will the party leader run for a seat in their own chosen riding, they will also become Prime Minister (in a federal election) or Premier (in a province or territory) should their party win the most seats. If the party wins the second-most seats, the party leader will become Leader of the Official Opposition; if the party comes third or lower, the leader will still be recognized as the leader of their party, and will be responsible for co-ordinating the activities and affairs of their party's caucus in the legislature.
In the past, Canadian political parties chose party leaders through the votes of delegates to a Leadership Convention. Local riding associations would choose delegates, usually in a manner similar to how they would choose a candidate for election. These delegates typically said explicitly which leadership candidate they would support. Those delegates, as well as other delegates (e.g. sitting party members of Parliament or the legislature, or delegates from party-affiliated organizations such as labor unions in the case of the New Democratic Party), would then vote, again using the exhaustive ballot method, until a leader was chosen.
Lately, Canada's major political parties have moved to "one member, one vote" systems for their federal leadership elections. A leadership convention is still scheduled, but all party members have a chance to vote for the new leader. Typically, members may vote either in person as a delegate to the convention, online as they watch ballot-by-ballot results on the Internet or on television, or through a mail-in preferential ballot (handled by an "instant runoff" method). This method was used in the 2012 NDP leadership convention which chose Tom Mulcair as federal party leader. When the Liberal Party chose Justin Trudeau as party leader in its leadership convention in 2013, they used a similar process, but only used online preferential voting for members not present at the convention and did not use mail-in ballots. As well, they scaled all members' votes such that each of the 308 riding associations' votes would be equal, notwithstanding how many or how few members voted in each riding.
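The exhaustive-ballot count described above can be sketched as follows. In a real convention each round is a fresh vote; modelling voters with fixed preference lists, as the mail-in preferential ballots effectively do, turns it into an instant-runoff count. Names and data layout are illustrative.

```python
from collections import Counter

def exhaustive_ballot(ballots):
    """Repeatedly drop the last-placed candidate until one has a majority.

    ballots: list of preference lists, most-preferred candidate first.
    """
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its highest-ranked remaining candidate.
        tally = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > sum(tally.values()) or len(remaining) == 1:
            return leader
        remaining.remove(min(remaining, key=lambda c: tally.get(c, 0)))

ballots = [["X", "Y"]] * 4 + [["Y", "X"]] * 3 + [["Z", "Y"]] * 2
print(exhaustive_ballot(ballots))  # Z is dropped first; Y then wins 5 to 4
```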
- United States presidential primary.
- Primary elections in Italy.
- Argentine general election, 2011, Argentine legislative election, 2013, Argentine general election, 2015
- Uruguay, since 1999.
- United New Democratic Party (South Korea, 2007).
- United Kingdom
- Armenia. In an innovation, on 24 and 25 November 2007 one political party conducted a non-binding Armenia-wide primary election. The party, the Armenian Revolutionary Federation, invited the public to vote to advise the party which of two candidates it should formally nominate for President of Armenia in the subsequent official election. What characterized it as a primary instead of a standard opinion poll was that the public knew of the primary in advance, all eligible voters were invited, and the voting was by secret ballot. "Some 68,183 people . . . voted in make-shift tents and mobile ballot boxes . . ."
- Colombia. In 2006, the Liberal Party and the socialist Democratic Pole held primary elections, electing Horacio Serpa as the Liberal candidate and Carlos Gaviria as the candidate of the Democratic Pole. For the 2010 presidential elections, four parties held primaries: the Liberal Party elected former minister Rafael Pardo as its candidate, the Democratic Pole elected senator Gustavo Petro, the Conservative Party chose ambassador Noemi Sanin, and the Green Party chose former mayor of Bogota Antanas Mockus.
- Costa Rica, the three main political forces National Liberation Party, Social Christian Unity Party and Citizens' Action Party have all run primary elections several times.
- Republic of China (Taiwan): The Democratic Progressive Party selects all its candidates via opinion polls; the candidate with the highest poll rating is nominated. The KMT selects candidates using a combination of opinion polls (worth 70%) and primary elections (worth 30%), as illustrated in the sketch after this list.
- Sore-loser law, which states that the loser in a primary election cannot thereafter run as an independent in the general election
- Thomas W. Williams (Los Angeles), opposed the direct primary, 1915
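As an aside on the Taiwan entry above, the 70/30 weighting is a simple blend; the function and sample figures below are hypothetical.

```python
def blended_score(poll_share, primary_share, poll_weight=0.70):
    """Blend opinion-poll and primary-vote shares, both given as fractions."""
    return poll_weight * poll_share + (1.0 - poll_weight) * primary_share

# A candidate polling at 55% who takes 40% of the primary vote:
print(blended_score(0.55, 0.40))  # 0.505, i.e. a blended score of 50.5%
```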
Could human life end with an asteroid?
Asteroid impacts have played an enormous role in creating Earth and in altering the course of the evolution of life. It is most likely that an asteroid impact brought about the end of the dinosaurs and many other lifeforms at the end of the Mesozoic. Could one asteroid do it again?
Asteroids are very small, rocky bodies that orbit the Sun. "Asteroid" means "star-like," and in a telescope, asteroids look like points of light, just like stars. Asteroids are irregularly shaped because they do not have enough gravity to become round. They are also too small to maintain an atmosphere, and without internal heat they are not geologically active (Figure below). Collisions with other bodies may break up the asteroid or create craters on its surface.
In 1991, Asteroid 951 Gaspra was the first asteroid photographed at close range. Gaspra is a medium-sized asteroid, measuring about 19 by 12 by 11 km (12 by 7.5 by 7 mi).
Asteroid impacts have played a dramatic role in shaping the planets, including Earth. Early impacts caused the planets to grow as they cleared their portions of space. An impact with an asteroid about the size of Mars caused fragments of Earth to fly into space and ultimately create the Moon. Asteroid impacts are linked to mass extinctions throughout Earth's history.
The Asteroid Belt
Hundreds of thousands of asteroids have been discovered in our solar system. They are still being discovered at a rate of about 5,000 new asteroids per month. The majority of the asteroids are found between the orbits of Mars and Jupiter, in a region called the asteroid belt, as shown in the Figure below. Although there are many thousands of asteroids in the asteroid belt, their total mass adds up to only about 4% of the mass of Earth's Moon.
The white dots in the figure are asteroids in the main asteroid belt. Other groups of asteroids closer to Jupiter are called the Hildas (orange), the Trojans (green), and the Greeks (also green).
Scientists think that the bodies in the asteroid belt formed during the formation of the solar system. The asteroids might have come together to make a single planet, but they were pulled apart by the intense gravity of Jupiter.
More than 4,500 asteroids cross Earth's orbit; these are known as near-Earth asteroids. Between 500 and 1,000 of these are over 1 km in diameter.
Any object whose orbit crosses Earth's can collide with Earth, and many asteroids do. On average, each year a rock about 5–10 m in diameter hits Earth (Figure below). Since past asteroid impacts have been implicated in mass extinctions, astronomers are always on the lookout for new asteroids, and follow the known near-Earth asteroids closely, so they can predict a possible collision as early as possible.
A painting of what an asteroid a few kilometers across might look like as it strikes Earth.
Scientists are interested in asteroids because they are representatives of the earliest solar system (Figure below). Eventually asteroids could be mined for rare minerals or for construction projects in space. A few missions have studied asteroids directly. NASA's DAWN mission will be exploring asteroid Vesta in 2011 and 2012 and dwarf planet Ceres in 2015.
The NEAR Shoemaker probe took this photo as it was about to land on 433 Eros in 2001.
KQED: Asteroid Hunters
Thousands of objects, including comets and asteroids, are zooming around our solar system; some could be on a collision course with Earth. QUEST explores how these Near Earth Objects are being tracked and what scientists are saying should be done to prevent a deadly impact. Learn more at: http://science.kqed.org/quest/video/asteroid-hunters/
- Asteroids are small rocky bodies that orbit the Sun and sometimes strike Earth.
- Most asteroids reside in the asteroid belt, between Mars and Jupiter.
- Near-earth asteroids are the ones most likely to strike Earth, and scientists are always looking out for a large one that may impact our planet and cause problems.
Use these resources to answer the questions that follow.
1. What are asteroids?
2. Where are most asteroids found?
Go to the Asteroid Table.
3. What is the largest asteroid and when was it discovered?
4. What has NEOWISE determined?
5. How many of the asteroids have been cataloged?
6. How are the asteroids detected?
7. What type of telescope is being used?
1. What is the reason there is a belt of asteroids between Mars and Jupiter?
2. Why do scientists look for asteroids that might strike our planet?
3. What do scientists hope to learn from missions to visit asteroids?
Understanding Reading Assessment
The information on this page is provided to help parents understand how children's reading is assessed.
How is Reading assessed?
Progress in reading is assessed according to the extent to which pupils are gaining a deep understanding of the content taught for their year.
Teachers assess against 2 key areas; reading words and reading comprehension.
The national curriculum for reading aims to enable pupils to:
- become a fluent reader using a variety of strategies to read words e.g. using phonic knowledge to recognise and blend phonemes, speedily recognise high frequency words and apply their growing knowledge of root words, prefixes and suffixes.
- understand and comprehend what has been listened to and read e.g. discussing the significance of the title and events, and predicting what might happen or infer feelings.
Age related expectations
Children will be assessed against the objectives for their year group, set out in the National Curriculum.
- We will use the terms 'emerging', 'developing' and 'secure' to track their progress against these targets.
- At the end of the year, most children will be expected to be 'secure.' This means they will have reached age related expectations.
- Some children may not reach this stage and this will be reported accordingly. Some children may have 'mastered' these expectations.
Please click on the links below to view the key objectives for your child's year group. These targets are written in 'child speak' to enable the children to understand the skills they need to develop.
Key objectives - Year 1
Key objectives - Year 2
Key objectives - Year 3 and 4
Key objectives - Year 5 and 6
When is reading assessed?
Teachers assess children's reading on a regular basis, using oral and some written work to gather information.
Each half term, teachers gather each child's notes from reading sessions and where appropriate written work and perform a more formal assessment. This is used to support and challenge reading sessions during the following term.
A baseline assessment is completed during the first term in reception. The EYFS baseline helps inform planning and teaching and is a method of measuring progress made from EYFS through to Key Stage 2.
Towards the end of the summer term, children may also complete "optional" tests - this provides additional information to support teacher assessment and also gives the children experience of "proper tests".
Year 2 children and Year 6 children complete more formal National Curriculum tests at the end of the year.
How does reading at home help my child?
We encourage all children to read regularly at home. Regular time spent reading to an adult plays an invaluable role in helping children to become fluent, confident readers. Being able to read well and understand the text is the most important core skill a child can acquire, since it underpins all learning at school.
From the earliest age, children gain great enjoyment out of sitting with mum or dad, granny or granddad, looking at the pictures as a well-known story is shared… often over and over again! Older children, who are more fluent in reading, can develop their appreciation of authors and texts by talking about the book character, plot or underlying message.
Please do talk to your class teacher, if you would like any further information on the phonic or reading strategies used in school, or if you would like further information on how you can help your child develop these crucial skills.
Symmetry group: Ci, [2+,2+], (×), order 2
In geometry, a parallelepiped is a three-dimensional figure formed by six parallelograms (the term rhomboid is also sometimes used with this meaning). By analogy, it relates to a parallelogram just as a cube relates to a square or as a cuboid to a rectangle. In Euclidean geometry, its definition encompasses all four concepts (i.e., parallelepiped, parallelogram, cube, and square). In the context of affine geometry, in which angles are not differentiated, its definition admits only parallelograms and parallelepipeds. Three equivalent definitions of parallelepiped are
- a polyhedron with six faces (hexahedron), each of which is a parallelogram,
- a hexahedron with three pairs of parallel faces, and
- a prism of which the base is a parallelogram.
Parallelepipeds are a subclass of the prismatoids.
Any of the three pairs of parallel faces can be viewed as the base planes of the prism. A parallelepiped has three sets of four parallel edges; the edges within each set are of equal length.
Since each face has point symmetry, a parallelepiped is a zonohedron. Also the whole parallelepiped has point symmetry Ci (see also triclinic). Each face is, seen from the outside, the mirror image of the opposite face. The faces are in general chiral, but the parallelepiped is not.
The volume of a parallelepiped is the product of the area of its base A and its height h. The base is any of the six faces of the parallelepiped. The height is the perpendicular distance between the base and the opposite face.
An alternative method defines the vectors a = (a1, a2, a3), b = (b1, b2, b3) and c = (c1, c2, c3) to represent three edges that meet at one vertex. The volume of the parallelepiped then equals the absolute value of the scalar triple product a · (b × c):

$$V = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|.$$
This is true because, if we choose b and c to represent the edges of the base, the area of the base is, by definition of the cross product (see geometric meaning of cross product),

$$A = |\mathbf{b}|\,|\mathbf{c}| \sin\theta = |\mathbf{b} \times \mathbf{c}|,$$
where θ is the angle between b and c, and the height is

$$h = |\mathbf{a}| \cos\alpha,$$
where α is the internal angle between a and h.
From the figure, we can deduce that the magnitude of α is limited to 0° ≤ α < 90°. On the contrary, the vector b × c may form with a an internal angle β larger than 90° (0° ≤ β ≤ 180°). Namely, since b × c is parallel to h, the value of β is either β = α or β = 180° − α. So

$$\cos\alpha = |\cos\beta|.$$
We conclude that

$$V = A h = |\mathbf{b} \times \mathbf{c}|\,|\mathbf{a}|\,|\cos\beta| = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|.$$
The latter expression is also equivalent to the absolute value of the determinant of a three-dimensional matrix built using a, b and c as rows (or columns):

$$V = \left| \det \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix} \right|.$$
This follows from cofactor expansion of the determinant along the first row, which reduces it to three 2 × 2 minors of the original matrix.
If a, b, and c are the parallelepiped edge lengths, and α, β, and γ are the internal angles between the edges, the volume is

$$V = abc \sqrt{1 + 2\cos\alpha \cos\beta \cos\gamma - \cos^2\alpha - \cos^2\beta - \cos^2\gamma}.$$
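The three volume formulas can be cross-checked numerically. The following is a small sketch using NumPy; angles are in radians.

```python
import numpy as np

def volume_triple_product(a, b, c):
    """|a . (b x c)|, the scalar triple product form."""
    return abs(np.dot(a, np.cross(b, c)))

def volume_determinant(a, b, c):
    """Absolute determinant of the matrix with a, b, c as rows."""
    return abs(np.linalg.det(np.array([a, b, c], dtype=float)))

def volume_from_edges(a, b, c, alpha, beta, gamma):
    """Edge lengths a, b, c and the internal angles between the edges."""
    ca, cb, cg = np.cos(alpha), np.cos(beta), np.cos(gamma)
    return a * b * c * np.sqrt(1 + 2 * ca * cb * cg - ca**2 - cb**2 - cg**2)

# A sheared unit cube: the triple product and determinant forms agree.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.5, 1.0, 0.0])
c = np.array([0.0, 0.0, 2.0])
print(volume_triple_product(a, b, c))  # 2.0
print(volume_determinant(a, b, c))     # 2.0
print(volume_from_edges(1, 1, 1, np.pi / 2, np.pi / 2, np.pi / 2))  # 1.0
```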
For parallelepipeds with a symmetry plane there are two cases:
- it has four rectangular faces
- it has two rhombic faces, while of the other faces, two adjacent ones are equal and the other two also (the two pairs are each other's mirror image).
See also monoclinic.
A perfect parallelepiped is a parallelepiped with integer-length edges, face diagonals, and space diagonals. In 2009, dozens of perfect parallelepipeds were shown to exist, answering an open question of Richard Guy. One example has edges 271, 106, and 103, minor face diagonals 101, 266, and 255, major face diagonals 183, 312, and 323, and space diagonals 374, 300, 278, and 272.
Some perfect parallelepipeds having two rectangular faces are known. But it is not known whether there exist any with all faces rectangular; such a case would be called a perfect cuboid.
Coxeter called the generalization of a parallelepiped in higher dimensions a parallelotope.
Specifically in n-dimensional space it is called n-dimensional parallelotope, or simply n-parallelotope. Thus a parallelogram is a 2-parallelotope and a parallelepiped is a 3-parallelotope.
More generally a parallelotope, or Voronoi parallelotope, has parallel and congruent opposite facets. So a 2-parallelotope is a parallelogon which can also include certain hexagons, and a 3-parallelotope is a parallelohedron, including 5 types of polyhedra.
The diagonals of an n-parallelotope intersect at one point and are bisected by this point. Inversion in this point leaves the n-parallelotope unchanged. See also fixed points of isometry groups in Euclidean space.
The edges radiating from one vertex of a k-parallelotope form a k-frame of the vector space, and the parallelotope can be recovered from these vectors, by taking linear combinations of the vectors, with weights between 0 and 1.
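That recovery can be sketched in code: sample weights in [0, 1] for each vector of the k-frame and form the linear combinations. NumPy is assumed and the grid resolution is arbitrary.

```python
import numpy as np

def parallelotope_points(frame, steps=5):
    """Points of the k-parallelotope spanned by k edge vectors.

    frame: sequence of k vectors in R^n radiating from one vertex.
    Returns the linear combinations with weights on a [0, 1] grid.
    """
    frame = np.asarray(frame, dtype=float)  # shape (k, n)
    k = len(frame)
    axes = [np.linspace(0.0, 1.0, steps)] * k
    weights = np.stack([g.ravel() for g in np.meshgrid(*axes)], axis=1)
    return weights @ frame  # shape (steps**k, n)

# The parallelogram (2-parallelotope) spanned by (1, 0) and (1, 2):
pts = parallelotope_points([[1, 0], [1, 2]])
print(pts.shape)        # (25, 2)
print(pts.max(axis=0))  # [2. 2.], the far vertex (1, 0) + (1, 2)
```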
The word appears as parallelipipedon in Sir Henry Billingsley's translation of Euclid's Elements, dated 1570. In the 1644 edition of his Cursus mathematicus, Pierre Hérigone used the spelling parallelepipedum. The Oxford English Dictionary cites the present-day parallelepiped as first appearing in Walter Charleton's Chorea gigantum (1663).
Charles Hutton's Dictionary (1795) shows parallelopiped and parallelopipedon, showing the influence of the combining form parallelo-, as if the second element were pipedon rather than epipedon. Noah Webster (1806) includes the spelling parallelopiped. The 1989 edition of the Oxford English Dictionary describes parallelopiped (and parallelipiped) explicitly as incorrect forms, but these are listed without comment in the 2004 edition, and only pronunciations with the emphasis on the fifth syllable pi (/paɪ/) are given.
A change away from the traditional pronunciation has hidden the different partition suggested by the Greek roots, with epi- ("on") and pedon ("ground") combining to give epiped, a flat "plane". Thus the faces of a parallelepiped are planar, with opposite faces being parallel.
- 10 - Not be forcibly removed from their lands or territories
- 21.1 - The right to the improvement of economic and social conditions
- 23 - The right to determine and develop priorities and strategies
- 26 - The right to the lands, territories and resources
- 27 - Open and transparent process to recognize and adjudicate the rights of indigenous peoples
- 28 - The right to redress for lands, territories and resources taken or damaged
- 32 - Free and informed consent prior to the approval of projects affecting lands or territories and other resources
For details and background, see:
- Declaration on the Rights of Indigenous Peoples (Wikipedia);
- The full text of the Declaration (pdf);
- The Indigenous peoples main page provided by OHCHR.
- The articles included here have been selected on the basis of the International Standards page, provided by OHCHR.
What are the stated objectives and aims of the agreement.
States where indigenous peoples live.
The indigenous peoples.
Values & Claims
Indigenous Peoples are equal to all other peoples, and their rights should be recognized accordingly.
Claims on land and territories (Entity Dictionary) is one area where the rights of indigenous peoples are often neglected.
Most middle school students must learn to use a triple beam balance scale at some point during their science classes. Often used by physics or chemistry teachers to demonstrate the principle of mass, these devices can be used to weigh any object within their weight limitations. Triple beam balance scales work by balancing an object against three counterweights attached to the scale to accurately determine the object's mass. Using one of these devices is not difficult.
Items you will need
- Object to weigh
Calibrate the scale by sliding all three weight poises (the metal brackets that slide along the three beams) to their leftmost positions. Twist the zeroing screw (usually located below the pan in which you place the object to be weighed) until the balance pointer lines up with the fixed zero mark.
Place the object to be weighed on the center of the pan.
Slide the 100-gram poise right one notch at a time. When the indicator drops below the fixed mark, move the poise left one notch. For instance, if your object weighs 487 grams, the 100-gram indicator would drop below the fixed mark on the fifth notch (500 grams). Move the poise back to the 400-gram notch.
Slide the 10-gram poise right one notch at a time. When the indicator drops below the fixed mark, move the poise left one notch. In the case above, the 10-gram indicator would drop below the fixed mark on the ninth notch (90 grams). Move the poise back to the 80-gram notch.
Slide the 1-gram poise slowly across the third beam. There are no notches, so keep an eye on the pointer as you slide. Stop sliding when the pointer lines up with the fixed mark. In the case above, the 1-gram poise will cause the pointer to line up at the fixed mark at 7 grams.
Add the values of all three beams to determine the mass of your object. In the case of our example, add 400 + 80 + 7, resulting in an object mass of 487 grams.
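The arithmetic in this last step can be captured in a few lines of code. The beam ranges below are typical of classroom balances but are assumptions; check them against your own model.

```python
def triple_beam_mass(hundreds, tens, ones):
    """Sum the three poise readings to get the mass in grams.

    hundreds: middle beam, 0-500 g in 100 g notches (assumed range)
    tens:     rear beam,   0-100 g in 10 g notches  (assumed range)
    ones:     front beam,  0-10 g, continuous slide (assumed range)
    """
    if hundreds % 100 or tens % 10:
        raise ValueError("notched poises only rest at fixed positions")
    return hundreds + tens + ones

print(triple_beam_mass(400, 80, 7))  # 487 grams, as in the example above
```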
- Repeat your measurements twice to be sure of your results. This is especially important in science labs, where operator error can skew the results of an experiment.
- Failing to zero out the scale before using the triple beam balance can result in inaccurate measurements.
Posted on: Aug 18, 2005
It turns out you can’t judge an asteroid by its cover, according to a recent study in the journal Nature. Or at least you can’t accurately date a certain asteroid called 433 Eros by counting the impact craters on its surface -- the traditional method for determining an asteroid’s age.
Peter Thomas, a senior research associate in astronomy at Cornell University and lead author on the paper, and Mark Robinson, research associate professor of geological sciences at Northwestern University, analyzed images of Eros gathered four years ago by the Near Earth Asteroid Rendezvous mission. The mission mapped the 20-mile-long, potato-shaped asteroid and its thousands of craters in detail. The two researchers focused on a large impact crater, known as the Shoemaker crater, and a few unusual crater-free areas.
In the Nature article, Thomas and Robinson show that the asteroid’s smooth patches can be explained by a seismic disturbance that occurred when a meteoroid crashed into Eros, shaking the asteroid and creating Shoemaker crater. The shaking caused loose surface material to fill some small craters, essentially erasing craters from approximately 40 percent of Eros’ surface and making the asteroid appear younger than its actual age.
The fact that seismic waves were carried through the center of the asteroid after the impact shows that the asteroid’s interior is cohesive enough to transmit such waves, say the authors. And the smoothing-out effect within a radius of up to 5.6 miles from the 4.7-mile Shoemaker crater -- even on the opposite side of the asteroid -- indicates that Eros’ surface is loose enough to get shaken down by the impact.
Asteroids are small, planet-like bodies that date back to the beginning of the solar system, so studying them can give astronomers insight into the solar system’s formation. And while no asteroids currently threaten Earth, knowing more about their composition could help prepare for a possible future encounter. Eros is the most carefully studied asteroid, in part because its orbit brings it close to earth.
Thomas and Robinson considered various theories for the regions of smoothness, including the idea that ejecta from another impact had blanketed the areas. But they rejected the ejecta hypothesis when calculations showed an impact Shoemaker’s size wouldn’t create enough material to cover the surface indicated. And even if it did, they add, the asteroid’s irregular shape and motion would cause the ejecta to be distributed differently. In contrast, the shaking-down hypothesis fits the evidence neatly.
Blood pressure: what is your target?
How is blood pressure measured?
Blood pressure is measured using an instrument called a sphygmomanometer. It consists of an inflatable cuff, an inflating bulb, and a gauge to show the blood pressure.
The cuff is wrapped around the upper arm, and inflated to a pressure where the pulse in the arm can no longer be heard or felt. The cuff pressure is then raised slightly beyond this point, and then slowly lowered in order to get a reading of the systolic and diastolic blood pressure.
The systolic reading (the first number of the 2) indicates the pressure of blood within your arteries during a contraction of the left ventricle of the heart. The diastolic reading (the second number) indicates the pressure within the arteries when the heart is at rest. Blood pressure is measured in millimetres of mercury (mmHg), for example 120/80 mmHg (known as 120 over 80).
What is normal blood pressure?
According to the Heart Foundation of Australia, as a general guide:
- blood pressure just below 120/80 mmHg can be classified as 'normal'; and
- blood pressure between 120/80 and 140/90 mmHg is classified as 'high-normal'.
A person is defined by the Heart Foundation as having high blood pressure (hypertension) if they:
- have a systolic pressure greater than or equal to 140 mmHg; and/or
- a diastolic pressure greater than or equal to 90 mmHg.
Hypertension is further classified as mild, moderate or severe as the pressure increases above this level.
Low blood pressure, or hypotension, is not as easy to define as it is usually relative to a person’s normal blood pressure reading, and varies between different people. It generally refers to a blood pressure below an average of about 90/60 mmHg.
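The bands quoted above translate directly into a small classifier. This is a sketch of the Heart Foundation cut-offs only, not medical advice; the 90/60 hypotension figure is the rough threshold mentioned in the text rather than a formal definition.

```python
def classify_bp(systolic, diastolic):
    """Classify a blood pressure reading in mmHg using the quoted bands."""
    if systolic >= 140 or diastolic >= 90:
        return "high (hypertension)"
    if systolic >= 120 or diastolic >= 80:
        return "high-normal"
    if systolic < 90 and diastolic < 60:
        return "low (hypotension)"
    return "normal"

print(classify_bp(118, 76))  # normal
print(classify_bp(130, 85))  # high-normal
print(classify_bp(145, 92))  # high (hypertension)
```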
Getting an accurate reading
According to the Heart Foundation, the diagnosis of high blood pressure should be based on multiple blood pressure measurements taken on separate occasions.
It is recommended that you do not smoke or drink caffeine-containing drinks for 2 hours before having your blood pressure monitored, as this can cause an increase in your readings.
Self-monitoring of blood pressure in your own environment or ambulatory monitoring of blood pressure is also used to help diagnose high blood pressure. For ambulatory blood pressure monitoring, you wear a portable automatic blood pressure machine for 24 hours while going about your usual daily routine. Variations in blood pressure are normal and may occur depending on where and when the blood pressure is taken.
Some people who have raised blood pressure readings taken at the doctor’s surgery actually have acceptable levels outside the surgery, when under normal stress levels. This is known as ‘white-coat’ hypertension.
There are also people with ‘reverse white-coat’ hypertension (also known as masked hypertension), who have normal blood pressure when measured in the clinic but high ambulatory blood pressure readings (those recorded during normal daily activities).
Keeping on target
Your target blood pressure may vary according to whether you have other conditions that can increase your risk of cardiovascular (heart and blood vessel) disease or conditions that have been caused by high blood pressure.
Raised blood pressure is a major risk factor for cardiovascular disease, and the higher your blood pressure, the greater your chance of having heart disease or stroke. For this reason it is important that you have your blood pressure monitored regularly, and that you always take any medicine prescribed for hypertension.
Hypertension can also be controlled to a large extent by lifestyle modifications such as reducing excess weight, undertaking regular physical activity, and giving up smoking. Dietary interventions such as reducing your alcohol and salt intake and following a healthy eating plan may also help to lower your blood pressure and reduce your absolute risk of cardiovascular disease.
4.09375 |
Roundhouse is a term applied by archaeologists and anthropologists to a type of house with a circular plan, usually with a conical roof. In the later part of the 20th century, modern designs of roundhouse eco-buildings started to be built using techniques such as cob, cordwood or straw bale walls and reciprocal frame green roofs.
Roundhouses were the standard form of housing built in Britain from the Bronze Age throughout the Iron Age, and in some areas well into the Sub-Roman period. Their walls were made either of stone or of wooden posts joined by wattle-and-daub panels, topped by a conical thatched roof, and they ranged in size from less than 5 m in diameter to over 15 m. The Atlantic roundhouse, Broch and Wheelhouse styles were used in Scotland. The remains of many Bronze Age roundhouses can still be found scattered across open heathland, such as Dartmoor, as stone 'hut circles'.
Most of what is assumed about these structures is derived from the layout of the postholes, although a few timbers have been found preserved in bogs. The rest has been postulated by experimental archaeology, which has shown the most likely form and function of the buildings. For example, experiments have shown that a conical roof with a pitch of about 45 degrees would have been the strongest and most efficient design.
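As a rough illustration of that result (the figures below are simple trigonometry applied to an assumed house size, not measurements from any excavation), a 45-degree pitch relates the roof dimensions directly to the radius of the house:

```latex
% Conical roof of pitch 45 degrees over a roundhouse of radius r:
h = r\tan 45^\circ = r,
\qquad
L = \frac{r}{\cos 45^\circ} = r\sqrt{2} \approx 1.41\,r
% e.g. a 10 m diameter house (r = 5 m) would need an apex about 5 m
% above the wall-head and rafters roughly 7.1 m long.
```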
Peter J. Reynolds also demonstrated that, although a central fire would have been lit inside for heating and cooking, there could not have been a smoke hole in the apex of the roof, for this would have caused an updraft that would have rapidly set fire to the thatch. Instead, smoke would have accumulated harmlessly inside the roof space, and slowly leaked out through the thatch. Many modern simulations of roundhouses have been built, including:
- Bodrifty Iron Age Settlement, Cornwall, England
- Brigantium Archeological Centre, High Rochester, Northumberland, England
- Butser Ancient Farm, Hampshire, England
- Cockley Cley, near Swaffham, Norfolk, England
- Flag Fen, near Peterborough, England
- Mellor roundhouse reconstruction, Greater Manchester, England
- Peat Moors Centre, Somerset, England (closed to the public 31 October 2009)
- Raincliffe Woods, Scarborough, North Yorkshire, England (roof destroyed by fire in April 2013; timbers and thatch removed by Scarborough Conservation Volunteers; walls undamaged)
- Ryedale Folk Museum, near Pickering, North Yorkshire, England
- St Fagans National History Museum, South Glamorgan, Wales
- Scottish Crannog Centre, Loch Tay, Perthshire, Scotland (roundhouse reconstruction on a man-made island)
- Tatton Iron Age roundhouse and pit, Cheshire, England
Modern British roundhouses
New designs of roundhouse are again being built in Britain and elsewhere. In the UK, straw bale construction or cordwood walls with reciprocal frame green roofs are used. A manufacturer in Cheshire, England, builds contemporary roundhouses with modern materials and engineering, bringing the circular floorplan back for modern living.
That Roundhouse is an early example of a modern roundhouse dwelling. It was built without planning permission in the Pembrokeshire Coast National Park, Wales, as part of the Brithdir Mawr village, and was discovered by the authorities in 1998. It is constructed mainly from local natural resources on permaculture principles, from a wooden frame of hand-cut Douglas fir forest thinnings with cordwood infill and a reciprocal frame turf roof. It was the subject of a lengthy planning battle, including a court injunction to force its demolition, before finally receiving planning approval for three years in September 2008.
Trulli (singular: trullo) are houses with conical roofs, and sometimes circular walls, found in parts of the southern Italian region of Apulia.
Galicia – Asturias
A palloza is a traditional thatched house as found in the Serra dos Ancares in Galicia, Spain, and in the south-west of Asturias. It is circular or oval, about ten to twenty metres in diameter, and is built to withstand severe winter weather at a typical altitude of 1,200 metres.
The main structure is stone, and is divided internally into separate areas for the family and their animals, with separate entrances. The roof is conical, made from rye straw on a wooden frame. There is no chimney; the smoke from the kitchen fire seeps out through the thatch.
As well as living space for humans and animals, a palloza has its own bread oven, workshops for wood, metal and leather work, and a loom. Only the eldest couple of an extended family had their own bedroom, which they shared with the youngest children. The rest of the family slept in the hay loft, in the roof space.
- See also Castros in Spain
Raun Haus, Papua New Guinea
- Aston, Mick (5 October 2001). "Peter Reynolds: archaeologist who showed us what the Iron Age was really like" (obituary). The Guardian (London).
- "Secret village to be pulled down". BBC News (23 October 1998). Retrieved 12 April 2009.
- Barkham, Patrick (12 April 2009). "Round the houses". The Guardian (London). Retrieved 12 April 2009.
- Rosenthal, Eric (1961–1978). Encyclopaedia of Southern Africa. London: F. Warne. p. 35. ISBN 0-7232-1487-5.
- "Cob roundhouse".
- "Raun Haus / Round Haus".
- Animation showing how an ancient British roundhouse may have been constructed
- Characterising the Welsh Roundhouse: chronology, inhabitation and landscape
- A 21st century iron age round house
- Some examples of Reconstructed Celtic Roundhouses | https://en.wikipedia.org/wiki/Roundhouse_(dwelling) |
4 | Irish Volunteers (18th century)
The Volunteers (also known as the Irish Volunteers) were militias raised by local initiative in Ireland in 1778. Their original purpose was to guard against invasion and to preserve law and order at a time when British soldiers were withdrawn from Ireland to fight abroad during the American Revolutionary War and the government failed to organise its own militia. Taking advantage of Britain's preoccupation with its rebelling American colonies, the Volunteers were able to pressure Westminster into conceding legislative independence to the Dublin parliament. Members of the Belfast 1st Volunteer Company laid the foundations for the establishment of the United Irishmen organisation. The majority of Volunteer members, however, were inclined towards the yeomanry, which fought and helped defeat the United Irishmen in the Irish rebellion of 1798.
As far back as 1715 and 1745, self-constituted bodies of defensive local forces were formed in anticipation of Stuart invasions. For example, with the declaration of war with France in 1744 and the landing of Prince Charles Edward in Scotland in 1745, a corps of 100 men was enrolled in Cork, known as "The True Blues", which formed one of the regiments of the "United Independent Volunteers".
In 1757 and 1760 there were volunteer units formed in response to the Seven Years' War and the French landing at Carrickfergus in 1760. The roll-call of the militia that marched on the French at Carrickfergus, listed in the "Collectanea politica" published in 1803, was titled "Ulster volunteers in 1760". From 1766 onwards units were embodied by local landlords in various parts of the country for the preservation of peace and the protection of property. Early volunteer groups (which later became part of the Volunteers) included: First Volunteers of Ireland (1 July 1766); Kilkenny Rangers (2 June 1770); First Magherafelt Volunteers (June 1773); and the Offerlane Blues (10 October 1773).
The rise of the Volunteers was a spontaneous event fired by patriotism and the threat of invasion, as another French landing was anticipated when war broke out in 1778. With British troops being dispatched from Ireland for the war with the American colonies, the landed gentry reacted nervously, and misunderstandings arose about Ireland's defence capabilities. Claims that Ireland was ill-prepared for an attack, along with alleged negligence from Dublin Castle, were used to justify the existence of Volunteer companies and their role in defending Ireland. In fact around 4,000 soldiers had been dispatched to the American colonies, leaving as many as 9,000 behind in Ireland.
The Volunteers were built upon existing foundations. Dublin Castle had created militias throughout the 18th century; however, these had fallen into disuse. The Volunteers filled the gap left behind, with possibly half of their officers having held commissions in the militia. Historian Thomas Bartlett claims that the purpose of the militia as defined in 1715 would have fitted with the aims of the Volunteers: "of suppressing... all such insurrections and rebellions, and repelling of invasions". Along with this, Irish Protestants of all ranks had a long, strong tradition of self-defence, having formed groups to resist and pursue agrarian insurgents and to keep a watchful eye on Catholics when threats arose.
The Volunteers were independent of the Irish Parliament and Dublin Castle, and this was an established fact by 1779. It is claimed that, had the Lord Lieutenant of Ireland, John Hobart, 2nd Earl of Buckinghamshire, been more pro-active and assertive, the Volunteers could have come under some form of government control.
The regular military deemed the Volunteers of low value in helping to repulse a foreign threat. Instead they held the view that the Volunteers could be a "serviceable riot police", and it was in this role that they distinguished themselves. For example, Volunteer companies did duty whilst regular troops had been called away, whilst others were used to pursue agrarian insurgents. When protests were organised in Dublin following the introduction of a bill in the Irish Parliament seeking to outlaw textile workers' combinations, the Volunteers were mobilised to maintain the peace in case of public disorder.
The British victory over the Spanish off Cape St. Vincent in 1780 saw the fear of invasion dissipate, and the Volunteers increasingly turned to politics. Initially they agitated for reforms and measures to promote Ireland's prosperity, but later they moved from peaceful persuasion to "the threat of armed dictatorship". In the end Parliament was victorious.
The Volunteers, however, were also marked by liberal political views. For instance, although only Anglican Protestants were allowed to bear arms under the Penal Laws, the Volunteers admitted Presbyterians and a limited number of Catholics, reflecting the recent Catholic Relief Act of 1778.
The Volunteers additionally provided a patriotic outlet, with each corps becoming a debating society. This brought about a shift in power, with the Volunteers being controlled by progressive, politically minded people rather than by the Establishment. Under the Volunteers, annual Protestant commemorations such as those of the Battle of the Boyne and the Battle of Aughrim also became displays of patriotic sentiment.
In Dublin on 4 November 1779, the Volunteers took advantage of the annual commemoration of King William III's birthday, marching to his statue in College Green and demonstrating for Free Trade between Ireland and England. Previously, under the Navigation Acts, Irish goods had been subject to tariffs upon entering England, whereas English goods could pass freely into Ireland. The Volunteers paraded fully armed with the slogan "Free Trade or this", the "this" referring to their cannon; another slogan cited was "Free trade or a Speedy Revolution". According to Liz Curtis the English regime in Ireland was vulnerable, and the Volunteers used this new-found strength to press for concessions from England. This demand of the Volunteers was quickly granted by the British government. The Dublin Volunteers' review, saluting a statue of King William III, in College Green on 4 November 1779 was painted by Francis Wheatley.
On 4 June 1782, the Belfast Troop of Light Dragoons volunteer company and the Belfast Volunteer Company paraded through Belfast in honour of the King's birthday. After firing three volleys, they marched to Cave Hill where they were joined by the Belfast Artillery Company, who upon their arrival fired a "royal salute of twenty-one guns". A decade later, in a sign of changing opinions, the Belfast Volunteers exuberantly paraded through Belfast on 14 July 1792, the second anniversary of the fall of the Bastille, and agreed to send a declaration to the national assembly of France, to which they received "rapturous replies".
On 28 December 1781, members of the Southern Battalion of the County Armagh Volunteers (who formed the First Ulster Regiment) convened and resolved for a meeting in the "most central town of Ulster, which we conceive to be Dungannon", to which delegates from every volunteer association in the province of Ulster were requested to attend. The date of this meeting was pencilled in for "the 15th day of February next, at ten o'clock in the forenoon." On the arranged date, 15 February 1782, delegates from 147 Volunteer corps arrived at the Presbyterian church at Scotch Street, Dungannon, for what would become known as the "Dungannon Convention of 1782". This church had formerly been the favourite meeting place of the Presbyterian Synod of Ulster and later the supreme ecclesiastical court of Irish Presbyterians. After the Volunteer convention it became known as the "Church of the Volunteers".
This church was used for the next three conventions of Ulster Volunteer corps: 21 June 1782, with delegates from 306 companies attending; 8 September 1783, with delegates from 270 companies; and almost a decade later on 15 February 1793, by which time the "fires of patriotism that marked the birth of the movement were burning low" and the meeting "failed to kindle them anew".
The first meeting is the best known. Many of the Volunteers were just as concerned with securing Irish free trade and opposing English governmental interference in Ireland as they were with repelling the French. This resulted in them pledging support for resolutions advocating legislative independence for Ireland, whilst proclaiming their loyalty to the British Crown. The first convention, according to Sir Jonah Barrington, saw 200 delegates marching two by two into the church "steady, silent, and determined", clothed in their uniform and bearing arms. A poem by Thomas Davis states that "the church was full to the door". The lower part of the church was reserved for delegates, with the gallery for their friends, who required tickets for admission. Some, however, consider the first and second conventions to be equally important.
After pressure from the Volunteers and a Parliamentary grouping under Henry Grattan, greater autonomy and powers (legislative independence) were granted to the Irish Parliament, in what some called "the constitution of 1782". This resulted in the Volunteers at the third convention proceeding to demand parliamentary reform; however, as the American War of Independence was ending, the British government no longer feared the threat of the Volunteers.
The fourth convention in 1793 was held after a period of steep decline in Volunteer membership (see Demise below). This was partly the result of sharp division of opinion amongst Volunteers on political matters, so much so that the County Armagh companies refused to send any delegates to the fourth convention.
The bowl that was used as the pledging-cup of the Volunteers at the first convention was rediscovered in the 1930s in County Tyrone. This bowl was tub-shaped, resembling an Irish mether, and had the original owner's (John Bell) crest and initials engraved on the inside, as well as on its wooden base. Decorating this pledging-cup were three silver hoops bearing nine toasts, numbered as follows: 1. The King, 2. The Queen, 3. The Royal Family, 4. The Memory of St. Patrick, 5. The Sons of St. Patrick, 6. The Daughters of St. Patrick, 7. The Irish Volunteers, 8. The Friends of Ireland, 9. A Free Trade.
An obelisk commemorating the Dungannon Convention of 1782, was erected that year by Sir Capel Molyneux, on a hill a few miles north-east of Armagh city. On it is the following inscription: "This obelisk was erected by the Right Hon. Sir Capel Molyneux, of Castle Dillon, Bart., in the year 1782, to commemorate the glorious revolution which took place in favour of the constitution of the kingdom, under the auspices of the Volunteers of Ireland."
Motifs and mottos
The primary motif of the Volunteers was an Irish harp with the British crown mounted above it, with either the name of the company or a motto curved around it, or both, e.g. "Templepatrick Infantry" or "Liberty & Our Country". This harp and crown motif was prevalent on the Volunteer companies' flags, belt-plates and gorgets. Some included the Royal cypher "G.R.", standing for King George III. Shamrocks also commonly featured.
Other mottos included, amongst variations: For Our King & Country, Pro Rege et Patria (for King and Country), Quis Separabit (who shall separate us), and Pro Patria (for Country). Another Volunteer motto is the oft-repeated Pro Aris et Focis (for our altars and our hearths), a truncated form of Pro Caesare, Pro Aris et Focis (for our King, our altars, and our hearths), which was also used.
Competitions and awards
Competitions were held between Volunteer corps, with medals given out as marks of distinction for the best marksmen and swordsmen, as well as for the most efficient soldiers. The members of Volunteer corps from the province of Ulster, more specifically from the counties of Antrim, Armagh, Down, Londonderry, and Tyrone, featured quite prominently and took an honourable place. Examples of marksmen competitions included best shot with ball and best target shot at 100 yards. Rewards of merit were also given.
Originally each Volunteer company was an independent force typically consisting of 60 to 80 men. In some parts of the country, a company could consist of between 60 and 100, and companies were raised in each parish where the number of Protestants made it viable. Alongside the parish companies, towns had one or more companies. A company had as its highest-ranking officer a captain, followed by a lieutenant and an ensign. Companies also had surgeons and chaplains. Local Volunteer companies would later amalgamate into battalions led by colonels and generals, some of which consisted of ten to twelve companies.
An example of the amalgamation of Volunteer companies is that of the First Ulster Regiment, County Armagh. The First Armagh Company was raised in Armagh city on 1 December 1778, and on 13 January 1779, Lord Charlemont became its captain. As many new Volunteer corps were being raised throughout the county, a meeting was held at Clare on 27 December 1779, where they discussed forming these corps into battalions, with commanding officers appointed and artillery companies raised to complement them. This saw the creation of the Northern Battalion and Southern Battalion of the First Ulster Regiment.
Unlike the volunteer militias formed earlier in the 18th century, which had Crown-commissioned officers, the private members of Volunteer companies appointed their own officers in a form of military democracy, and were "subject to no Government control". These officers were subject to dismissal for misconduct or incapacity.
An example of Volunteers taking action against their own officers would be two officers commissioned to the Southern Battalion of the First Ulster Regiment: Thomas Dawson (commander) and Francis Dobbs (major). Both would also accept commissions in a Fencible regiment. This met with great disapproval amongst local volunteer companies who found them no longer acceptable as field officers. Lord Charlemont's own company, the First Armagh Company, even protested against the formation of Fencible regiments. By 1 January 1783, both Dawson and Dobbs had received their Fencible commissions and ceased to be volunteers.
Of the 154 companies of Volunteers listed in The Volunteer's Companion (1794), 114 had scarlet uniforms, 18 blue, 6 green, 1 dark green, 1 white, 1 grey, 1 buff, and 12 undetailed. The details of the uniform of each corps varied depending on their choice of colouring for the facing on their uniforms, and for some the lace and buttons, amongst other pieces; for example, the Glin Royal Artillery's uniform was "Blue, faced blue; scarlet cuffs and capes; gold lace", whilst the Offerlane Blues' uniform was "Scarlet, faced blue; silver lace". The Aghavoe Loyals had "scarlet, faced blue", whilst the Castledurrow Volunteers wore green uniforms faced with white and silver lining.
Lord Charlemont desired that all county companies should have the same uniform of scarlet coats with white facings; however, some companies had already chosen their colours, or were in existence before his involvement. Whilst information on clothing is scant, it has been suggested that most uniforms were made locally, with badges, buttons, cloth, and hats being procured from places like Belfast and Dublin. The Belfast News Letter carried advertisements from merchants offering plated and gilt Volunteer buttons, furnished belt and pouch plates, engravings, regimental uniform cloth, and even tents. The painting of Volunteer drums and colours was also offered.
The naming of some Volunteer companies may show a continuation of earlier Protestant anti-Catholic traditions, with corps named after "Protestant" victories such as the Boyne, Aughrim and Enniskillen. Another "Protestant" victory, Culloden, the final battle of the Jacobite Rising of 1745, which saw the defeat of the Young Pretender, was used by the Culloden Volunteers of Cork company.
Reviews of Volunteer corps were held since the earliest days of volunteering, with county companies travelling long distances to attend ones like the Belfast Reviews. Some reviews such as those in County Armagh originally were on a smaller scale, and consisted of a few companies assembling and performing field exercises in a particular district. They later became larger affairs with brigades consisting of battalions of companies.
The order of the day has been recorded for the Newry Review of 1785: most of the attending companies had marched to Newry on the Thursday, the day on which Lord Charlemont also arrived. On the Friday the companies that formed the First Brigade assembled and marched to the review ground, where Lord Charlemont inspected them. His arrival was announced by the firing of nine cannons. On the Saturday the same thing happened again, this time for the Second Brigade. The review also included a demonstration of the attack and defence of Newry.
As the period of the Volunteers drew to an end, some, such as those from the County Armagh Volunteers, came to consider the larger reviews a waste of time and energy. One Volunteer, Thomas Prentice, voiced a common opinion to Lord Charlemont that they would rather have a few companies meet a few times during the summer for drilling and improvement.
In March 1793 the assembly of armed associations was prohibited, making it illegal to hold a review. The last planned review was for one near Doagh on 14 September 1793 in County Antrim. Ammunition for it had been secretly dispatched a few days beforehand to companies with serviceable arms, so that they could resist any opposition they encountered. An hour before the review was to be held, news spread that the 38th Regiment, the Fermanagh Militia, and a detachment of Artillery had arrived in Doagh, resulting in the review being abandoned with no date for resumption.
The Volunteers had no unified view in regards to Catholic emancipation, and their attitudes towards Catholics were not uniformly hostile. The threat posed by Catholics was deemed near non-existent, with local Volunteers declaring themselves "under no apprehensions from the Papists". The Volunteers exerted considerable pressure on the British government to ease the Penal Laws on Catholics, as with the Relief Acts of 1778 and 1782. The passing of the Relief Act of 1778 resulted in the Catholic hierarchy giving their support to the British in the American War of Independence, even going so far as to hold fasts for the success of British arms. The war also offered a chance for Catholics to show their loyalty.
As early as June 1779 this perceived lack of threat from Catholics allowed them to enlist in some Volunteer companies, and in counties Wexford and Waterford they tried to set up their own. The Catholic hierarchy, however, were "resolutely suspicious" of the Volunteers, even though generally Catholics "cheered on the Volunteers".
At the Dungannon Convention of 1782, a resolution was passed proclaiming rejoicing at the relaxation of the Penal Laws, whilst saying that Catholics "should not be completely free from restrictions". In contrast, at Ballybay, County Monaghan, the Reverend John Rodgers addressed a meeting of Volunteers, imploring them "not to consent to the repeal of the penal laws, or to allow of a legal toleration of the Popish religion". John Wesley wrote in his Journal that the Volunteers should "at least keep the Papists in order", whilst his letter to the Freeman's Journal in 1780, with which many would have agreed, argued that he would not have the Catholics persecuted at all, but rather hindered from being able to cause harm.
County Armagh disturbances
In the 1780s sectarian tensions rose to dangerous levels in County Armagh, culminating in sectarian warfare between the Protestant Peep o' Day Boys and the Catholic Defenders that raged for over a decade. Many local Volunteers, holding partisan views, became involved in the conflict. In November 1788, the Benburb Volunteers were taunted by a "Catholic mob" near Blackwaterstown; they then opened fire upon the Catholics, killing two and mortally wounding three others. In July 1789, the Volunteers assaulted the Defenders who had assembled at Lisnaglade Fort near Tandragee, resulting in more lives being lost. In 1797 Dr. William Richardson wrote a detailed analysis for the 1st Marquess of Abercorn, in which he claimed that the troubles were caused by the excitement of volunteering during the American Revolutionary War, which gave "the people high confidence in their own strength".
Belfast 1st Volunteer Company
Outside of Ulster, Catholics found few supporters, as Protestants were a minority concerned with their privileges. In Ulster, Protestants and Catholics were almost equal in number and sectarian rivalries remained strong, exemplified by the County Armagh disturbances. In contrast, east of the River Bann in counties Antrim and Down, the Protestants were such an "overwhelming majority" that they had little to fear from Catholics, and became their biggest defenders.
According to The Volunteers Companion, printed in 1784, there were five different Volunteer companies in Belfast, the first of which was the Belfast 1st Volunteer Company, formed on 17 March 1778. Delegates from this company to the national convention of 1782 were "bitterly disappointed" that their fellow Volunteers were still opposed to giving Catholics the vote. In 1783 they became the first company of Volunteers in Ireland to "defiantly" admit Catholics into their ranks, and in May 1784 attended mass at St. Mary's chapel. Indeed, the building of this chapel was largely paid for by the Belfast 1st Volunteer Company. In sharp contrast to this, no Roman Catholic was ever admitted into a County Armagh company.
In 1791, the Belfast 1st Volunteer Company passed its own resolution arguing in favour of Catholic emancipation. In October that year the Society of United Irishmen was founded, initially as an offshoot of the Volunteers. In 1792, a new radical company was created as part of the Belfast Regiment of Volunteers, the Green Company, under which guise the United Irishmen held their initial meetings. Wolfe Tone, a leading member of the United Irishmen, was elected an honorary member of the Green Company, which he also calls the First Company, hinting that the Belfast 1st Volunteer Company reorganised itself into the Green Company.
Eventually the United Irishmen would advocate revolutionary and republican ideals inspired by the French Revolution. Ironically, it was only 31 years earlier that Belfast had called upon volunteer militias from counties Antrim, Armagh, and Down to defend it from the French.
The Volunteers became less influential after the end of the war in America in 1783, and rapidly declined except in Ulster. Whilst volunteering remained of interest in counties Antrim and Down, in other places, such as neighbouring County Armagh, both interest and membership were in serious decline.
Internal politics too played a role in the Volunteers' demise, with sharp divisions of opinion regarding political affairs, possibly including "disapproval of the revolutionary and republican sentiments then being so freely expressed", especially amongst northern circles.
The ultimate demise of the Volunteers occurred during 1793 with the passing of the Gunpowder Act and Convention Act, both of which "effectively killed off Volunteering", whilst the creation of a militia, followed by the yeomanry, served to deprive the Volunteers of their justification as a voluntary defence force.
Whilst some Volunteer members would join the United Irishmen, the majority were inclined towards the Yeomanry, which was used to help put down the United Irishmen's rebellion in 1798. Some of these United Irishmen and Yeomen had received their military training in the same Volunteer company; for example, the Ballymoney company's Alexander Gamble became a United Irishman, whilst George Hutchinson, a captain in the company, joined the Yeomanry.
It was the Volunteers of 1782 that launched a paramilitary tradition in Irish politics, a tradition that, whether nationalist or unionist, has continued to shape Irish political activity with the ethos that "the force of argument had been trumped by the argument of force".
The Volunteers of the 18th century set a precedent for using the threat of armed force to influence political reform. George Washington, also a member of the landed gentry, had written about them: "Patriots of Ireland, your cause is our own". Their political aims were limited and their legacy was ambiguous, combining elements of what would later become both Irish nationalism and Irish unionism.
The Ulster Volunteers, founded in 1912 to oppose Irish Home Rule, made frequent reference to the Irish Volunteers and attempted to link its activities with theirs. The two shared many features, such as regional strength, leadership, and a Protestant recruitment base. The Irish Volunteers, formed in November 1913, were in part inspired by and modelled on the Ulster Volunteers, but its founders, including Eoin MacNeill and Patrick Pearse, also drew heavily upon the legacy of the 18th-century Volunteers.
The Irish historian and writer James Camlin Beckett stated that when the Act of Union between Great Britain and Ireland was being debated in the Parliament of Ireland throughout 1800, the "national spirit of 1782 was dead". Despite this, Henry Grattan, who had helped secure the Irish parliament's legislative independence in 1782, bought Wicklow borough at midnight for £1,200 and, after dressing in his old Volunteer uniform, arrived at the House of Commons of the Irish parliament at 7 a.m., after which he gave a two-hour speech against the proposed union.
Denis McCullough and Bulmer Hobson of the Irish Republican Brotherhood (IRB) established the Dungannon Clubs in 1905 "to celebrate those icons of the constitutionalist movement, the Irish Volunteers of 1782".
MacNeill stated of the original Volunteers, "the example of the former Volunteers (of 1782) is not that they did not fight but that they did not maintain their organisation till their objects had been secured".
One of the mottos used by the Volunteers, Quis Separabit, meaning "who shall separate us", in use by them from at least 1781, is also the motto of the Order of St. Patrick (founded in 1783) and of several Irish British Army regiments such as the Royal Dragoon Guards, Royal Ulster Rifles (previously Royal Irish Rifles), 4th Royal Irish Dragoon Guards, 88th Regiment of Foot (Connaught Rangers) and its successor the Connaught Rangers. It was also adopted by the anti-Home Rule organisation, the Ulster Defence Union, and is the motto of the paramilitary Ulster Defence Force.
- Blackstock, Allan (2001). Double Traitors?: The Belfast Volunteers and Yeomen, 1778–1828. Belfast Society publications, issue 2. Ulster Historical Foundation. p. 2. ISBN 978-0-9539604-1-5. Retrieved 3 October 2009.
- Garvin, Tom (1981). The Evolution of Irish Nationalist Politics. Gill and Macmillan Ltd. p. 20. ISBN 0-7171-1312-4.
- Curtis, Liz (1994). The Cause of Ireland: From the United Irishmen to Partition. Beyond the Pale Publications. p. 4. ISBN 0-9514229-6-0.
- Bardon, Jonathan; A History of Ulster, page 217-220. The Black Staff Press, 2005. ISBN 0-85640-764-X
- Ulster Museum, History of Belfast exhibition
- O Snodaigh, Padraig; The Irish Volunteers 1715–1793 – A List of the Units, p. 88. Irish Academic Press, Dublin.
- Day, Robert; The Ulster Volunteers of '82: Their Medals, Badges, &c., Ulster Journal of Archaeology, Second Series, Vol. 4, No. 2 (Jan. 1898).
- Bigger, Francis Joseph; Ulster Volunteers in 1760, Ulster Journal of Archaeology, Second Series, Vol. 8, No. 4 (Oct. 1902).
- Google Books – Collectanea Politica
- Bigger, Francis Joseph; The National Volunteers of Ireland, 1782, Ulster Journal of Archaeology, Second Series, Vol. 15, No. 2/3 (May 1909).
- Paterson, T. G. F.; The County Armagh Volunteers of 1778–1793, Ulster Journal of Archaeology, Third Series, Vol. 4 (1941)
- Bartlett, Thomas (2010). Ireland: A History. Cambridge University Press. p. 179. ISBN 978-1-107-42234-6.
- Stewart, A.T.Q. (1998). A Deeper Silence: The Hidden Origins of the United Irishmen. Blackstaff Press. pp. 4–5. ISBN 0-85640-642-2.
- Cruise O'Brien, Conor (1994). The Great Melody: A Thematic Biography and Commented Anthology of Edmund Burke. American Politics and Political Economy Series. University of Chicago Press. p. 179. ISBN 978-0-226-61651-3. Retrieved 3 October 2009.
- Berresford Ellis, Peter (1985). A History of the Irish Working Class. Pluto. pp. 63–64. ISBN 0-7453-0009-X.
- F.X. Martin, T.W. Moody (1980). The Course of Irish History. Mercier Press. pp. 232–233. ISBN 1-85635-108-4.
- Ian McBride. History and Memory in Modern Ireland. Cambridge University Press. ISBN 0-521-79366-1.
- Jonah Barrington's Memoirs; chapter 7 on the Volunteers
- Duffy, Sean (2005). A Concise History of Ireland. pp. 132–133. ISBN 0-7171-3810-0.
- Paterson, T. G. F.; The County Armagh Volunteers of 1778–1793: List of Companies, Ulster Journal of Archaeology, Third Series, Vol. 6 (1943).
- Bardon, Jonathan; A History of Ulster, page 214-217. The Black Staff Press, 2005. ISBN 0-85640-764-X
- W. T. Latimer; Church of the Volunteers, Dungannon, Ulster Journal of Archaeology, Second Series, Vol. 1, No. 1 (Sep. 1894).
- F.X. Martin, T.W. Moody (1994). The Course of Irish History. Mercier Press. p. 233. ISBN 1-85635-108-4.
- Duffy, Sean (2005). A Concise History of Ireland. pp. 133–134. ISBN 0-7171-3810-0.
Quote: We know our duty to our Sovereign, and are loyal. We know our duty to ourselves, and are resolved to be free. We seek for our rights and no more than our rights
- British Museum, Pelham MSS., i, p. 308 (printed in Deputy Keeper's Report, N.I. Record Office (1936), p. 16)
- Biggar, Francis Joseph; The Ulster Volunteers of '82: Their Medals, Badges, &c. Gillball Volunteers, Ulster Journal of Archaeology, Second Series, Vol. 5, No. 1 (Oct. 1898).
- Maitland, W. H.; History of Magherafelt, page 13. Moyola Books, 1916, republished 1988. ISBN 0-9511836-2-1
- Bigger, Francis Joseph; The National Volunteers of Ireland, 1782, Ulster Journal of Archaeology, Second Series, Vol. 15, No. 2/3 (May 1909)
- Longman, Hurst, Rees, Orme, and Brown: Miscellaneous works of the Right Honourable Henry Grattan, 1822
- Queen's University Belfast. "Act of Union". Retrieved 14 November 2011.
- Bardon, Jonathan; A History of Ulster, page 223. The Black Staff Press, 2005. ISBN 0-85640-764-X
- Google Books – The Four Nations: A History of the United Kingdom, by Frank Welsh
- Ulster Museum – Henry Joy McCracken's Volunteer Coat
- Google Books – The New monthly magazine
- Connolly, S.J., Oxford Companion to Irish History, page 611. Oxford University Press, 2007. ISBN 978-0-19-923483-7
- Thomas Camac, Robert Day and William Cathcart; The Ulster Volunteers of 1782: Their Medals, Badges, Flags, &c. (Continued), Ulster Journal of Archaeology, Second Series, Vol. 6, No. 1 (Jan. 1900).
- Bartlett, Thomas (2010). Ireland: A History. Cambridge University Press. p. 190. ISBN 978-0-521-19720-5.
- Bowman, Timothy. Carson's Army, The Ulster Volunteer Force, 1910–22. Manchester University Press. pp. 16, 68. ISBN 978-0-7190-7372-4.
- Jackson, Alvin; Home Rule - An Irish History 1800-2000, page 120. Weidenfeld & Nicolson, 2003. ISBN 1-84212-724-1. Quote: The UVF was a direct inspiration for the Irish Volunteers, formed in November 1913 by those on the nationalist side who feared that Home Rule had stalled.
- Kelly, M. J. (2006). The Fenian Ideal and Irish Nationalism, 1882–1916. Volume 4 of Irish historical monographs series. Boydell & Brewer Ltd. pp. 213–214. ISBN 978-1-84383-204-1. Retrieved 3 February 2014.
- "The Union". University College Cork. Retrieved 3 November 2011.
- Charles Townshend, Easter 1916, The Irish Rebellion (2006), p18
- Townshend, Charles (1983). Political Violence in Ireland: Government and Resistance since 1848. Oxford Historical Monographs. Clarendon Press. p. 295. ISBN 978-0-19-821753-4. Retrieved 14 January 2010.
- Day, Robert; On Three Gold Medals of the Irish Volunteers, The Journal of the Royal Society of Antiquaries of Ireland, Fifth Series, Vol. 10, No.4 (31 December 1900).
- Cowan, Rosie (28 September 2002). "The rise and fall of Johnny Adair". The Guardian (London). Retrieved 25 October 2010.
- Stewart, A.T.Q. (1998). A Deeper Silence: The Hidden Origins of the United Irishmen. Blackstaff, ISBN 0-85640-642-2.
- Jackson, T.A. (1946). Ireland Her Own. Cobbett Press.
- Curtis, Liz (1994). The Cause of Ireland: From the United Irishmen to Partition. Beyond the Pale Publications. ISBN 0-9514229-6-0.
- F.X. Martin, T.W. Moody (1994). The Course of Irish History. Mercier Press. ISBN 1-85635-108-4.
- Llwelyn, Morgan (2001). Irish Rebels. O'Brien Press. ISBN 0-86278-857-9.
- Connolly, S.J., Oxford Companion to Irish History, Oxford University Press, 2007. ISBN 978-0-19-923483-7
- Kelly, M. J. (2006). The Fenian Ideal and Irish Nationalism, 1882–1916. Boydell & Brewer Ltd. ISBN 978-1-84383-204-1.
- Townshend, Charles (1983). Political violence in Ireland: government and resistance since 1848. Clarendon Press. ISBN 978-0-19-821753-4. | https://en.wikipedia.org/wiki/Irish_Volunteers_(18th_century) |
4.03125 | National Research Council, The National Academies
Video length: 4:37 min.
Notes From Our Reviewers
The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Read what our review team had to say about this resource below.
Teaching Tips
- A good video to show at the beginning of a unit on climate change.
- May need to break video into sections because the information presented is very dense.
- Students will need scaffolding.
- High level - recommended for advanced classes.
About the Science
- Comments from expert scientist: The video provides clear evidence of rising surface temperatures over the past century. It refers to the multitude of observational records - including in-situ and satellite measurements of temperature, snow and ice cover - to make the case that the Earth is warming.
About the Pedagogy
- There are no supporting teaching resources with this video, though the transcript can be downloaded as a pamphlet.
Technical Details/Ease of Use
- Captions can be changed to other languages.
- High quality video and resolution.
- Whole series is here: http://www.youtube.com/playlist?annotation_id=annotation_392971&feature=iv&list=PL38EB9C0BC54A9EE2&src_vid=-IuVzcp39rs
See how this Video supports the Next Generation Science Standards:
Disciplinary Core Ideas: 7
HS-ESS2.A1:Earth’s systems, being dynamic and interacting, cause feedback effects that can increase or decrease the original changes.
HS-ESS2.A3:The geological record shows that changes to global and regional climate can be caused by interactions among changes in the sun’s energy output or Earth’s orbit, tectonic events, ocean circulation, volcanic activity, glaciers, vegetation, and human activities. These changes can occur on a variety of time scales from sudden (e.g., volcanic ash clouds) to intermediate (ice ages) to very long-term tectonic cycles.
HS-ESS2.C1:The abundance of liquid water on Earth’s surface and its unique combination of physical and chemical properties are central to the planet’s dynamics. These properties include water’s exceptional capacity to absorb, store, and release large amounts of energy, transmit sunlight, expand upon freezing, dissolve and transport materials, and lower the viscosities and melting points of rocks.
HS-ESS2.D1:The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space.
HS-ESS2.D2:Gradual atmospheric changes were due to plants and other organisms that captured carbon dioxide and released oxygen.
HS-ESS2.D3:Changes in the atmosphere due to human activity have increased carbon dioxide concentrations and thus affect climate.
HS-ESS2.E1:The many dynamic and delicate feedbacks between the biosphere and other Earth systems cause a continual co-evolution of Earth’s surface and the life that exists on it. | http://cleanet.org/resources/43784.html |
4.1875 | For decades, scientists and the public alike have wondered why some fireflies exhibit synchronous flashing, in which large groups produce rhythmic, repeated flashes in unison sometimes lighting up a whole forest at once.
Now, UConn's Andrew Moiseff, a professor in the Department of Physiology and Neurobiology in the College of Liberal Arts and Sciences, has conducted the first experiments on the purpose of this phenomenon. His results, reported in the journal Science, suggest that synchronous flashing encourages female fireflies' recognition of suitable mates.
"There have been lots of really good observations and hypotheses about firefly synchrony," Moiseff says. "But until now, no one has experimentally tested whether synchrony has a function."
Moiseff has had an interest in fireflies since he was an undergraduate at Stony Brook University. There he met his current collaborator, Jonathan Copeland of Georgia Southern University, who was a graduate student at the time. When the two graduated, Moiseff moved on to pursue other research interests. But in 1992, Copeland received an enlightening phone call.
"He had commented in a paper that firefly synchrony was rare, and mostly seen in southeast Asia," says Moiseff. "But a naturalist from Tennessee called him to say that each summer the fireflies at her summer cabin all flashed at the same time."
Moiseff and Copeland flew down to the Great Smoky Mountains National Park to check out the fireflies and, says Moiseff, they've been going back every year since.
Fireflies, which are actually a type of beetle, produce bioluminescence as a mating tool, in which males display a species-specific pattern of flashes while cruising through the air, looking for females, says Moiseff. These patterns consist of one or more flashes followed by a characteristic pause, during which female fireflies, perched on leaves or branches, will produce a single response flash if they spot a suitable male.
Of the roughly 2,000 species of fireflies around the world, scientists estimate that about 1 percent synchronize their flashes over large areas. Thousands of male fireflies may blink at once, creating a spectacular light show. In their current study, Moiseff and Copeland wondered what evolutionary benefit this species gains from synchronous flashing.
The two hypothesized that males synchronize to facilitate the females' ability to recognize the particular flashing pattern of their own species. To test this theory, they collected females of the synchronous species Photinus carolinus from the Great Smoky Mountains National Park and exposed them in the laboratory to groups of small blinking lights meant to mimic male fireflies. Each individual light produced the P. carolinus flashing pattern, but the experimenters varied the degree to which the flashes were in synch with one another.
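A minimal sketch of that stimulus design, as we read it from the description here: several simulated males repeat the same flash pattern, and a single jitter parameter controls how far out of synchrony they drift. The pattern timings and the jitter model below are our own assumptions for illustration, not the actual experimental parameters.

```python
import random

# Sketch of the virtual-male stimulus described above. Each simulated
# male repeats a species-like pattern (two flashes, then a long pause);
# PATTERN, PERIOD and the jitter model are invented for illustration.
PATTERN = [0.0, 0.5]   # two flash onsets half a second apart...
PERIOD = 4.0           # ...repeating every 4 seconds

def flash_times(n_cycles: int, jitter: float) -> list[float]:
    """Flash onset times (seconds) for one simulated male.

    jitter=0 keeps every male perfectly in unison; larger values
    desynchronize the males while leaving each one's own pattern intact.
    """
    offset = random.uniform(-jitter, jitter)  # this male's phase error
    return [c * PERIOD + t + offset
            for c in range(n_cycles) for t in PATTERN]

synchronous = [flash_times(3, jitter=0.0) for _ in range(8)]  # in unison
scattered = [flash_times(3, jitter=2.0) for _ in range(8)]    # out of synch
```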
"We had the technology to design something that we thought would create a virtual world for these females," says Moiseff.
Their results showed that females responded more than 80 percent of the time to flashes that were in perfect unison or in near-perfect unison. But when the flashes were out of synch, the females response rate was 10 percent or less.
Since synchronous species are often observed in high densities, Moiseff and Copeland concluded that their results suggest a physiological problem in the females' information processing. Male fireflies are typically in flight while searching for females, so their flashes appear in different locations over time. Therefore, says Moiseff, females must be able to recognize visual cues over a large area.
But, he points out, this behavior presents a problem in areas crowded with male fireflies. Instead of seeing a single flying male, the female would see a cluttered landscape of flashes that could be individually unrecognizable.
"When males are flashing in high densities, the female's inability to focus on just one male would make it very difficult for her to detect her species-specific pattern," Moiseff says. "So if the males synchronize, it can maintain the fidelity of the signal in the presence of many other males."
Whether the females can't or simply choose not to discriminate spatial information on small scales is unclear, says Moiseff. His future research will focus on questions that address whether physiological constraints or behavioral decisions are driving the evolution of synchrony.
Overall, says Moiseff, he is interested in the role that animal physiology plays in shaping evolution.
"Animals have evolved to solve unique problems in many different ways, and I'm interested in how they do that," he says. "Fireflies have these tiny heads and these tiny brains, but they can do some complex and amazing things."
| http://phys.org/news/2011-07-fireflies-synch.html |
4.34375 | Although the Jews were their primary targets, the Nazis and their collaborators also persecuted other groups for racial or ideological reasons. Among the earliest victims of Nazi discrimination in Germany were political opponents—primarily Communists, Socialists, Social Democrats, and trade union leaders. The Nazis also persecuted authors and artists whose works they considered subversive or who were Jewish, subjecting them to arrest, economic restrictions, and other forms of discrimination. The Nazis targeted Roma (Gypsies) on racial grounds. Roma were among the first to be killed in mobile gas vans at the Chelmno killing center in Poland. The Nazis also deported more than 20,000 Roma to the Auschwitz-Birkenau camp, where most of them were murdered in the gas chambers. The Nazis viewed Poles and other Slavic peoples as inferior. Poles who were considered ideologically dangerous (including intellectuals and Catholic priests) were targeted for execution. Between 1939 and 1945, at least 1.5 million Polish citizens were deported to German territory for forced labor. Hundreds of thousands were also imprisoned in Nazi concentration camps. It is estimated that the Germans killed at least 1.9 million non-Jewish Polish civilians during World War II.
During the autumn and winter of 1941-1942 in the occupied Soviet Union, German authorities conducted a racist policy of mass murder of Soviet prisoners of war: Jews, persons with "Asiatic features," and top political and military leaders were selected out and shot. Around three million others were held in makeshift camps without proper shelter, food, or medicine with the deliberate intent that they die. In Germany, the Nazis incarcerated Christian church leaders who opposed Nazism, as well as thousands of Jehovah's Witnesses who refused to salute Adolf Hitler or to serve in the German army. Through the so-called “Euthanasia Program,” the Nazis murdered an estimated 200,000 individuals with mental or physical disabilities. The Nazis also persecuted male homosexuals, whose behavior they considered a hindrance to the preservation of the German nation. | http://www.ushmm.org/wlc/en/article.php?ModuleId=10007871 |
4.5 | Algebra: In Simplest Terms
In this series, host Sol Garfunkel explains how algebra is used for solving real-world problems and clearly explains concepts that may baffle many students. Graphic illustrations and on-location examples help students connect mathematics to daily life. The series also has applications in geometry and calculus instruction.
1. Introduction—An introduction to the series, this program presents several mathematical themes and emphasizes why algebra is important in today’s world.
2. The Language of Algebra—This program provides a survey of basic mathematical terminology. Content includes properties of the real number system and the basic axioms and theorems of algebra. Specific terms covered include algebraic expression, variable, product, sum term, factors, common factors, like terms, simplify, equation, sets of numbers, and axioms.
3. Exponents and Radicals—This program explains the properties of exponents and radicals: their definitions, their rules, and their applications to positive numbers.
4. Factoring Polynomials—This program defines polynomials and describes how the distributive property is used to multiply common monomial factors with the FOIL method. It covers factoring, the difference of two squares, trinomials as products of two binomials, the sum and difference of two cubes, and regrouping of terms.
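For instance, the difference-of-two-squares pattern mentioned here works as follows (a generic illustration of our own, not an example taken from the broadcast):

```latex
x^2 - 9 = (x+3)(x-3),
\qquad\text{checked by FOIL: } (x+3)(x-3) = x^2 - 3x + 3x - 9 = x^2 - 9.
```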
5. Linear Equations—This is the first program in which equations are solved. It shows how solutions are obtained, what they mean, and how to check them using one unknown.
6. Complex Numbers—To the sets of numbers reviewed in previous lessons, this program adds complex numbers — their definition and their use in basic operations and quadratic equations.
7. Quadratic Equations—This program reviews the quadratic equation and covers standard form, factoring, checking the solution, the Zero Product Property, and the difference of two squares.
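As a worked illustration of the standard-form method (again our own example, not one from the program), for an equation in the form ax^2 + bx + c = 0 the quadratic formula gives:

```latex
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a};
\quad\text{e.g. } x^2 - 5x + 6 = 0 \;\Rightarrow\; x = \frac{5 \pm \sqrt{25 - 24}}{2} = 3 \text{ or } 2,
\text{ matching the factoring } (x-3)(x-2) = 0.
```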
8. Inequalities—This program teaches students the properties and solution of inequalities, linking positive and negative numbers to the direction of the inequality.
9. Absolute Value—In this program, the concept of absolute value is defined, enabling students to use it in equations and inequalities. One application example involves systolic blood pressure, using a formula incorporating absolute value to find a person’s “pressure difference from normal.”
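The blood-pressure application can be sketched like this; the 120 mmHg baseline is an assumed "normal" systolic value for illustration, and the formula used in the program itself may differ in detail:

```latex
d = |P - 120|;
\quad P = 135 \Rightarrow d = |135 - 120| = 15,
\qquad P = 110 \Rightarrow d = |110 - 120| = 10.
```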
10. Linear Relations—This program looks at the linear relationship between two variables, expressed as a set of ordered pairs. Students are shown the use of linear equations to develop and provide information about two quantities, as well as the applications of these equations to the slope of a line.
11. Circle and Parabola—The circle and parabola are presented as two of the four conic sections explored in this series. The circle, its various measures when graphed on the coordinate plane (distance, radius, etc.), its related equations (e.g., center-radius form), and its relationships with other shapes are covered, as is the parabola with its various measures and characteristics (focus, directrix, vertex, etc.).
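For example, the center-radius form mentioned here looks like this (a generic example of our own):

```latex
(x-h)^2 + (y-k)^2 = r^2;
\quad\text{center } (2,-1),\ r = 3 \;\Rightarrow\; (x-2)^2 + (y+1)^2 = 9.
```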
12. Ellipse and Hyperbola—The ellipse and hyperbola, the other two conic sections examined in the series, are introduced. The program defines the two terms, distinguishing between them with different language, equations, and graphic representations.
13. Functions—This program defines a function, discusses domain and range, and develops an equation from real situations. The cutting of pizza and encoding of secret messages provide subjects for the demonstration of functions and their usefulness.
14. Composition and Inverse Functions—Graphics are used to introduce composites and inverses of functions as applied to calculation of the Gross National Product.
15. Variation—In this program, students are given examples of special functions in the form of direct variation and inverse variation, with a discussion of combined variation and the constant of proportionality.
16. Polynomial Functions—This program explains how to identify, graph, and determine all intercepts of a polynomial function. It covers the role of coefficients; real numbers; exponents; and linear, quadratic, and cubic functions. This program touches upon factors, x-intercepts, and zero values.
17. Rational Functions—A rational function is the quotient of two polynomial functions. The properties of these functions are investigated using cases in which each rational function is expressed in its simplified form.
18. Exponential Functions—Students are taught the exponential function, as illustrated through formulas. The population of Massachusetts, the “learning curve,” bacterial growth, and radioactive decay demonstrate these functions and the concepts of exponential growth and decay.
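A typical decay example of the kind the program describes (the numbers here are invented for illustration): with initial amount N_0 and half-life h,

```latex
N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/h};
\quad N_0 = 800,\ h = 5 \text{ years} \;\Rightarrow\; N(10) = 800 \cdot \tfrac{1}{4} = 200.
```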
19. Logarithmic Functions—This program covers the logarithmic relationship, the use of logarithmic properties, and the handling of a scientific calculator. How radioactive dating and the Richter scale depend on the properties of logarithms is explained.
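In simplified form, the Richter-scale dependence on logarithms can be sketched as follows (the program's own treatment may differ in detail):

```latex
M = \log_{10}\frac{A}{A_0};
\quad\text{an amplitude 1000 times the reference gives } M = \log_{10} 1000 = 3.
```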
20. Systems of Equations—The case of two linear equations in two unknowns is considered throughout this program. Elimination and substitution methods are used to find single solutions to systems of linear and nonlinear equations.
21. Systems of Linear Inequalities—Elimination and substitution are used again to solve systems of linear inequalities. Linear programming is shown to solve problems in the Berlin airlift, production of butter and ice cream, school redistricting, and other situations while constraints, corner points, objective functions, the region of feasible solutions, and minimum and maximum values are also explored.
22. Arithmetic Sequences and Series—When the growth of a child is regular, it can be described by an arithmetic sequence. This program differentiates between arithmetic and nonarithmetic sequences as it presents the solutions to sequence- and series-related problems.
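The growth example can be written out like this (the heights and yearly increment are invented for illustration):

```latex
a_n = a_1 + (n-1)d;
\quad a_1 = 100 \text{ cm},\ d = 6 \text{ cm/year} \;\Rightarrow\; a_5 = 100 + 4 \cdot 6 = 124 \text{ cm}.
```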
23. Geometric Sequences and Series—This program provides examples of geometric sequences and series (f-stops on a camera and the bouncing of a ball), explaining the meaning of nonzero constant real number and common ratio.
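For the bouncing ball, the successive rebound heights form a geometric sequence, and their total is a geometric series (the 10 m start and the ratio 1/2 are assumed for illustration):

```latex
S = \sum_{n=0}^{\infty} a r^n = \frac{a}{1-r};
\quad a = 10 \text{ m},\ r = \tfrac{1}{2} \;\Rightarrow\; S = \frac{10}{1 - \frac{1}{2}} = 20 \text{ m}.
```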
24. Mathematical Induction—Mathematical proofs applied to hypothetical statements shape this discussion on mathematical induction. This segment exhibits special cases, looks at the development of number patterns, relates the patterns to Pascal’s triangle and factorials, and elaborates the general form of the theorem.
25. Permutations and Combinations—How many variations in a license plate number or poker hand are possible? This program answers the question and shows students how it’s done.
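For the license-plate question, a typical count runs as follows (the three-letters-plus-three-digits format is our assumption, not necessarily the one used in the program):

```latex
26^3 \times 10^3 = 17{,}576{,}000 \text{ possible plates};
\qquad \binom{52}{5} = 2{,}598{,}960 \text{ five-card poker hands}.
```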
26. Probability—In this final program, students see how the various techniques of algebra that they have learned can be applied to the study of probability. The program shows that games of chance, health statistics, and product safety are areas in which decisions must be made according to our understanding of the odds.
| http://www.prairiepublic.org/education/instructional-resources?post=13951 |
4.15625 | Apollo 13 was the seventh crewed mission of NASA's Project Apollo and the third mission intended to land on the Moon. The flight was commanded by Jim Lovell. The other astronauts on board were Jack Swigert and Fred Haise.
The craft was launched successfully toward the Moon, but two days after launch a faulty oxygen tank exploded, damaging the Service Module and causing a loss of oxygen and electrical power. There was a very real chance that the astronauts would die before they could return to Earth, because they were left very short of oxygen. On the Apollo spacecraft, oxygen was not only used for breathing; it also fed devices called fuel cells that generated electricity. The crew therefore conserved their remaining air and power by turning off almost all of their electrical equipment, including heaters, and it became very cold in the spacecraft.
In order to stay alive, the astronauts also had to move into the Apollo Lunar Module and make it work as a sort of "lifeboat".
When they approached the Earth they were not sure that their parachutes, needed to slow the Command Module down, would work. The parachutes were deployed by small explosive charges fired by batteries. The cold could have made the batteries fail, in which case the parachutes would not open and the Command Module would hit the ocean so fast that all aboard would be killed.
The flight
Apollo 13 launched into Earth orbit on 11 April 1970 at 19:13 UTC. The crew flew from Cape Canaveral and planned to land at Fra Mauro. Despite the hardships, they made it back to Earth, and even though they never landed on the Moon, the flight became very well known.
Some people regarded the mission as a failure because the crew did not land on the Moon. Others, however, thought that returning three men safely to Earth in a badly damaged spacecraft was possibly the National Aeronautics and Space Administration's (NASA's) greatest accomplishment.
Coming up to re-entry, it was thought that the electrical equipment would short circuit because the water vapor in the astronauts' breath had condensed all over the computers. However, the electronics worked fine. | https://simple.wikipedia.org/wiki/Apollo_13 |
4.15625 | A chance discovery of 80-year-old photo plates in a Danish basement is providing new insight into how Greenland glaciers are melting today.
Researchers at the National Survey and Cadastre of Denmark -- that country's federal agency responsible for surveys and mapping -- had been storing the glass plates since explorer Knud Rasmussen's expedition to the southeast coast of Greenland in the early 1930s.
In this week's online edition of Nature Geoscience, Ohio State University researchers and colleagues in Denmark describe how they analyzed ice loss in the region by comparing the images on the plates to aerial photographs and satellite images taken from World War II to today.
Taken together, the imagery shows that glaciers in the region were melting even faster in the 1930s than they are today, said Jason Box, associate professor of geography and researcher at the Byrd Polar Research Center at Ohio State. A brief cooling period starting in the mid-20th century allowed new ice to form, and then the melting began to accelerate again in the 2000s.
"Because of this study, we now have a detailed historical analogue for more recent glacier loss," Box said. "And we've confirmed that glaciers are very sensitive indicators of climate."
Pre-satellite observations of Greenland glaciers are rare. Anders Anker Bjørk, doctoral fellow at the Natural History Museum of Denmark and lead author of the study, is trying to compile all such imagery. He found a clue in the archives of The Arctic Institute in Copenhagen in 2011.
"We found flight journals for some old planes, and in them was a reference to National Survey and Cadastre of Denmark," Bjørk said.
As it happens, researchers at the National Survey had already contacted Bjørk about a find of their own.
"They were cleaning up in the basement and had found some old glass plates with glaciers on them. The reason the plates were forgotten was that they were recorded for mapping, and once the map was produced they didn't have much value."
Those plates turned out to be documentation of Rasmussen's 7th Thule Expedition to Greenland. They contained aerial photographs of land, sea and glaciers in the southeast region of the country, along with travel photos of Rasmussen's team.
The researchers digitized all the old images and used software to look for differences in the shape of the southeast Greenland coastline where the ice meets the Atlantic Ocean. Then they calculated the distance the ice front moved in each time period.
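A minimal sketch of that front-position calculation (our illustration; the function name and the example numbers are hypothetical, not taken from the study):

```python
# Estimate a glacier's average frontal retreat rate from two surveys.
# Front positions are measured along the fjord in meters, increasing
# seaward, so a retreating front has a smaller value at the later date.

def retreat_rate(front_then_m: float, front_now_m: float,
                 year_then: int, year_now: int) -> float:
    """Average retreat in meters per year (positive = retreating)."""
    return (front_then_m - front_now_m) / (year_now - year_then)

# Hypothetical glacier whose front pulled back 1,500 m over 75 years.
print(retreat_rate(front_then_m=1500.0, front_now_m=0.0,
                   year_then=1933, year_now=2008))  # prints 20.0
```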
Over the 80 years, two events stand out: glacial retreats from 1933-1934 and 2000-2010. In the 1930s, fewer glaciers were melting than are today, and most of those that were melting were land-terminating glaciers, meaning that they did not contact the sea.
Those that were melting retreated an average of 20 meters per year -- the fastest retreating at 374 meters per year. Fifty-five percent of the glaciers in the study had similar or higher retreat rates during the 1930s than they do today.
Still, more glaciers in southeast Greenland are retreating today, and the average ice loss is 50 meters per year. That's because a few glaciers with very fast melting rates -- including one retreating at 887 meters per year -- boost the overall average.
But to Box, the most interesting part of the study is what happened between the two melting events.
From 1943-1972, southeast Greenland cooled -- probably due to sulfur pollution, which reflects sunlight away from Earth.
Sulfur dioxide is a poisonous gas produced by volcanoes and industrial processes. It has been tied to serious health problems and death, and is also the main ingredient in acid rain. Its presence in the atmosphere peaked just after the Clean Air Act was established in 1963. As it was removed from the atmosphere, the earlier warming resumed.
The important point is not that deadly pollution caused the climate to cool, but rather that the brief cooling allowed researchers to see how Greenland ice responded to the changing climate.
The glaciers responded to the cooling more rapidly than researchers had seen in earlier studies. Sixty percent of the glaciers advanced during that time, while 12 percent were stationary. And now that the warming has resumed, the glacial retreat is dominated by marine-terminating outlet glaciers, the melting of which contributes to sea level rise.
"From these images, we see that the mid-century cooling stabilized the glaciers," Box said. "That suggests that if we want to stabilize today's accelerating ice loss, we need to see a little cooling of our own."
Southeast Greenland is a good place to study the effects of climate change, he explained, because the region is closely tied to air and water circulation patterns in the North Atlantic.
"By far, more storms pass through this region -- transporting heat into the Arctic -- than anywhere else in the Northern Hemisphere. Climate change brings changes in snowfall and air temperature that compete for influence on a glacier's net behavior," he said.
Co-authors on the study include Kurt H. Kjær, Niels J. Korsgaard, Kristian K. Kjeldsen, and Svend Funder at the Natural History Museum of Denmark, University of Copenhagen; Shfaqat A. Khan of the National Space Institute, Technical University of Denmark; Camilla S. Andresen of the Department of Marine Geology and Glaciology at the Geological Survey of Denmark and Greenland; and Nicolaj K. Larsen of the Department of Geoscience at Aarhus University.
Photos, satellite images and other data for the study were provided by the National Survey and Cadastre; The Scott Polar Research Institute in the United Kingdom; the Arctic Institute in Denmark; researchers Bea Csatho and Sudhagar Nagarajan of the Geology Department at the University at Buffalo; and the NASA Land Processes Distributed Active Archive Center at the USGS/Earth Resources Observation and Science Center of Sioux Falls, S.D. Andreas Pedersen of the Danish company MapWork wrote the script for the software used in the study.
This work is a part of the RinkProject funded by the Danish Research Council and the Commission for Scientific Research in Greenland.
| http://www.sciencedaily.com/releases/2012/05/120529144339.htm |
4.25 |
The lesson begins by associating the distance between two points with the right triangle that may be formed by joining the points and extending horizontal and vertical lines through the points. This linking is generalized to derive the distance formula for any two points in the plane. The midpoint formula is then derived by taking the average of the coordinates of the two points. Using the distance formula, the equation for circle is derived and then examples follow for finding the equation of a given circle.
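For reference, the three results the lesson derives, stated in their standard forms:

```latex
\[
d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}, \qquad
M=\left(\frac{x_1+x_2}{2},\,\frac{y_1+y_2}{2}\right),
\]
\[
(x-h)^2+(y-k)^2=r^2 \quad \text{(circle with center } (h,k) \text{ and radius } r\text{)}.
\]
```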
| http://www.curriki.org/oer/Lesson-26-The-Distance-and-Midpoint-Formulas/ |
4.03125 | Middle Paleolithic Hominids (Introduction to Paleoanthropology)
- 1 The second phase of human migration
- 2 Neanderthals
- 3 Homo sapiens
- 4 Out-of-Africa 2: The debate
- 4.1 The "Multi-regional" model
- 4.2 The "Out-of-Africa"/"Replacement" model
- 4.3 Hypothesis testing
- 4.4 Out-of-Africa 2: The evidence
- 4.5 Fossil record
- 4.6 Molecular biology
- 4.7 Expectations
- 4.8 Intermediate Model
- 5 Case studies
- 6 Population dispersal into Australia/Oceania
- 7 Summary
The second phase of human migration
The time period between 250,000 and 50,000 years ago is commonly called the Middle Paleolithic.
At the same time that Neanderthals occupied Europe and Western Asia, other kinds of people lived in the Far East and Africa, and those in Africa were significantly more modern than the Neanderthals.
These Africans are thus more plausible ancestors for living humans, and it appears increasingly likely that Neanderthals were an evolutionary dead end, contributing few if any genes to historic populations.
Topics to be covered in this chapter:
- Summary of the fossil evidence for both the Neanderthals and some of their contemporaries;
- Second phase of human migration ("Out-of-Africa 2" Debate)
History of Research
In 1856, a strange skeleton was discovered in Feldhofer Cave in the Neander Valley ("thal" = valley) near Dusseldorf, Germany. The skull cap was as large as that of a present-day human but very different in shape. Initially this skeleton was interpreted as that of a congenital idiot.
The Forbes Quarry (Gibraltar) female cranium (now also considered as Neanderthal) was discovered in 1848, eight years before the Feldhofer find, but its distinctive features were not recognized at that time.
Subsequently, numerous Neanderthal remains were found in Belgium, Croatia, France, Spain, Italy, Israel and Central Asia.
Anthropologists have been debating for 150 years whether Neanderthals were a distinct species or an ancestor of Homo sapiens sapiens. In 1997, DNA analysis from the Feldhofer Cave specimen showed decisively that Neanderthals were a distinct lineage.
These data imply that Neanderthals and Homo sapiens sapiens were separate lineages with a common ancestor, Homo heidelbergensis, about 600,000 years ago.
Unlike earlier hominids (with some rare exceptions), Neanderthals are represented by many complete or nearly complete skeletons. Neanderthals provide the best hominid fossil record of the Plio-Pleistocene, with about 500 individuals. About half the skeletons were children. Typical cranial and dental features are present in the young individuals, indicating Neanderthal features were inherited, not acquired.
Morphologically the Neanderthals are a remarkably coherent group. Therefore they are easier to characterize than most earlier human types.
Neanderthal skull has a low forehead, prominent brow ridges and occipital bones. It is long and low, but relatively thin walled. The back of the skull has a characteristic rounded bulge, and does not come to a point at the back.
Cranial capacity is relatively large, ranging from 1,245 to 1,740 cc and averaging about 1,520 cc. It overlaps or even exceeds the average for Homo sapiens sapiens. The robust face with a broad nasal region projects out from the braincase. By contrast, the face of modern Homo sapiens sapiens is tucked under the brain box, the forehead is high, the occipital region rounded, and the chin prominent.
Neanderthals have small back teeth (molars), but incisors are relatively large and show very heavy wear.
Neanderthal short legs and arms are characteristic of a body type that conserves heat. They were strong, rugged and built for cold weather. Large elbow, hip, and knee joints and robust bones suggest great muscularity. The pelvis had a longer and thinner pubic bone than that of modern humans.
All adult skeletons exhibit some kind of disease or injury. Healed fractures and severe arthritis show that they had a hard life, and individuals rarely lived past 40 years old.
Neanderthals lived from about 250,000 to 30,000 years ago in Eurasia.
The earlier ones, like those at Atapuerca (Sima de los Huesos), were more generalized. The later ones are the more specialized, "classic" Neanderthals.
The last Neanderthals lived in Southwest France, Portugal, Spain, Croatia, and the Caucasus as recently as 27,000 years ago.
The distribution of Neanderthals extended from Uzbekistan in the east to the Iberian peninsula in the west, from the margins of the Ice Age glaciers in the north to the shores of the Mediterranean sea in the south.
South-West France (Dordogne region) is among the richest in Neanderthal cave shelters:
- La Chapelle-aux-Saints;
- La Ferrassie;
- Saint-Césaire (which is one of the younger sites at 36,000).
Other sites include:
- Krapina in Croatia;
- Saccopastore in Italy;
- Shanidar in Iraq;
- Teshik-Tash (Uzbekistan). The 9-year-old hominid from this site lies at the most easterly known part of their range.
No Neanderthal remains have been discovered in Africa or East Asia.
Homo sapiens

Chronology and Geography
The time and place of Homo sapiens origin has preoccupied anthropologists for more than a century. For the longest time, many assumed their origin was in South-West Asia. But in 1987, anthropologist Rebecca Cann and colleagues compared DNA of Africans, Asians, Caucasians, Australians, and New Guineans. Their findings were striking in two respects:
- the variability observed within each population was greatest by far in Africans, which implied the African population was oldest and thus ancestral to the Asians and Caucasians;
- there was very little variability between populations which indicated that our species originated quite recently.
The human within-species variability was only 1/25th as much as the average difference between human and chimpanzee DNA. The human and chimpanzee lineages diverged about 5 million years ago. 1/25th of 5 million is 200,000. Cann therefore concluded that Homo sapiens originated in Africa about 200,000 years ago. Much additional molecular data and hominid remains further support a recent African origin of Homo sapiens, now estimated to be around 160,000-150,000 years ago.
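Written out, Cann's estimate is a simple proportion (our restatement of the arithmetic in the paragraph above):

```latex
\[
t_{\text{modern humans}} \approx \frac{1}{25}\times t_{\text{human--chimp split}}
= \frac{1}{25}\times 5{,}000{,}000\ \text{years}
= 200{,}000\ \text{years}.
\]
```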
The Dmanisi evidence suggests that early Europeans developed in Asia and migrated to Europe, creating modern Europeans with only minor interaction with African Homo types. The Dmanisi finds were reported in the July 5, 2002 issue of the journal Science and were the subject of the cover story of the August issue of National Geographic magazine. New Asian finds are significant, some scientists say, especially the 1.75-million-year-old small-brained early-human fossils found in Dmanisi, Georgia, and the 18,000-year-old "hobbit" fossils (Homo floresiensis) discovered on the island of Flores in Indonesia.
Such finds suggest that Asia's earliest human ancestors may be hundreds of thousands of years older than previously believed. Robin Dennell, of the University of Sheffield in England, and Wil Roebroeks, of Leiden University in the Netherlands, describe these ideas in the December 22, 2005 issue of Nature.

The fossil and archaeological finds characteristic of early modern humans are represented at various sites in East and South Africa, dating between 160,000 and 77,000 years ago.
Herto (Middle Awash, Ethiopia)
In June 2003, hominid remains of a new subspecies, Homo sapiens idaltu, were published. Three skulls (two adults, one juvenile) are interpreted as the earliest near-modern humans, dated to 160,000-154,000 BP. They exhibit some modern traits (very large cranium; high, round skull; flat face) but also retain archaic features (heavy browridge; widely spaced eyes). Their anatomy and antiquity link earlier archaic African forms to later fully modern ones, providing strong evidence that East Africa was the birthplace of Homo sapiens.
Omo Kibish (Ethiopia)

In 1967, Richard Leakey and his team uncovered a partial hominid skeleton (Omo I), which had the features of Homo sapiens. Another partial fragment of a skull (Omo II) revealed a cranial capacity over 1,400 cc. Dating of shells from the same level gave a date of 130,000 years.
Ngaloba, Laetoli area (Tanzania)
A nearly complete skull (LH 18) was found in Upper Ngaloba Beds. Its morphology is largely modern, yet it retains some archaic features such as prominent brow ridges and a receding forehead. Dated at about 120,000 years ago.
Border Cave (South Africa)
Remains of four individuals (a partial cranium, 2 lower jaws, and a tiny buried infant) were found in a layer dated to at least 90,000 years ago. Although fragmentary, these fossils appeared modern.
Klasies River (South Africa)
Site occupied from 120,000 to 60,000 years ago. Most human fossils come from a layer dated to around 90,000 years ago. They are fragmentary: cranial, mandibular, and postcranial pieces. They appear modern, especially a fragmentary frontal bone that lacks a brow ridge. Chin and tooth size also have a modern aspect.
Blombos Cave (South Africa)
A layer dated to about 77,000 years ago yielded 9 human teeth or dental fragments, representing five to seven individuals, of modern appearance.
African skulls have reduced browridges and small faces. They tend to be higher, more rounded than classic Neanderthal skulls, and some approach or equal modern skulls in basic vault shape. Where cranial capacity can be estimated, the African skulls range between 1,370 and 1,510 cc, comfortably within the range of both the Neanderthals and anatomically modern people.
Mandibles tend to have significantly shorter and flatter faces than did the Neanderthals.
Postcranial parts indicate people who were robust, particularly in their legs, but who were fully modern in form.
Out-of-Africa 2: The debate
Most anthropologists agree that a dramatic shift in hominid morphology occurred during the last glacial epoch. About 150,000 years ago the world was inhabited by a morphologically heterogeneous collection of hominids: Neanderthals in Europe; less robust archaic Homo sapiens in East Asia; and somewhat more modern humans in East Africa (Ethiopia) and also SW Asia. By 30,000 years ago, much of this diversity had disappeared. Anatomically modern humans occupied all of the Old World.
In order to understand how this transition occurred, we need to answer two questions:
- Did the genes that give rise to modern human morphology arise in one region, or in many different parts of the globe?
- Did the genes spread from one part of the world to another by gene flow, or through the movement and replacement of one group of people by another?
Unfortunately, genes don't fossilize, and we cannot study the genetic composition of ancient hominid populations directly. However, there is a considerable amount of evidence that we can bring to bear on these questions through the anatomical study of the fossil record and the molecular biology of living populations. A new analysis of more than 5,000 ancient teeth from a number of hominid species suggests that arrivals from Asia played a greater role in colonizing Europe than hominids coming directly from Africa (Proceedings of the National Academy of Sciences, Aug 2007).
Two opposing hypotheses for the transition to modern humans have been promulgated over the last decades:
- the "multi-regional model" sees the process as a localized speciation event;
- the "out-of-Africa model" sees the process as the result of widespread phyletic transformation.
The "Multi-regional" model
This model proposes that ancestral Homo erectus populations throughout the world gradually and independently evolved first through archaic Homo sapiens, then to fully modern humans. In this case, the Neanderthals are seen as European versions of archaic sapiens.
Recent advocates of the model have emphasized the importance of gene flow among different geographic populations, making their move toward modernity not independent but tied together as a genetic network over large geographical regions and over long periods of time. Since these populations were separated by great distances and experienced different kinds of environmental conditions, there was considerable regional variation in morphology among them.
One consequence of this widespread phyletic transformation would be that modern geographic populations would have very deep genetic roots, having begun to separate from each other a very long time ago, perhaps as much as a million years.
This model essentially sees multiple origins of Homo sapiens, and no necessary migrations.
The "Out-of-Africa"/"Replacement" model
This second hypothesis considers a geographically discrete origin, followed by migration throughout the rest of the Old World. By contrast with the first hypothesis, here we have a single origin and extensive migration.
Modern geographic populations would have shallow genetic roots, having derived from a speciation event in relatively recent times. Hominid populations were genetically isolated from each other during the Middle Pleistocene. As a result, different populations of Homo erectus and archaic Homo sapiens evolved independently, perhaps forming several hominid species. Then, between 200,000 and 100,000 years ago, anatomically modern humans arose someplace in Africa and spread out, replacing other archaic sapiens including Neanderthals. The replacement model does not specify how anatomically modern humans usurped local populations. However, the model posits that there was little or no gene flow between hominid groups.
Hypothesis testing

If the "Multi-regional Model" were correct, then it should be possible to see in modern populations echoes of anatomical features that stretch way back into prehistory: this is known as regional continuity. In addition, the appearance in the fossil record of advanced humans might be expected to occur more or less simultaneously throughout the Old World. By contrast, the "Out-of-Africa Model" predicts little regional continuity and the appearance of modern humans in one locality before they spread into others.
Out-of-Africa 2: The evidence
Until relatively recently, there was a strong sentiment among anthropologists in favor of extensive regional continuity. In addition, Western Europe tended to dominate the discussions. Evidence has expanded considerably in recent years, and now includes molecular biology data as well as fossils. Now there is a distinct shift in favor of some version of the "Out-of-Africa Model".
Discussion based on detailed examination of fossil record and mitochondrial DNA needs to address criteria for identifying:
- regional continuity;
- earliest geographical evidence (center of origin);
- chronology of appearance of modern humans.
Fossil record

The fossil evidence most immediately relevant to the origin of modern humans is to be found throughout Europe, Asia, Australasia, and Africa, and goes back in time as far as 300,000 years ago.
Most fossils are crania of varying degrees of incompleteness. They look like a mosaic of Homo erectus and Homo sapiens, and are generally termed archaic sapiens. It is among such fossils that signs of regional continuity are sought, being traced through to modern populations.
For example, some scholars (Alan Thorne) argue for such regional anatomical continuities among Australasian populations and among Chinese populations. In the same way, some others believe a good case can be made for regional continuity in Central Europe and perhaps North Africa.
By contrast, proponents of a replacement model argue that, for most of the fossil record, the anatomical characters being cited as indicating regional continuity are primitive, and therefore cannot be used uniquely to link specific geographic populations through time.
The equatorial anatomy of the first modern humans in Europe presumably is a clue to their origin: Africa. There are sites from the north, east and south of the African continent with specimens of anatomical modernity. One of the most accepted is Klasies River in South Africa. The recent discovery of remains of H. sapiens idaltu at Herto (Ethiopia) confirms this evidence. Does this mean that modern Homo sapiens arose as a speciation event in Eastern Africa (Ethiopia), populations migrating north, eventually to enter Eurasia? This is a clear possibility.
The earlier appearance of anatomically moderns humans in Africa than in Europe and in Asia too supports the "Out-of-Africa Model".
Molecular biology

Just as molecular evidence had played a major role in understanding the beginnings of the hominid family, so too could it be applied to the later history, in principle.
However, because that later history inevitably covers a shorter period of time - no more than the past 1 million years - conventional genetic data would be less useful than they had been for pinpointing the time of divergence between hominids and apes, at least 5 million years ago. Genes in cell nuclei accumulate mutations rather slowly. Therefore trying to infer the recent history of populations based on such mutations is difficult, because of the relative paucity of information. DNA that accumulates mutations at a much higher rate would, however, provide adequate information for reading recent population history. That is precisely what mitochondrial DNA (mtDNA) offers.
MtDNA is a relatively new technique to reconstruct family trees. Unlike the DNA in the cell nucleus, mtDNA is located elsewhere in the cell, in compartments that produce the energy needed to keep cells alive. Unlike an individual's nuclear genes, which are a combination of genes from both parents, the mitochondrial genome comes only from the mother. Because of this maternal mode of inheritance, there is no recombination of maternal and paternal genes, which sometimes blurs the history of the genome as read by geneticists. Potentially, therefore, mtDNA offers a powerful way of inferring population history.
MtDNA can yield two major conclusions relevant for our topic: the first addresses the depth of our genetic roots, the second the possible location of the origin of anatomically modern humans.
Expectations

Under the multi-regional model, mtDNA should show:
- extensive genetic variation, implying an ancient origin, going back at least a million years (certainly around 1.8 million years ago);
- no population with significantly more variation than any other; any extra variation the African population might have had as the home of Homo erectus would have been swamped by the subsequent million years of further mutation.
Under the out-of-Africa model, mtDNA should instead show:
- limited variation in modern mtDNA, implying a recent origin;
- an African population that displays the most variation.
The observed evidence favors the second set of expectations:
- If modern populations derive from a process of long regional continuity, then mtDNA should reflect the establishment of those local populations, after 1.8 million years ago, when populations of Homo erectus first left Africa and moved into the rest of the Old World. Yet the absence of ancient mtDNA in any modern living population gives a different picture. The amount of genetic variation throughout all modern human populations is surprisingly small, and implies therefore a recent origin for the common ancestor of us all.
- Although genetic variation among the world's population is small overall, it is greatest in African populations, implying they are the longest established.
- If modern humans really did evolve recently in Africa, and then move into the rest of the Old World where they mated with established archaic sapiens, the resulting population would contain a mixture of old and new mtDNA, with a bias toward the old because of the relative numbers of newcomers to archaic sapiens. Yet the evidence does not seem to support this view.
The argument that genetic variation among widely separated populations has been homogenized by gene flow (interbreeding) is not tenable any more, according to population geneticists.
Intermediate Model

Although these two hypotheses dominate the debate over the origins of modern humans, they represent extremes, and there is also room for several intermediate models.
- One hypothesis holds that there might have been a single geographic origin as predicted by replacement model, but followed by migrations in which newcomers interbred with locally established groups of archaic sapiens. Thus, some of genes of Neanderthals and archaic H. sapiens may still exist in modern populations;
- Another hypothesis suggests that there could have been more extensive gene flow between different geographic populations than is allowed for in the multi-regional model, producing closer genetic continuity between populations. Anatomically modern humans evolved in Africa, and then their genes diffused to the rest of the world by gene flow, not by migration of anatomically modern humans and replacement of local peoples.
In any case the result would be a much less clearcut signal in the fossil record.
Case studies

Neanderthal fossils have been found in Israel at several sites: Kebara, Tabun, and Amud. For many years there were no reliable absolute dates. Recently, these sites were securely dated. The Neanderthals occupied Tabun around 110,000 years ago. However, the Neanderthals at Kebara and Amud lived 55,000 to 60,000 years ago. By contrast, at Qafzeh Cave, located nearby, remains currently interpreted as of anatomically modern humans have been found in a layer dated to 90,000 years ago.
These new dates lead to the surprising conclusion that Neanderthals and anatomically modern humans overlapped - if not directly coexisted - in this part of the world for a very long time (at least 30,000 years). Yet the anatomical evidence of the Qafzeh hominid skeletons reveals features reminiscent of Neanderthals. Although their faces and bodies are large and heavily built by today's standards, they are nonetheless claimed to be within the range of living peoples. However, a recent statistical study comparing a number of measurements among Qafzeh, Upper Paleolithic and Neanderthal skulls found those from Qafzeh to fall in between the Upper Paleolithic and Neanderthal norms, though slightly closer to the Neanderthals.
The Lagar Velho 1 remains, found in a rockshelter in Portugal dated to 24,500 years ago, correspond to the complete skeleton of a four-year-old child.
This skeleton has anatomical features characteristic of early modern Europeans:
- prominent chin and certain other details of the mandible;
- small front teeth;
- characteristic proportions and muscle markings on the thumb;
- narrowness of the front of pelvis;
- several aspects of shoulder and forearm bones.
Yet, intriguingly, a number of features also suggest Neanderthal affinities:
- the front of the mandible which slopes backward despite the chin;
- details of the incisor teeth;
- pectoral muscle markings;
- knee proportions and short, strong lower-leg bones.
Thus, the Lagar Velho child appears to exhibit a complex mosaic of Neanderthal and early modern human features. This combination can only have resulted from a mixed ancestry; something that had not been previously documented for Western Europe. The Lagar Velho child is interpreted as the result of interbreeding between indigenous Iberian Neanderthals and early modern humans dispersing throughout Iberia sometime after 30,000 years ago. Because the child lived several millennia after Neanderthals were thought to have disappeared, its anatomy probably reflects a true mixing of these populations during the period when they coexisted and not a rare chance mating between a Neanderthal and an early modern human.
Population dispersal into Australia/Oceania
Based on current data (and conventional view), the evidence for the earliest colonization of Australia would be as follows:
- archaeologists have generally agreed that modern humans arrived on Australia and its continental islands, New Guinea and Tasmania, about 35,000 to 40,000 years ago, a time range that is consistent with evidence of their appearance elsewhere in the Old World well outside Africa;
- all hominids known from Greater Australia are anatomically modern Homo sapiens;
- emerging picture begins to suggest purposeful voyaging by groups possessed of surprisingly sophisticated boat-building and navigation skills;
- the only major feature of early Greater Australia archaeology that does NOT fit comfortably with a consensus model of modern human population expansion in the mid-Upper Pleistocene is the lithic technology, which has a pronounced Middle, rather than Upper, Paleolithic cast.
Over the past decade, however, this consensus has been eroded by the discovery and dating of several sites:
- Malakunanja II and Nauwalabila I, located in Arnhem Land, would be 50,000 to 60,000 years old;
- Jinmium yielded dates of 116,000 to 176,000 years ago.
Yet these early dates reveal numerous problems related to stratigraphic considerations and dating methods. Therefore, many scholars are skeptical of their value.
If accurate, these dates require significant changes in current ideas, not just about the initial colonization of Australia, but about the entire chronology of human evolution in the early Upper Pleistocene. Either fully modern humans were present well outside Africa at a surprisingly early date or the behavioral capabilities long thought to be uniquely theirs were also associated, at least to some degree, with other hominids.
As a major challenge, the journey from Southeast Asia and Indonesia to Australia, Tasmania and New Guinea would have required sea voyages, even with sea levels at their lowest during glacial maxima. So far, there is no archaeological evidence from Australian sites of vessels that could have made such a journey. However, what were coastal sites during the Ice Age are mostly now submerged beneath the sea.
Summary

Overall the evidence suggested by mitochondrial DNA is the following:
- the amount of genetic variation in human mitochondrial DNA is small and implies a recent origin for modern humans;
- the African population displays the greatest amount of variation; this too is most reasonably interpreted as suggesting an African origin. | https://en.m.wikibooks.org/wiki/Introduction_to_Paleoanthropology/Hominids_MiddlePaleolithic |
4.125 | Questioning and analyzing the American mindset during the middle to late nineteenth century
Definition of Gothic (adjective)
From the Germanic barbarian tribe, the Goths, derives a common term describing something that is crude, uncivilized, or grotesque. “Gothic” describes architecture, literature, persons, and places.
General Characteristics of Gothic characters in literature
• A helpless victim
Stereotypical gothic characters include:
The thief with a code of honor
The lonely vampire
The mad scientist
The tormented artist
The werewolf, horrified by himself
The knowing madman
The deformed assassin
The ignored prophet
America in the context of the Gothic
The gothic writers explored the cultural anxieties and fears of the expanding nation: the “dark side”.
These writers addressed such trends as:
• technological and scientific progress
• individualism (free will, the self-made individual)
• slavery and abolition
The gothic writers critiqued the assumption that America stood as the moral and guiding light of the world (Winthrop’s “city on a hill”).
Objectives of this unit include:
• Recognizing characteristics of gothic writers (or dark romantics);
• Understanding the fears and anxieties explored by the writers;
• Evaluating the effectiveness of the writers’ critical positions;
• Collecting evidence;
• Expressing our individual opinion;
• Supporting that opinion in writing using evidence;
• Following a specific rubric.
Before You Go
On the quarter sheet of white paper on your table—
1.Write your name;
2. Write two characteristics of Gothic literature.
| http://www.slideshare.net/gswider/the-gothic-period-in-american-literature |
4.34375 |
The Cretaceous Period is the most recent period of the Mesozoic Era, spanning 77 million years, from 142 million to 65 million years ago. In 1822, Omalius d'Halloy termed the chalky rocks (Latin: "creta") found on the English and French sides of the English Channel "Cretaceous." The name is now applied to all materials deposited above Jurassic Period rocks but below Tertiary Period rocks. During the Cretaceous more land was submerged than at any time since the Ordovician Period. Tectonic plates moved apart, causing major mountain building, and continents began to resemble those existing today.
The Cretaceous System is subdivided into the following stages from youngest to oldest: Upper Cretaceous includes Maastrichtian, Campanian, Santonian, Coniacian, Turonian, and Cenomanian; Lower Cretaceous includes Albian, Aptian, Barremian, Hauterivian, Valanginian, and Berriasian. These names, based on type localities where rocks at a given stratigraphic level were first studied, are widely used to refer to rock units or chronological times.
Cretaceous rocks make up nearly 29% of the total area of Phanerozoic deposits on the Earth's landmasses. Because of the great extent of their outcrop area and presence in drill cores, Cretaceous rocks are the most intensively studied part of the Phanerozoic rock column.
At one time Cretaceous seas covered 50% of the present North American continent. Thick Cretaceous sedimentary deposits form a narrow belt of outcrops from British Columbia to Central America. They extend throughout the Rocky Mountains and western Plains states, around the north edge of the Mississippi embayment, and along the southern and eastern sides of the Appalachian Mountains. Volcanoes in the western United States spread layers of volcanic ash, now turned into bentonite, in the center of the continent. Cretaceous rocks crop out on several Caribbean islands. This group extends along the Andes Mountain belt from Colombia to Cape Horn, and onto Antarctica. Cretaceous outcrops cover parts of central and southeastern South America. A vast volcanic province in the Paraná River basin of Uruguay and Brazil is mirrored across the Atlantic Ocean in the Etendeka province of western Africa, regions that were joined before South America and Africa separated.
In Europe, Cretaceous chalk also crops out around the Paris basin. Other deposits are in Denmark, north and central Germany, central and southern France, the Pyrenees Mountains, the Alps, Italy, Slovenia, coastal Croatia, Bosnia and Hercegovina, and Yugoslavia. Russia and the southwestern Asian republics have Cretaceous rocks. There are Cretaceous rocks in west Africa and Mozambique. India's Cretaceous deposits include the Deccan traps, a large, thick sequence of volcanic rocks east of Bombay. Cretaceous rocks crop out in Thailand, Borneo, Japan, Australia, and New Zealand.
In eastern North America the continental border gradually sank, shifting the shoreline inland. Shallow seas spread over the interiors of North America, western Europe, eastern Europe, western Russia, and the northern Arctic. They occupied central Australia and eastern Africa, eventually covering one-third of the present-day landmasses. Tectonic plates moved apart after the breakup of Pangaea in the middle Mesozoic. Previously, the drift of Africa relative to Europe had created the Tethys Sea, an ancestral Mediterranean Sea, with many island arcs, basins, and microcontinental fragments. It reached its maximum width in early Cretaceous time. Movements of Africa, Europe, and the small Adriatic tectonic plate caused subduction that consumed Tethys, while seafloor spreading led to widening of the North Atlantic Ocean.
Europe remained connected to North America until 81 million years ago; Greenland separated from Europe 70 million years ago. As the Adriatic plate carrying Italy closed Tethys and collided with Austria and Switzerland, the Alps rose. The Alpine region also marked a subduction zone. The Mediterranean began to open to the south. Mountain building also took place for the Dinaric and Hellenic Alps during collisions of the Carpathian-Serbo-Macedonian and Apulian arcs, in Saudi Arabia and Oman, and in the Himalayas.
Seafloor spreading in the Atlantic led to westward movement of the North American plate. Compression around the Pacific Ocean margin started the Cordilleran and Laramide orogenies and began the creation of the present-day Rockies, Sierra Nevada, California Coast Ranges, and Andes, mountain belts that extend the length of North and South America. Enormous amounts of granite formed within the crust as subduction carried continental sediments deep within the crust around the Pacific margin and between Africa and Europe and melted. This led to volcanism in many subduction zones. Mountain building also affected Japan and the Philippines.
Marine invertebrates included many modern-looking pelecypods and gastropods. Widespread ammonite fossils have helped define Cretaceous stratigraphy. The shells, or tests, of abundant foraminifera created the chalk deposits that gave the period its name. Extensive coral reefs grew in the equatorial belt. Dinosaurs ruled the land, with sauropods, tyrannosaurs, duck-billed and horned dinosaurs prominent. Pterosaurs and toothed birds flew in the skies, and mosasaurs appeared in the seas. Primitive mammals included insectivorous marsupials. Deciduous trees and other angiosperms created modern-looking forests by the middle of the Cretaceous Period.
The End of the Cretaceous
A great extinction marked the end of the Cretaceous Period. The dinosaurs, ammonites, and many marine creatures abruptly disappeared. High concentrations of iridium and textural features of the boundary clay layer separating Cretaceous and Tertiary rocks support the idea of a meteorite impact producing a worldwide layer of debris at essentially the same moment at the very end of the Cretaceous Period. The huge Chicxulub Crater off the Yucatán Peninsula of Mexico is the likely point of impact. Some researchers favor a volcanic catastrophe or other series of events and a more gradual extinction of species and genera, but the impact theory has gained wide acceptance.
William D. Romey
| http://www.scholastic.com/teachers/article/cretaceous-period |
Overview
Coral reefs are the ocean’s most diverse and complex ecosystems, supporting 25% of all marine life, including 800 species of reef-building corals and more than one million animal and plant species. They are close relatives of sea anemones and jellyfish, as each coral is a colony consisting of many individual sea anemone-like polyps that are all interconnected.
Tropical coral reefs, found in warm, clear water at relatively shallow depths, are intricately patterned carpets of life growing on foundations formed primarily by calcium carbonate exoskeletons and coralline algae. These structures fuse over time, enlarging the reef and creating countless nooks and crannies. As the reef grows, species from nearly every major taxonomic group cover every square inch of these tightly integrated systems, providing food and shelter to a spectacular variety of fish and invertebrate species, including many of commercial value.
‘Hard’ corals use calcium carbonate from seawater to synthesize a hard, mineral protective shell around each polyp. These exoskeletons, along with shells formed by coralline algae, mollusks and tubeworms, spicules made by sponges, and shells of other calcifying species form the structural foundation of coral reefs. Corals catch plankton with their tentacles, but most of their nutrition comes from photosynthetic algae that live in their tissues, using the coral’s waste products for their own nutrition and feeding the corals with sugars and other nutritious compounds that leak through their cell membranes.
Deep water reefs, formed by large, long-lived but fragile, soft corals, are also architecturally and ecologically complex and teem with life, but lack a calcium carbonate foundation. Though they lie beyond the reach of sunlight, underwater lights reveal them to be nearly as beautiful and colorful as their tropical counterparts.
The condition of coral reefs is important to the Ocean Health Index because healthy reefs provide many benefits to people, including food, natural products, coastal protection from storms, jobs and revenue, tourism and recreation, biodiversity and others.
60% of reefs are already seriously damaged by local sources such as overfishing, destructive fishing, anchor damage, coral bleaching, coral mining, sedimentation, pollution, and disease. When these types of human threats are combined with the influence of rising ocean temperatures, 75% of reefs are threatened (Burke et al. 2011).
How Was It Measured?
The extent of coral reefs was derived from the 500m resolution dataset developed for Reefs at Risk Revisited (Burke et al. 2011), in conjunction with a re-sampled version of the Ocean Health Index EEZ regions. The condition of reefs was estimated using data for percentage cover by live coral determined from 12,634 surveys conducted from 1975-2006 and summarized by Bruno and Selig (2007) and Schutte et al. (2010).
The reference point for coral reefs is the percent cover of coral reefs in 1975. The current Status is reported as: current percent cover of coral reefs ÷ percent cover of coral reefs in 1975.
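A minimal sketch of that status calculation (our illustration; the cap at 1.0 is an assumption about how scores are bounded, not taken from the methodology text above):

```python
# Status for the coral-reef habitat component: current percent cover
# relative to the 1975 reference point, as described above.

def coral_status(current_cover_pct: float, cover_1975_pct: float) -> float:
    """Return current cover divided by the 1975 reference cover.

    Capping at 1.0 is our assumption about how scores are bounded;
    the ratio itself is the formula given in the text.
    """
    return min(current_cover_pct / cover_1975_pct, 1.0)

# Hypothetical region whose live coral cover fell from 40% to 25%.
print(coral_status(25.0, 40.0))  # prints 0.625
```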
Like all of the habitats used in the OHI, coral reefs are used as a component in calculating scores for many of the different goals. However, they are used differently depending on the goal in question. Although habitats such as coral reefs are used in calculating these goal scores, countries are not penalized for lacking a particular habitat type: calculations are based on the rank of existing habitats, as opposed to using all possible habitat types. The condition and extent of coral reefs can be used either as a direct measure or indirectly, in a supporting capacity.
Coral reefs is used directly as a component in calculating Coastal Protection (Corals) and Biodiversity (Habitat: Coral). For these goals, the extent and condition of coral habitat factors directly into score calculations. Coral is also used as a component for the Natural Products goal, but is measured in a different way.
Coral reefs is also used indirectly in calculating scores for Artisanal Fishing, Natural Products (Ornamental Fish), and Livelihoods and Economies (Aquarium Trade). For example, Artisanal Fishing: High Bycatch was assessed by looking at the presence of blast and poison fishing, practices that both degrade coral reefs.
What Are The Impacts?
Corals that are exposed to elevated sea surface temperatures expel the symbiotic photosynthetic algae responsible for their nutrition and coloration (zooxanthellae) in a process known as coral bleaching. Corals can recover from occasional bleaching, but not from repeated bleaching. Increases in sea-surface temperature of about 1-3 °C are projected to result in more frequent coral bleaching events and widespread mortality.
Elevated sea surface temperatures cause increased damage to reefs from breakage as storm frequency and intensity increase.
Increasing amounts of carbon dioxide in the atmosphere lead to increased amounts dissolved in surface waters, causing ocean acidification (lowered pH). Acidification decreases the availability of calcium carbonate, making it harder for corals and other calcifying organisms to form their shells; it also dissolves existing shells.
By the end of the century, it is predicted that ocean pH will drop from its current value of about 8.1 by as much as 0.4 units; by 2050, conditions will not be sufficient for the formation of calcium carbonate (Hoegh-Guldberg et al. 2007).
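Because pH is a logarithmic scale, a 0.4-unit drop implies a substantial change in hydrogen-ion concentration; a quick worked calculation (ours, not from the cited study):

```latex
\[
\frac{[\mathrm{H^+}]_{\text{projected}}}{[\mathrm{H^+}]_{\text{current}}}
= 10^{\,8.1-7.7} = 10^{0.4} \approx 2.5
\]
```

That is, the projected end-of-century ocean would hold roughly two and a half times as many hydrogen ions as today's.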
Overfishing can seriously degrade the structure and health of coral reefs. If populations of algal grazers are reduced, aggressive algae can overgrow the reef. Reefs can also decline if overfishing reduces populations of fish that normally keep coral predators in check.
Overfishing threatens more than 70% of coral reefs in the Caribbean (Burke et al. 2011).
Most hard corals develop and grow very slowly, so recovery from damage caused by hurricanes, shipwrecks or anchors may take many years. Branching hard corals break more easily in storms or from physical contact, but may recover more quickly because they grow faster and each fragment can potentially form a new coral.
HUMAN HEALTH IMPACT
Corals and affiliated sponges contain bioactive chemical compounds that can be useful as cancer- and virus-fighting drugs. For example, AZT, a compound developed from chemicals found in a Caribbean reef sponge, is an antiretroviral drug that effectively slows the spread of the HIV virus.
500 million people depend on coral reefs for coastal protection, food, and tourism income (Wilkinson, 2008).
Coral reefs help protect shorelines from storm damage and can absorb 70-90% of wave energy.
The total net benefit per year of the world’s coral reefs was estimated in 2003 to be $29.8 billion. Tourism and recreation account for $9.6 billion of this amount, coastal protection for $9.0 billion, fisheries for $5.7 billion, and biodiversity for $5.5 billion. Costs of coral bleaching to tourism expressed in Net Present Value (NPV) were estimated at $10-$40 billion (Cesar, Burke and Pet-Soede, 2003, cited in Conservation International 2008).
What Has Been Done?
Coral reefs are globally widespread and many are threatened by climate change, ocean acidification, pollution.and other pressures. Scientists would be hard pressed to study them all, but ordinary citizens--including you-- can help. A recent study by the European Commission found that volunteers in the Reef Check program identified decreases in live coral cover and increases in the cover of rubble and sand about as well as did scientists from the University of Rhode Island, though their identifications and counts of fishes differed from the professional assessments. The study showed that citizen scientists can make valuable contributions to some aspects of long-term marine environmental monitoring.
In 2010, the not-for-profit organization Nature Seychelles launched its Reef Rescuers Project, which aims to restore coral reefs around the Seychelles. These reefs were damaged in a mass coral bleaching event caused by an El Niño event in 1998. Surviving coral fragments were selected and used to create an underwater nursery. These corals were raised on ropes for about a year, a method known as “coral gardening”, then transplanted to reefs off of Praslin. By 2015, Nature Seychelles had transplanted more than 24,000 corals.
Now, Nature Seychelles has launched a program called the Coral Reef Rescuers Training Program. The program will provide scientific and practical knowledge about reef gardening and how to transplant the corals from underwater nurseries to sites that need restoration. In passing on their knowledge, Nature Seychelles hopes to give people the tools they need for large-scale restoration projects wherever coral reefs need help.
Get More Information
Coral Reef Alliance [CORAL]
An international NGO founded to support local projects that benefit coral reefs and surrounding communities.
International Coral Reef Initiative [ICRI]
A partnership between non-government organizations, governments and other international organizations that works to implement international conventions and agreements.
Reef Base: A Global Information System for Coral Reefs
A source for coral reef data, publications, maps, and other resources from around the world hosted by World Fish Center
Science to Action: Coral Health Index: Measuring Coral Community Health
A guidebook to evaluating coral health and understanding impacts
World Resources Institute [WRI]: Reefs at Risk Revisited
A booklet detailing spatial and statistical data on current threats to coral reefs.
Bruno, J. F. and E.R. Selig. (2007). Regional Decline of Coral Cover in the Indo-Pacific: Timing, Extent, and Subregional Comparisons. PLoS ONE 2, e711.
Burke, L., K. Reytar, M. Spalding and A. Perry. (2011). Reefs at Risk Revisited. World Resources Institute: Washington D.C.
Hoegh-Guldberg, O. et al. (2007). Coral reefs under rapid climate change and ocean acidification. Science 318, 1737–1742.
Schutte, V. G. W., E.R. Selig and J.F. Bruno. (2010). Regional spatio-temporal trends in Caribbean coral reef benthic communities. Mar Ecol Prog Ser 402, 115–122.
Spalding, M., C. Ravilious and E.P. Green. (2001). World Atlas of Coral Reefs. Prepared at the UNEP World Conservation Monitoring Centre. Berkeley, CA: University of California Press.
Wilkinson, C. (2008). Status of Coral Reefs of the World: 2008. Global Coral Reef Monitoring Network and Reef and Rainforest Research Center, Townsville, Australia. | http://www.oceanhealthindex.org/methodology/components/coral-reefs |
THE GULF STREAM, a relative newcomer on the geological scene, is an odd, fast-moving circulation of warm water that travels in an unfixed position, a few hundred miles north of Florida, up the east coast of the United States to Cape Hatteras, North Carolina, then on toward Nantucket Island, before kicking eastward across the Atlantic Ocean to the British Isles. In this way, the Gulf Stream, part of the western edge of the North Atlantic circulation, acts as a boundary that prevents the warm water of the Sargasso Sea from overflowing the colder, denser waters on the inshore side. The Gulf Stream is one of the strongest and most extensive known currents in the world, and it is separated from the United States by a narrow strip of cold water.

The Gulf Stream, which can be as much as 50 miles (80 kilometers) wide and 1,300 feet (400 meters) deep, is driven by the northeast and southeast trade winds on the surface of the water and by the equatorial currents that meet in the region of the windward islands of the Caribbean Sea. The Gulf Stream's rival, the Kuroshio Current, located along the western edge of the Pacific Ocean and the coast of Japan, is part of a transpacific system that connects the North Pacific, California, and equatorial currents. The Gulf Stream triples in volume and is strengthened by waters from the Florida Straits, by way of the Florida Current, and by currents coming from the northern and eastern coasts of Puerto Rico and the Bahamas. It can travel more than 60 miles (96 kilometers) a day.

The Gulf Stream was first mapped by Benjamin Franklin and his cousin Timothy Folger, an American whaling captain; early pioneers in using temperature to define its boundaries, they plotted observations of current speed on a chart and were able to draw a river traversing the Atlantic Ocean with speeds ranging from two to four knots. The stream maintains its dimensions for nearly 1,000 miles (1,610 kilometers) up the East Coast of the United States.

The strong carrying power of the Gulf Stream's warm equatorial layers of water has a notable, almost direct effect on climate in various parts of the Earth. As the Gulf Stream moves past Cape Hatteras, North Carolina, it begins to flow away from the East Coast of the United States. The altered flow of the Gulf Stream, known as meanders or eddies, separates the cold slope water to the north from warm Sargasso Sea water to the south. As the Gulf Stream flows into deeper water, it carries warm water to the North Atlantic region and enters the Norwegian Sea between the Faroe Islands and Great Britain. Thus, the Gulf Stream, which bathes northwestern Europe with warmer water and winds, is widely believed to be the reason for that region's mild climate.

Too warm to encourage the kinds of fish that are the main catch of North Atlantic waters, the Gulf Stream does bring well-developed specimens of tropical life, like the Portuguese man-of-war jellyfish, much farther north than they would normally venture. In addition, two particular forms of life, a plant (the double coconut tree) and an animal (the freshwater eels of Europe), are carried for thousands of miles by the Gulf Stream's surface transport system to the shores of Ireland. | http://mytravelphoto.org/287-gulf-stream.html |
4 | An electroencephalogram (EEG) is a test used to evaluate the electrical activity in the brain. Brain cells communicate with each other through electrical impulses. An EEG can be used to help detect potential problems associated with this activity.
The test tracks and records brain wave patterns. Small, flat metal discs called electrodes are attached to the scalp with wires. The electrodes analyze the electrical impulses in the brain and send signals to a computer, where the results are recorded.
The electrical impulses in an EEG recording look like wavy lines with peaks and valleys. These lines allow doctors to quickly assess whether there are abnormal patterns. Any irregularities may be a sign of seizures or other brain disorders.
An EEG is used to detect problems in the electrical activity of the brain that may be associated with certain brain disorders. The measurements given by an EEG are used to confirm or rule out various conditions, including:
- seizure disorders (such as epilepsy)
- a head injury
- encephalitis (an inflammation of the brain)
- a brain tumor
- encephalopathy (a disease that causes brain dysfunction)
- memory problems
- sleep disorders
When someone is in a coma, an EEG may be performed to determine the level of brain activity.
The test can also be used to monitor activity during brain surgery.
For most people, there are no risks associated with an EEG. The test is painless and safe.
When someone has epilepsy or another seizure disorder, the stimuli presented during the test (such as a flashing light) may cause a seizure. However, the technician performing the EEG is trained to safely manage the situation should this occur.
Before the test, you should take the following steps:
- Wash your hair the night before the EEG, and don’t put any products (such as sprays or gels) in your hair on the day of the test.
- Ask your doctor if you should stop taking any medications before the test. You should also make a list of your medications and give it to the technician performing the EEG.
- Avoid consuming any food or drinks containing caffeine for at least eight hours prior to the test.
Your doctor may ask you to sleep as little as possible the night before the test if you’re required to sleep during the EEG. You may also be given a sedative to help you to relax and sleep before the test begins.
After the EEG is over, you can continue with your regular routine for the day. However, if you were given a sedative, the medication will remain in your system for a little while. This means that you’ll have to bring someone with you so they can take you home after the test. You’ll need to rest and avoid driving until the medication has worn off.
An EEG measures the electrical impulses in your brain by using several electrodes that are attached to your scalp. An electrode is a conductor through which an electric current enters or leaves. The electrodes transfer information from your brain to a machine that measures and records the data.
An EEG may be given at a hospital, at your doctor’s office, or at a laboratory by a specialized technician. It usually takes 30 to 60 minutes to complete. The test typically involves the following steps:
- You’ll be asked to lie down on your back in a reclining chair or on a bed.
- The technician will measure your head and mark where the electrodes will be placed. These spots are then scrubbed with a special cream that helps the electrodes get a high-quality reading.
- The technician will put a sticky gel adhesive on 16 to 25 electrodes. They will then be attached to various spots on your scalp.
- Once the test begins, the electrodes send electrical impulse data from your brain to the recording machine. This machine converts the electrical impulses into visual patterns that can be seen on a screen. These patterns are saved to a computer.
- The technician may instruct you to do certain things while the test is in progress. They may ask you to lie still, close your eyes, breathe deeply, or look at stimuli (such as a flashing light or a picture).
- After the test is complete, the technician will remove the electrodes from your scalp.
During the test, very little electricity is passed between the electrodes and your skin, so you’ll feel very little to no discomfort.
A neurologist (someone who specializes in nervous system disorders) interprets the recordings taken from the EEG and then sends the results to your doctor. Your doctor may schedule an appointment to go over the test results with you.
Electrical activity in the brain is seen in an EEG as a pattern of waves. Different levels of consciousness, such as sleeping and waking, have a specific range of frequencies of waves per second that are considered normal. For example, the wave patterns move faster when you’re awake than when you’re asleep. The EEG will show if the frequency of waves or patterns are normal. Normal activity typically means you don’t have a brain disorder.
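To make the idea of wave frequencies concrete, the short Python sketch below (not part of the original article) estimates the dominant frequency of a synthetic EEG-like trace; the 10 Hz oscillation, the noise level, and the 256-samples-per-second rate are made-up illustration values.

```python
# Minimal sketch: estimating the dominant frequency of a synthetic
# EEG-like signal with a Fourier transform (illustrative values only).
import numpy as np

fs = 256                        # assumed sampling rate, samples/second
t = np.arange(0, 4, 1 / fs)     # four seconds of signal
rng = np.random.default_rng(0)
# A 10 Hz oscillation (in the range often labelled "alpha") plus noise:
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the 0 Hz (DC) bin
print(f"dominant frequency: {dominant:.1f} Hz")
```

A real recording would of course be analysed per electrode and over many time windows, but the same frequency-domain view underlies how faster waking rhythms are distinguished from slower sleep rhythms.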
Abnormal EEG results may be due to:
- epilepsy or another seizure disorder
- abnormal bleeding or hemorrhage
- sleep disorder
- encephalitis (swelling of the brain)
- a tumor
- dead tissue due to a blockage of blood flow
- alcohol or drug abuse
- head injury
It’s very important to discuss your test results with your doctor. Before you review the results with them, it may be helpful to write down any questions you might want to ask. Be sure to speak up if there’s anything about your results that you don’t understand. | http://www.healthline.com/health/eeg |
4.3125 | BASIN AND PETROLEUM SYSTEM

6.1 SEDIMENTARY BASIN
Sedimentary basins correspond to depressions in the upper parts of the Earth’s crust, generally occupied by a sea or an ocean. These depressions are initiated by geodynamic phenomena often associated with the displacement of lithosphere plates. The basement of the sedimentary basins is formed of crust made up of igneous rocks (granite on the continents and basalt in the oceans). Sedimentary rocks such as clays, sandstones, carbonates or massive salt have accumulated in these basins over geological time. Sedimentation generally involves a process extending over tens of millions of years, at a rate of several millimetres per year on average. Chiefly due to the weight of the deposits, the ongoing geodynamic processes and the accumulation of sediments lead to deformation and progressive sinking of the underlying crust. This accentuates the initial depression, giving rise to a sedimentary filling that is often many kilometres thick. This deepening of the basin, which is known as subsidence, results from the combined effects of tectonic movements and sedimentary overburden. In extreme cases, subsidence can reach as much as 20 km.

The tectonic setting is the premier criterion to distinguish different types of sedimentary basins:
1. Extensional basins occur within or between plates and are associated with increased heat flow due to hot mantle plumes.
2. Collisional basins occur where plates collide, characterized either by subduction of an oceanic plate or by continental collision.
3. Transtensional basins occur where plates move in a strike-slip fashion relative to each other.

6.2 EXTENSIONAL BASINS
Rift basins develop in continental crust and constitute the incipient extensional basin type. If the process continues, it will ultimately lead to the development of an ocean basin flanked by passive margins; alternatively, an intracratonic basin will form. Rift basins consist of a graben or half graben separated from surrounding horsts by normal faults. They can be filled with both continental and marine deposits. Intracratonic basins develop when rifting ceases, which leads to lithospheric cooling due to reduced heat flow; they are commonly large but not very deep.
• Proto-oceanic troughs form the transitional stage to the development of large ocean basins, and are underlain by incipient oceanic crust.
• Passive margins develop on continental margins along the edges of ocean basins; subsidence is caused by lithospheric cooling and sediment loading, and depending on the environmental setting clastic or carbonate facies may dominate.
• Ocean basins are dominated by pelagic deposition (biogenic material and clays) in the central parts and turbidites along the margins.
Fig (6.1): Extensional type of basin.
Fig (6.2): Collisional type of basin; these basins form during the subduction process.
6.3 COLLISIONAL BASIN
Subduction is a common process at active margins where plates collide and at least one oceanic plate is involved. Several types of sedimentary basins can be formed due to subduction, including trench basins, forearc basins, backarc basins, and retroarc foreland basins. Collisional types of basins are shown in figure (6.2) above.
• Trench basins can be very deep, and their fill depends strongly on whether they are intra-oceanic or proximal to a continent. Accretionary prisms are ocean sediments that are scraped off the subducting plate.
• Forearc basins form between the accretionary prism and the volcanic arc and subside entirely due to sediment loading; like trench basins, their sedimentary fill depends primarily on whether they are intra-oceanic or proximal to a continent.
• Backarc basins are extensional basins that may form on the overriding plate, behind the volcanic arc; they sometimes form island chains.
• Continental collision leads to the creation of orogenic (mountain) belts. Lithospheric loading causes the development of peripheral foreland basins, which typically exhibit a fill from deep marine through shallow marine to continental deposits, and retroarc foreland basins.
• Retroarc foreland basins form as a result of lithospheric loading behind a mountainous arc under a compressional regime; they are commonly filled with continental deposits.
• Foreland basins can accumulate exceptionally thick (~10 km) stratigraphic successions; they are commonly filled with coarse facies (e.g. alluvial fans) adjacent to lacustrine or marine deposits.

6.4 TRANSTENSION BASIN
Strike-slip basins form in transtensional regimes and are usually relatively small but also deep.

Fig (6.3): Transtension type of basin and its classifications.
BASIN ANALYSIS

Basin analysis involves interpretation of the formation, evolution, architecture and fill of a sedimentary basin by examining the geological variables associated with the basin. Basin analysis encompasses many topics, since it integrates several fields within geology (sedimentology, stratigraphy, plate tectonics, etc.), but its emphasis is on evaluation of the strata that fill sedimentary basins. It helps the exploration and development of energy, mineral and other resources (e.g. water, brines, etc.) that may occur within sedimentary basins. A basin model is built on a framework of geological surfaces that are correlated within the basin. The stratigraphic framework can be expressed in terms of rock type (lithostratigraphy), age (chronostratigraphy), fossil content (biostratigraphy), or rock properties such as seismic velocity (seismic stratigraphy). Basin analysis provides a foundation for extrapolating known information into unknown regions in order to predict the nature of the basin where evidence is not available.

The importance of basin analysis in the petroleum industry is decided by:
• Geographic location
• Kind of basin
• Basin formation and character
• Tectonic history
• The sedimentary history
• Basin fill characteristics: the content, age, thickness and facies of the sediments of primary petroleum concern, such as the reservoir, cap rock and source beds

Major approaches:
• Description and correlation of the stratigraphic basin fill (sequence stratigraphy)
• Basin analysis techniques
• Identification of the petroleum system
• Prospect generation and evaluation

Purpose of Basin Analysis
• Determine the physical chronostratigraphic framework by interpreting sequences, systems tracts, and parasequences and/or simple sequences on outcrops, well logs, and seismic data, and age-date them with high-resolution biostratigraphy.
• Construct geohistory, total subsidence and tectonic subsidence curves on sequence boundaries.
• Complete a tectonostratigraphic analysis, including:
– Relate major transgressive-regressive facies cycles to tectonic events.
– Relate changes in rates of tectonic subsidence curves to plate-tectonic events.
– Relate magmatism to the tectonic subsidence curve.
– Assign a cause to tectonically enhanced unconformities.
– Map tectonostratigraphic units.
– Determine the style and orientation of structures within tectonostratigraphic units.

Stages of Basin Analysis
Basin analysis is a continuous process carried out in stages, with different geoscientific activities playing their pivotal role during each stage of work. The information generated at various stages may require re-interpretation, due to improvement in techniques or concepts or the discovery of new plays in the basin. From knowledge of the worldwide sedimentary basins, it is clear that basin analysis requires a synergistic approach.

(i) Initial Stage Analysis
During the initial stages of exploration the broad framework of a basin may be worked out with the help of satellite imagery, aerial photos, surface geological data, and gravity, magnetic and seismic data. At this stage the tectonic framework, the nature of the sedimentary fill, their distribution in space and time and the potential of the total basin are broadly indicated.

(ii) Middle Stage Analysis
The second stage of basin analysis is reached during the advanced phases of hydrocarbon exploration, when the tectonic framework, structural styles and habitat of oil/gas are better known and more refined and quantitative analysis becomes feasible. Stratigraphic and sedimentological information obtained from wells and seismic data helps in reconstructing the depositional history and inter-relating the structural patterns with sedimentation. Detailed lithological and paleoenvironmental studies, working out of depositional systems, structural and paleotectonic analysis, geochemical studies and identification of petroleum systems are the key elements at this stage. This results in a more precise definition of oil and gas generation and accumulation zones and their relationship with the stratigraphic and tectonic settings in various parts of the basin, leading to a predictive exploration model. The stratigraphic record of the basin fill is the basis for interpreting the causes of hydrocarbon generation and accumulation.

(iii) Final Stage Analysis
The final stage of basin analysis, which forms the major part of exploration activity directed towards the discovery of hydrocarbons, is the search, with the help of the above exploration model and certain associational characteristics of hydrocarbon accumulation, to locate favourable structural, stratigraphic, paleogeomorphic and other subtle prospects for exploratory drilling.
6.5 PETROLEUM SYSTEM

Sedimentary basins are the subsiding areas where sediments accumulate to form stratigraphic successions; this subsidence is caused by plate tectonics. Hydrocarbons are formed and entrapped within such sedimentary basins. However, not all sedimentary basins satisfy the necessary conditions for them to become oil bearing, and this is where the concept of a petroleum system comes in. A petroleum system is a sedimentary basin, or more often a portion of a sedimentary basin, where we find all the essential geological and physico-chemical ingredients (source rock, reservoir rock, cap rock and trap) and where the geological processes necessary for the formation and accumulation of oil and gas in deposits come together: the thermal history of the source rock associated with its progressive burial, combined with the appropriate migration and trapping of the hydrocarbons.

Fig (6.4): A petroleum system corresponds to a sedimentary basin, or most frequently some part of a basin, which combines all the essential structural and sedimentary ingredients: source rock, reservoir rock, cap rock and trap.

6.5.1 Elements of a Petroleum System

(i) Source Rock
The source rock is a clayey or carbonate sediment containing a large quantity of organic debris accumulated at the same time as the mineral constituents (for it to be called a source rock, the organic matter should account for at least 2% of the rock by weight). This organic material corresponds to the accumulation of more or less well preserved remains of organic tissues derived from populations of organisms. These organisms are essentially planktonic algae, higher plants and bacteria that together make up the major part of our planet’s biomass. It should be noted that this kind of rock, rich in organic sedimentary matter, is far from common and requires very special conditions to allow significant quantities of organic matter to accumulate in a sediment.
The source rock is an essential element in the petroleum system since, in a way, it acts as a petroleum and gas factory: hydrocarbons are formed by thermal decomposition of the fossilised organic matter contained within the rock. Rocks rich in organic matter are most often clayey or marly (a mixture of clay and limestone), being fine grained with a low porosity and permeability. These properties result from their sedimentation in a low-energy environment, going hand in hand with the weak circulation of water and anoxic conditions (Huc, 1988). In any case, sedimentary basins which lack intervals with sufficient quantities of organic matter cannot develop oil bearing deposits.

Conditions for Source Rock
• The depositional environment must be associated with an eco-system that produces a large amount of biomass (high biological productivity) (Pedersen and Calvert, 1990).
• The sedimentary environment must be devoid of oxygen (anoxic), to prevent decomposition of the organic matter by aerobic bacteria and consumption by benthic organisms (Demaison and Moore, 1980).
• This should be combined with a good preservation of the organic matter after the death of the organisms, as well as during its incorporation into the sediment.

(ii) Reservoir Rocks
The hydrocarbons formed within the source rock are later expelled towards a system of drains, generally made up of porous and permeable rocks, which are also referred to as reservoir rocks. These rocks have porosity ranging from 5 to 25%, and even up to 30%, of the rock volume. These drains can also be considered as the plumbing of the petroleum system; sets of fractures or faults can also act as drains for the hydrocarbons. Due to their buoyancy, the hydrocarbons migrate towards the surface of the basin along sedimentary beds (in almost all cases, petroleum products have a lower density than the water completely impregnating the sedimentary rocks).

(iii) Cap Rock
A cap rock must be situated above the drains. Because of its impermeable character, the cap rock will confine the hydrocarbons to the porous and permeable system within which they are migrating. For example, the cap can be a clayey rock or massive salt. The absence of cap rocks results in the dispersal of the hydrocarbons in the sedimentary basin and their escape towards the surface, where they are destroyed by chemical (oxidation) or biological (biodegradation) mechanisms, in the same way as occurs during accidental oil pollution.

(iv) Traps
During their migration towards the surface, hydrocarbons can encounter flaws in the plumbing: the presence of impermeable barriers (breaks in the continuity of the drains caused by offsets in the sedimentary succession due to faults) or a deterioration in the drain quality (loss of permeability).
These may take the form of closures around high points, for example due to the fold geometry of an anticline. These situations create zones of accumulation of hydrocarbon-bearing fluids that correspond to the deposits from which oil operators can extract crude oils and gases. These traps are called structural or stratigraphic according to whether their main cause is the deformation of the porous layers (folding or faulting) or lateral variations of porosity and permeability in the sediments.

Fig (6.5): The structures most suitable for the entrapment of hydrocarbons: (a) fault, (b) anticline, (c) unconformity, (d) pinchout.

6.5.2 EVOLUTION OF A PETROLEUM SYSTEM

Apart from containing these essential ingredients, the petroleum system should be seen as a whole entity functioning in a dynamic framework. Over the course of geological time (generally some tens of millions of years) the source rock in a subsiding basin will become buried and its temperature will rise. The thermal flux from the Earth is manifested by a progressive increase of temperature with depth in the sediments of about 30°C/km (known as the geothermal gradient). This increase of temperature with depth, which is well known to miners, is partly explained by the contribution of fossil thermal energy and its progressive dissipation over time, heat originating from the time of the Earth’s formation during the accretion of planetesimals around 4.5 Ga ago; this contribution accounts for about a half of the heat flow. The remaining contribution comes from the thermal energy released due to the continual decay of radioactive elements naturally present in the Earth’s crust. This increase in temperature during the subsidence of the source rock prompts the transformation of part of the organic matter that is present into petroleum and gas.

(i) Cracking
This transformation corresponds to a kinetic phenomenon that depends on both temperature and time: activated by thermal energy, it leads to the breaking of chemical bonds and the production of chemical species of lower and lower molecular weight. The large molecules characterizing the initial solid organic matter are split up into smaller molecules that make up a liquid called petroleum. Then, with rising temperature, these molecules are themselves reduced in size, thus forming a gas. This is described as entering the oil window and then the gas window. For petroleum to be formed, a thermal history is therefore required that involves progressive heating up of the source rocks.

(ii) Entrapment in Reservoir Rock
The hydrocarbons formed in this way are expelled from the source rock (primary migration). This is just what happens when a sponge (the source rock) is pressed between two porous bricks (the surrounding reservoir rocks): the displacement is governed by the difference in pressure between the source rock and the drains, a difference due to the greater compressibility of the former. After expulsion from the source rock, the hydrocarbons migrate towards the surface of the basin along the drains (secondary migration), until they eventually encounter a trap where they can accumulate.

Hydrocarbons will eventually find their way to the surface if they are not held back by the traps or if the cover forms an inadequate seal. In such cases, hydrocarbons will occur as natural localized emanations of oil or gas (seepages, shows). These seepages can be found in most of the petroleum provinces which are currently active. In some cases, this arrival of oil at shallow depth leads to the formation of enormous superficial accumulations impregnating the exposed rocks; they were exploited throughout Antiquity as a source of bitumen. Oils reaching the surface in this way, or which accumulate at shallow depths, are altered by bacteria that render the oils very viscous (Connan, 1984; Head et al., 2003). The analysis of seepages, when they exist, was used by the early explorationists to locate oil deposits and still forms part of the panoply of modern-day oil exploration.
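The evolution section above quotes a geothermal gradient of about 30°C/km. As a minimal illustration (not part of the original report), the following Python sketch converts burial depth into an approximate source-rock temperature; the 20°C surface temperature and the 60–120°C oil-window bounds are assumed, commonly cited textbook values rather than figures from this report.

```python
# Minimal sketch: source-rock temperature under progressive burial,
# assuming a constant geothermal gradient (all values illustrative).

SURFACE_TEMP_C = 20.0       # assumed mean surface temperature, deg C
GRADIENT_C_PER_KM = 30.0    # geothermal gradient quoted in the text

def temperature_at_depth(depth_km: float) -> float:
    """Linear approximation: T(z) = T_surface + gradient * z."""
    return SURFACE_TEMP_C + GRADIENT_C_PER_KM * depth_km

for depth in (1.0, 2.0, 3.0, 4.0, 5.0):
    t = temperature_at_depth(depth)
    # 60-120 deg C is an assumed, commonly cited oil-window range.
    status = "oil window" if 60.0 <= t <= 120.0 else "outside oil window"
    print(f"{depth:.0f} km burial -> ~{t:.0f} deg C ({status})")
```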
| https://www.scribd.com/doc/48096687/Basin-and-Petroleum-System
4.1875 | A fun, easy-to-implement collection of activities that give elementary and middle-school students a real understanding of key math concepts
Math is a difficult and abstract subject for many students, yet teachers need to make sure their students comprehend basic math concepts. This engaging activity book is a resource teachers can use to give students concrete understanding of the math behind the questions on most standardized tests, and includes information that will give students a firm grounding to work with more advanced math concepts.
Math Wise! is a key resource for teachers who want to teach their students the fundamentals that drive math problems.
| http://www.moviemars.com/professional-and-technical/book-math-wise-9780470471999.htm
4.09375 | A scientific control is an experiment or observation designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method.
An example of a scientific control (sometimes called an "experimental control") might be testing plant fertilizer by giving it to only half the plants in a garden: the plants that receive no fertilizer are the control group, because they establish the baseline level of growth that the fertilizer-treated plants will be compared against. Without a control group, the experiment cannot determine whether the fertilizer-treated plants grow more than they would have if untreated.
Ideally, all variables in an experiment will be controlled (accounted for by the control measurements) and none will be uncontrolled. In such an experiment, if all the controls work as expected, it is possible to conclude that the experiment is working as intended and that the results of the experiment are due to the effect of the variable being tested. That is, scientific controls allow an investigator to make a claim like "Two situations were identical until factor X occurred. Since factor X is the only difference between the two situations, the new outcome was caused by factor X."
There are many forms of controlled experiments. A relatively simple one separates research subjects or biological specimens into two groups: an experimental group and a control group. No treatment is given to the control group, while the experimental group is changed according to some key variable of interest, and the two groups are otherwise kept under the same conditions.
Controls eliminate alternate explanations of experimental results, especially experimental errors and experimenter bias. Many controls are specific to the type of experiment being performed, as in the molecular markers used in SDS-PAGE experiments, and may simply have the purpose of ensuring that the equipment is working properly. The selection and use of proper controls to ensure that experimental results are valid (for example, absence of confounding variables) can be very difficult. Control measurements may also be used for other purposes: for example, a measurement of a microphone's background noise in the absence of a signal allows the noise to be subtracted from later measurements of the signal, thus producing a processed signal of higher quality.
For example, if a researcher feeds an experimental artificial sweetener to sixty laboratory rats and observes that ten of them subsequently become sick, the underlying cause could be the sweetener itself or something unrelated. Other variables, which may not be readily obvious, may interfere with the experimental design. For instance, perhaps the rats were simply not supplied with enough food or water, or the water was contaminated and undrinkable, or the rats were under some psychological or physiological stress, etc. Eliminating each of these possible explanations individually would be time-consuming and difficult. However, if a control group is used that does not receive the sweetener but is otherwise treated identically, any difference between the two groups can be ascribed to the sweetener itself with much greater confidence.
Types of control
The simplest types of control are negative and positive controls, and both are found in many different types of experiments. These two controls, when both are successful, are usually sufficient to eliminate most potential confounding variables: it means that the experiment produces a negative result when a negative result is expected, and a positive result when a positive result is expected.
Negative controls are groups where no phenomenon is expected. They ensure that there is no effect when there should be no effect. To continue with the example of drug testing, a negative control is a group that has not been administered the drug of interest. This group receives either no preparation at all or a sham preparation (that is, a placebo): an excipient-only (also called vehicle-only) preparation or the proverbial "sugar pill." We would say that the control group should show a negative or null effect.
In an example where there are only two possible outcomes, positive and negative, then if the treatment group and the negative control both produce a negative result, it can be inferred that the treatment had no effect. If the treatment group and the negative control both produce a positive result, it can be inferred that a confounding variable acted on the experiment, and the positive results are not due to the treatment.
In other examples, outcomes might be measured as lengths, times, percentages, and so forth. For the drug testing example, we could measure the percentage of patients cured. In this case, the treatment is inferred to have no effect when the treatment group and the negative control produce the same results. Some improvement is expected in the placebo group due to the placebo effect, and this result sets the baseline which the treatment must improve upon. Even if the treatment group shows improvement, it needs to be compared to the placebo group. If the groups show the same effect, then the treatment was not responsible for the improvement (because the same number of patients were cured in the absence of the treatment). The treatment is only effective if the treatment group shows more improvement than the placebo group.
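As a minimal sketch of how such cure percentages might be compared, the Python snippet below tests whether a hypothetical treatment group improved beyond a placebo group. It is not from the original article: the counts are invented, and Fisher's exact test is just one common choice for comparing a 2×2 table of outcomes.

```python
# Minimal sketch: comparing cure counts in a treatment group against a
# placebo control group (all numbers are hypothetical).
from scipy.stats import fisher_exact

cured_treated, failed_treated = 34, 16   # invented treatment outcomes
cured_placebo, failed_placebo = 21, 29   # invented placebo outcomes

table = [[cured_treated, failed_treated],
         [cured_placebo, failed_placebo]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.4f}")
# A small p-value suggests improvement beyond the placebo baseline;
# similar cure rates in both groups would suggest no treatment effect.
```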
Positive controls are groups where a phenomenon is expected. That is, they ensure that there is an effect when there should be an effect, by using an experimental treatment that is already known to produce that effect (and then comparing this to the treatment that is being investigated in the experiment).
Positive controls are often used to assess test validity. For example, to assess a new test's ability to detect a disease (its sensitivity), then we can compare it against a different test that is already known to work. The well-established test is the positive control, since we already know that the answer to the question (whether the test works) is yes.
Similarly, in an enzyme assay to measure the amount of an enzyme in a set of extracts, a positive control would be an assay containing a known quantity of the purified enzyme (while a negative control would contain no enzyme). The positive control should give a large amount of enzyme activity, while the negative control should give very low to no activity.
If the positive control does not produce the expected result, there may be something wrong with the experimental procedure, and the experiment is repeated. For difficult or complicated experiments, the result from the positive control can also help in comparison to previous experimental results. For example, if the well-established disease test was determined to have the same effectiveness as found by previous experimenters, this indicates that the experiment is being performed in the same way that the previous experimenters did.
When possible, multiple positive controls may be used — if there is more than one disease test that is known to be effective, more than one might be tested. Multiple positive controls also allow finer comparisons of the results (calibration, or standardization) if the expected results from the positive controls have different sizes. For example, in the enzyme assay discussed above, a standard curve may be produced by making many different samples with different quantities of the enzyme.
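As an illustration of calibration against positive controls, the sketch below fits a straight-line standard curve to hypothetical assay readings and then reads off an unknown sample. All numbers are invented, and the linear fit is itself an assumption that only holds if the assay responds linearly over this range.

```python
# Minimal sketch: building a standard curve from positive controls with
# known enzyme quantities, then estimating an unknown sample from it.
import numpy as np

known_quantity = np.array([0.0, 5.0, 10.0, 20.0, 40.0])     # e.g. ng enzyme
measured_signal = np.array([0.02, 0.11, 0.20, 0.41, 0.79])  # assay readout

# Least-squares straight line: signal = slope * quantity + intercept
slope, intercept = np.polyfit(known_quantity, measured_signal, 1)

unknown_signal = 0.33
estimated_quantity = (unknown_signal - intercept) / slope
print(f"estimated enzyme quantity: {estimated_quantity:.1f} ng")
```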
In randomization, the groups that receive different experimental treatments are determined randomly. While this does not ensure that there are no differences between the groups, it ensures that the differences are distributed equally, thus correcting for systematic errors.
For example, in experiments where crop yield is affected (e.g. soil fertility), the experiment can be controlled by assigning the treatments to randomly selected plots of land. This mitigates the effect of variations in soil composition on the yield.
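A minimal sketch of that idea follows, with invented plot names and counts; the fixed seed is only there to make the example reproducible.

```python
# Minimal sketch: randomly assigning plots of land to treatment vs.
# control so that soil differences spread evenly across both groups.
import random

plots = [f"plot_{i}" for i in range(1, 21)]   # 20 hypothetical plots
rng = random.Random(42)                       # reproducible assignment
rng.shuffle(plots)

treatment_group = sorted(plots[:10])   # receive the fertilizer
control_group = sorted(plots[10:])     # receive no fertilizer
print("treatment:", treatment_group)
print("control:  ", control_group)
```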
In blind experiments, at least some information is withheld from participants in the experiments (but not the experimenter). For example, to evaluate the success of a medical treatment, an outside expert might be asked to examine blood samples from each of the patients without knowing which patients received the treatment and which did not. If the expert's conclusions as to which samples represent the best outcome correlate with the patients who received the treatment, this allows the experimenter to have much higher confidence that the treatment is effective.
In double-blind experiments, at least some participants and some experimenters do not possess full information while the experiment is being carried out. Double-blind experiments are most often used in clinical trials of medical treatments, to verify that the supposed effects of the treatment are produced only by the treatment itself. Trials are typically randomized and double-blinded, with two (statistically) identical groups of patients being compared. The treatment group receives the treatment, and the control group receives a placebo. The placebo is the "first" blind, and controls for the patient expectations that come with taking a pill, which can have an effect on patient outcomes. The "second" blind, of the experimenters, controls for the effects on patient expectations due to unintentional differences in the experimenters' behavior. Since the experimenters do not know which patients are in which group, they cannot unconsciously influence the patients. After the experiment is over, they then "unblind" themselves and analyse the results.
In clinical trials involving a surgical procedure, a sham operated group is used to ensure that the data reflect the effects of the experiment itself, and are not a consequence of the surgery. In this case, double blinding is achieved by ensuring that the patient does not know whether their surgery was real or sham, and that the experimenters who evaluate patient outcomes are different from the surgeons and do not know which patients are in which group.
- False positive
- False negative
- Designed experiment
- Controlling for a variable
- James Lind cured scurvy using a controlled experiment that has been described as the first clinical trial.
- Wait list control group
- Life, Vol. II: Evolution, Diversity and Ecology: (Chs. 1, 21-33, 52-57). W. H. Freeman. 1 December 2006. p. 15. ISBN 978-0-7167-7674-1. Retrieved 14 February 2015.
- Johnson PD, Besselsen DG (2002). "Practical aspects of experimental design in animal research" (PDF). ILAR J 43 (4): 202–6. PMID 12391395.
- James Lind (1753). A Treatise of the Scurvy. PDF
- Simon, Harvey B. (2002). The Harvard Medical School guide to men's health. New York: Free Press. p. 31. ISBN 0-684-87181-5. | https://en.wikipedia.org/wiki/Control_experiment |
4.03125 | August 30, 2013
New Wildfire Insights May Change Climate Predictions
Brett Smith for redOrbit.com - Your Universe Online
In the study, which was published in Nature Communications, scientists examined two types of particles taken from the 2011 Las Conchas fire in New Mexico: soot, which is similar to diesel exhaust, and tar balls, small, round organic blobs that are abundant during a biomass fire. The team determined that tar balls made up 80 percent of the particles from the Las Conchas fire.
Using a field emission scanning electron microscope, the team was able to enhance the differences among the various particles. Under the microscope, tar balls were seen as either “dark” or “bright.” The two types have a differing impact on climate change since they absorb and scatter radiation from the sun differently. The team was able to identify four categories of soot, ranging from bare to heavily-coated. Each type of soot particle has different optical properties.
To better understand the particles’ composition and properties, the scientists heated tar balls and soot in a special chamber, essentially baking off their exterior.
The scientists said that determining how these particles affect climate would mean understanding much more than how much heat they can retain. For instance, water vapor condenses more easily on oxidized particles to eventually form clouds. The researchers added that they are not yet able to determine what role these particles play with respect to climate.
“We don’t have an answer to that,” said study author Claudio Mazzoleni, an associate professor of physics at Michigan Technological University. “The particles might be warming in and of themselves, but if they don’t let solar radiation come down through the atmosphere, they could cool the surface. They may have strong effects, but at this point, it’s not wise to say what.”
“However, our study does provide modelers new insights on the smoke particle properties, and accounting for these properties in models might provide an answer to that question,” he added.
“The big thing we learned is that we should not forget about tar balls in climate models,” said co-author Swarup China, a graduate student at MTU, “especially since those models are predicting more and more wildfires.”
The study findings are being published just as California’s Rim fire has been declared the fifth-largest wildfire in state history. According to reports, the fire has grown to almost 200,000 acres.
Some of the acreage consumed in the blaze can be attributed to backfire operations by firefighters. The technique involves lighting low-intensity fires to rob the main fire of potential fuel.
“The fire is not having erratic growth like it was before,” reported Alison Hesterly, a Rim fire information officer. “And the forward spread of the fire is slowing, which is a good thing.” | http://www.redorbit.com/news/science/1112936124/new-wildfire-insights-change-climate-predictions-083013/ |
4.15625 | Dominions were semi-independent polities that were nominally under The Crown, constituting the British Empire and later the British Commonwealth, beginning in the later part of the 19th century. They included Canada, Australia, Pakistan, India, Ceylon (Sri Lanka), New Zealand, Newfoundland, South Africa, and the Irish Free State. The Balfour Declaration of 1926 recognised the Dominions as "autonomous Communities within the British Empire". In 1931 the Statute of Westminster recognized the Dominions as fully sovereign from the United Kingdom, with which they shared a common allegiance to the Crown. The Dominions and later constitutional monarchies within the Commonwealth of Nations maintained the same royal house and royal succession from before full sovereignty, and became known after the year 1953 as Commonwealth realms.
Earlier usage of dominion to refer to a particular territory dates back to the 16th century and was sometimes used to describe Wales from 1535 to around 1800.
In English common law, the Dominions of the British Crown were all the realms and territories under the sovereignty of that Crown. For example, the Order in Council annexing the island of Cyprus in 1914 declared that, from 5 November, the island "shall be annexed to and form part of His Majesty's Dominions".
Use of dominion to refer to a particular territory dates back to the 16th century and was sometimes used to describe Wales from 1535 to around 1800: for instance, the Laws in Wales Act 1535 applies to "the Dominion, Principality and Country of Wales". Dominion, as an official title, was conferred on the Colony of Virginia about 1660 and on the Dominion of New England in 1686. These dominions never had self-governing status. The creation of the short-lived Dominion of New England was designed—contrary to the purpose of later dominions—to increase royal control and to reduce the colony's self-government.
Under the British North America Act 1867, what is now eastern Canada received the status of "Dominion" upon the Confederation of several British possessions in North America. However, it was at the Colonial Conference of 1907 when the self-governing colonies of Canada and the Commonwealth of Australia were referred to collectively as Dominions for the first time. Two other self-governing colonies—New Zealand and Newfoundland—were granted the status of Dominion in the same year. These were followed by the Union of South Africa in 1910 and the Irish Free State in 1922. At the time of the founding of the League of Nations in 1920, the League Covenant made provision for the admission of any "fully self-governing state, Dominion, or Colony", the implication being that "Dominion status was something between that of a colony and a state".
Dominion status was formally defined in the Balfour Declaration of 1926, which recognised these countries as "autonomous Communities within the British Empire", thus acknowledging them as political equals of the United Kingdom. The Statute of Westminster 1931 converted this status into legal reality, making them essentially independent members of what was then called the British Commonwealth.
Following the Second World War, the decline of British colonialism led to Dominions generally being referred to as Commonwealth realms and the use of the word dominion gradually diminished. Nonetheless, though disused, it remains Canada's legal title and the phrase Her Majesty's Dominions is still used occasionally in legal documents in the United Kingdom.
The word dominions originally referred to the possessions of the Kingdom of England. Oliver Cromwell's full title in the 1650s was "Lord Protector of the Commonwealth of England, Scotland and Ireland, and the dominions thereto belonging". In 1660, King Charles II gave the Colony of Virginia the title of dominion in gratitude for Virginia's loyalty to the Crown during the English Civil War. The Commonwealth of Virginia, a State of the United States, still has "the Old Dominion" as one of its nicknames. Dominion also occurred in the name of the short-lived Dominion of New England (1686–1689). In all of these cases, the word dominion implied no more than being subject to the English crown.
Responsible government: precursor to Dominion status
The foundation of "Dominion" status followed the achievement of internal self-rule in British Colonies, in the specific form of full responsible government (as distinct from "representative government"). Colonial responsible government began to emerge during the mid-19th century. The legislatures of Colonies with responsible government were able to make laws in all matters other than foreign affairs, defence and international trade, these being powers which remained with the Parliament of the United Kingdom. Bermuda, notably, despite meeting these criteria, was never defined as a Dominion, but as a self-governing colony that remains part of the British Realm.
Nova Scotia, soon followed by the Province of Canada (which included modern southern Ontario and southern Quebec), was the first colony to achieve responsible government, in 1848. Prince Edward Island followed in 1851, and New Brunswick and Newfoundland in 1855. All except for Newfoundland and Prince Edward Island agreed to form a new federation named Canada from 1867. This was instituted by the British Parliament in the British North America Act 1867. (See also: Canadian Confederation). Section 3 of the Act referred to the new entity as a "Dominion", the first such entity to be created. From 1870 the Dominion included two vast neighbouring British territories that did not have any form of self-government: Rupert's Land and the North-Western Territory, parts of which later became the Provinces of Manitoba, Saskatchewan, Alberta, and the separate territories, the Northwest Territories, Yukon and Nunavut. In 1871, the Crown Colony of British Columbia became a Canadian province, Prince Edward Island joined in 1873 and Newfoundland in 1949.
The conditions under which the four separate Australian colonies—New South Wales, Tasmania, Western Australia, South Australia—and New Zealand could gain full responsible government were set out by the British government in the Australian Constitutions Act 1850. The Act also separated the Colony of Victoria (in 1851) from New South Wales. During 1856, responsible government was achieved by New South Wales, Victoria, South Australia, Tasmania, and New Zealand. The remainder of New South Wales was divided in three in 1859, a change that established most of the present borders of NSW: the Colony of Queensland, with its own responsible self-government, and the Northern Territory (which was not granted self-government prior to the federation of the Australian colonies). Western Australia did not receive self-government until 1891, mainly because of its continuing financial dependence on the UK Government. After protracted negotiations (that initially included New Zealand), six Australian colonies with responsible government (and their dependent territories) agreed to federate, along Canadian lines, becoming the Commonwealth of Australia in 1901.
In South Africa, the Cape Colony became the first British self-governing Colony, in 1872. (Until 1893, the Cape Colony also controlled the separate Colony of Natal.) Following the Second Boer War (1899–1902), the British Empire assumed direct control of the Boer Republics, but transferred limited self-government to Transvaal in 1906, and the Orange River Colony in 1907.
The Commonwealth of Australia was recognised as a dominion in 1901, and the Dominion of New Zealand and the Dominion of Newfoundland were officially given Dominion status in 1907, followed by the Union of South Africa in 1910.
Canadian Confederation and evolution of the term Dominion
In connection with proposals for the future government of British North America, use of the term "Dominion" was suggested by Samuel Leonard Tilley at the London Conference of 1866 discussing the confederation of the Province of Canada (subsequently becoming the Province of Ontario and the Province of Quebec), Nova Scotia and New Brunswick into "One Dominion under the Name of Canada", the first federation internal to the British Empire. Tilley's suggestion was taken from the 72nd Psalm, verse eight, "He shall have dominion also from sea to sea, and from the river unto the ends of the earth", which is echoed in the national motto, "A Mari Usque Ad Mare". The new government of Canada under the British North America Act of 1867 began to use the phrase "Dominion of Canada" to designate the new, larger nation. However, neither the Confederation nor the adoption of the title of "Dominion" granted extra autonomy or new powers to this new federal level of government. Senator Eugene Forsey wrote that the powers acquired since the 1840s that established the system of responsible government in Canada would simply be transferred to the new Dominion government:
By the time of Confederation in 1867, this system had been operating in most of what is now central and eastern Canada for almost 20 years. The Fathers of Confederation simply continued the system they knew, the system that was already working, and working well.
The constitutional scholar Andrew Heard has established that Confederation did not legally change Canada's colonial status to anything approaching its later status of a Dominion.
At its inception in 1867, Canada's colonial status was marked by political and legal subjugation to British Imperial supremacy in all aspects of government—legislative, judicial, and executive. The Imperial Parliament at Westminster could legislate on any matter to do with Canada and could override any local legislation, the final court of appeal for Canadian litigation lay with the Judicial Committee of the Privy Council in London, the Governor General had a substantive role as a representative of the British government, and ultimate executive power was vested in the British Monarch—who was advised only by British ministers in its exercise. Canada's independence came about as each of these sub-ordinations was eventually removed.
Heard went on to document the sizeable body of legislation passed by the British Parliament in the latter part of the 19th century that upheld and expanded its Imperial supremacy to constrain that of its colonies, including the new Dominion government in Canada.
When the Dominion of Canada was created in 1867, it was granted powers of self-government to deal with all internal matters, but Britain still retained overall legislative supremacy. This Imperial supremacy could be exercised through several statutory measures. In the first place, the British North America Act of 1867 provided in Section 55 that the Governor General may reserve any legislation passed by the two Houses of Parliament for "the signification of Her Majesty's pleasure", which is determined according to Section 57 by the British Monarch in Council. Secondly, Section 56 provides that the Governor General must forward to "one of Her Majesty's Principal Secretaries of State" in London a copy of any Federal legislation that has been assented to. Then, within two years after the receipt of this copy, the (British) Monarch in Council could disallow an Act. Thirdly, at least four pieces of Imperial legislation constrained the Canadian legislatures. The Colonial Laws Validity Act of 1865 provided that no colonial law could validly conflict with, amend, or repeal Imperial legislation that either explicitly, or by necessary implication, applied directly to that colony. The Merchant Shipping Act of 1894, as well as the Colonial Courts of Admiralty Act of 1890 required reservation of Dominion legislation on those topics for approval by the British Government. Also, the Colonial Stock Act of 1900 provided for the disallowance of any Dominion legislation the British government felt would harm British stockholders of Dominion trustee securities. Most importantly, however, the British Parliament could exercise the legal right of supremacy that it possessed over common law to pass any legislation on any matter affecting the colonies.
For decades, none of the Dominions was allowed to have its own embassies or consulates in foreign countries. All matters concerning international travel, commerce, etc., had to be transacted through British embassies and consulates. For example, all transactions concerning visas and lost or stolen passports by citizens of the Dominions were carried out at British diplomatic offices. It was not until the late 1930s and early 1940s that the Dominion governments were allowed to establish their own embassies, and the first two of these that were established by the Dominion governments in Ottawa and in Canberra were both established in Washington, D.C., in the United States.
As Heard later explained, the British government seldom invoked its powers over Canadian legislation. British legislative powers over Canadian domestic policy were largely theoretical and their exercise was increasingly unacceptable in the 1870s and 1880s. The rise to the status of a Dominion and then full independence for Canada and other possessions of the British Empire did not occur by the granting of titles or similar recognition by the British Parliament but by initiatives taken by the new governments of certain former British dependencies to assert their independence and to establish constitutional precedents.
What is remarkable about this whole process is that it was achieved with a minimum of legislative amendments. Much of Canada's independence arose from the development of new political arrangements, many of which have been absorbed into judicial decisions interpreting the constitution—with or without explicit recognition. Canada's passage from being an integral part of the British Empire to being an independent member of the Commonwealth richly illustrates the way in which fundamental constitutional rules have evolved through the interaction of constitutional convention, international law, and municipal statute and case law.
What was significant about the creation of the Canadian and Australian federations was not that they were instantly granted wide new powers by the Imperial centre at the time of their creation, but that they, because of their greater size and prestige, were better able to exercise their existing powers and lobby for new ones than the various colonies they incorporated could have done separately. They provided a new model which politicians in New Zealand, Newfoundland, South Africa, Ireland, India, and Malaysia could point to for their own relationship with Britain. Ultimately, "[Canada's] example of a peaceful accession to independence with a Westminster system of government came to be followed by 50 countries with a combined population of more than 2 billion people."
Colonial Conference of 1907
Issues of colonial self-government spilled into foreign affairs with the Boer War (1899–1902). The self-governing colonies contributed significantly to British efforts to stem the insurrection, but ensured that they set the conditions for participation in these wars. Colonial governments repeatedly acted to ensure that they determined the extent of their peoples' participation in imperial wars in the military build-up to the First World War.
The assertiveness of the self-governing colonies was recognised in the Colonial Conference of 1907, which implicitly introduced the idea of the Dominion as a self-governing colony by referring to Canada and Australia as Dominions. It also retired the name "Colonial Conference" and mandated that meetings take place regularly to consult Dominions in running the foreign affairs of the empire.
The Colony of New Zealand, which chose not to take part in Australian federation, became the Dominion of New Zealand on 26 September 1907; Newfoundland became a Dominion on the same day. The Union of South Africa was referred to as a Dominion upon its creation in 1910.
First World War and Treaty of Versailles
The initiatives and contributions of British colonies to the British war effort in the First World War were recognised by Britain with the creation of the Imperial War Cabinet in 1917, which gave them a say in the running of the war. Dominion status as self-governing states, as opposed to symbolic titles granted various British colonies, waited until 1919, when the self-governing Dominions signed the Treaty of Versailles independently of the British government and became individual members of the League of Nations. This ended the purely colonial status of the dominions.
The First World War ended the purely colonial period in the history of the Dominions. Their military contribution to the Allied war effort gave them claim to equal recognition with other small states and a voice in the formation of policy. This claim was recognised within the Empire by the creation of the Imperial War Cabinet in 1917, and within the community of nations by Dominion signatures to the Treaty of Versailles and by separate Dominion representation in the League of Nations. In this way the "self-governing Dominions", as they were called, emerged as junior members of the international community. Their status defied exact analysis by both international and constitutional lawyers, but it was clear that they were no longer regarded simply as colonies of Britain.
Irish Free State
The Irish Free State, set up in 1922 after the Anglo-Irish War, was the first Dominion to appoint a non-British, non-aristocratic Governor-General when Timothy Michael Healy took the position in 1922. Dominion status was never popular in the Irish Free State, where people saw it as a face-saving measure for a British government unable to countenance a republic in what had previously been the United Kingdom of Great Britain and Ireland. Successive Irish governments undermined the constitutional links with Britain until they were severed completely in 1949. In 1937 Ireland adopted, almost simultaneously, both a new constitution that included powers for a president of Ireland and a law confirming the king's role as head of state in external relations.
Second Balfour Declaration and Statute of Westminster
The Balfour Declaration of 1926, and the subsequent Statute of Westminster, 1931, restricted Britain's ability to pass or affect laws outside of its own jurisdiction. Significantly, Britain initiated the change to complete sovereignty for the Dominions. The First World War had left Britain saddled with enormous debts, and the Great Depression further reduced its ability to pay for the defence of its empire. In spite of popular opinion to the contrary, the larger Dominions were reluctant to leave the protection of the then-superpower. For example, many Canadians felt that being part of the British Empire was the only thing that had prevented them from being absorbed into the United States.
Until 1931, Newfoundland was referred to as a colony of the United Kingdom, as for example, in the 1927 reference to the Judicial Committee of the Privy Council to delineate the Quebec-Labrador boundary. Full autonomy was granted by the United Kingdom parliament with the Statute of Westminster in December 1931. However, the government of Newfoundland "requested the United Kingdom not to have sections 2 to 6[—]confirming Dominion status[—]apply automatically to it[,] until the Newfoundland Legislature first approved the Statute, approval which the Legislature subsequently never gave". In any event, Newfoundland's letters patent of 1934 suspended self-government and instituted a "Commission of Government", which continued until Newfoundland became a province of Canada in 1949. It is the view of some constitutional lawyers that—although Newfoundland chose not to exercise all of the functions of a Dominion like Canada—its status as a Dominion was "suspended" in 1934, rather than "revoked" or "abolished".
Canada, Australia, New Zealand, the Irish Free State, Newfoundland and South Africa (prior to becoming a republic and leaving the Commonwealth in 1961), with their large populations of European descent, were sometimes collectively referred to as the "White Dominions". Today Canada, Australia, New Zealand and the United Kingdom are sometimes referred to collectively as the "White Commonwealth".
List of Dominions
|Country[‡ 1]||From||To[‡ 2]||Status|
|Canada||1867||—||Continues as a Commonwealth realm and member of the Commonwealth of Nations. 'Dominion' was conferred as the country's title in the 1867 constitution and retained with the constitution's patriation in 1982, but has fallen into disuse.|
|Australia||1901||—||Continues as a Commonwealth realm and member of the Commonwealth of Nations.|
|New Zealand||1907||—||Continues as a Commonwealth realm and member of the Commonwealth of Nations.|
|Newfoundland||1907||1949||After governance had reverted to direct control from London in 1934, became a province of Canada under the British North America Act, 1949 (now the Newfoundland Act), passed in the U.K. parliament on 31 March 1949, prior to the London Declaration of 28 April 1949.|
|South Africa||1910||1953||Continued as a Commonwealth realm until it became a republic in 1961 under the Republic of South Africa Constitution Act, 1961, passed by the Parliament of South Africa (long title "To constitute the Republic of South Africa and to provide for matters incidental thereto"), assented to 24 April 1961 and in operation from 31 May 1961.|
|Irish Free State (1922–37); Éire (1937–49)[‡ 3]||1922||1949||The link with the monarchy ceased with the passage of the Republic of Ireland Act 1948, which came into force on 18 April 1949 and declared that the state was a republic.|
|India||1947||1950||The Union of India (with the addition of Sikkim) became a federal republic after its constitution came into effect on 26 January 1950.|
|Pakistan||1947||1953||Continued as a Commonwealth realm until 1956, when it became a republic under the name "The Islamic Republic of Pakistan": Constitution of 1956.|
|Ceylon||1948||1953||Continued as a Commonwealth realm until 1972, when it became a republic under the name of Sri Lanka.|
Four colonies of Australia had enjoyed responsible government since 1856: New South Wales, Victoria, Tasmania and South Australia. Queensland had responsible government soon after its founding in 1859. Because of its ongoing financial dependence on Britain, Western Australia became the last Australian colony to attain self-government, in 1890. During the 1890s, the colonies voted to unite, and in 1901 they were federated under the British Crown as the Commonwealth of Australia by the Commonwealth of Australia Constitution Act. The Constitution of Australia had been drafted in Australia and approved by popular consent; thus Australia is one of the few countries established by a popular vote. Under the second Balfour Declaration, the federal government was regarded as coequal with (and not subordinate to) the British and other Dominion governments, and this was given formal legal recognition in 1942, when Australia adopted the Statute of Westminster with retroactive effect from the commencement of the Second World War in 1939. In 1930, the Australian prime minister, James Scullin, reinforced the right of the overseas Dominions to appoint native-born governors-general when he advised King George V to appoint Sir Isaac Isaacs as his representative in Australia, against the wishes of the opposition and officials in London. The governments of the States (called colonies before 1901) remained under the Commonwealth but retained links to the UK until the passage of the Australia Act 1986.
The term Dominion is employed in the Constitution Act, 1867 (originally the British North America Act, 1867), and describes the resulting political union. Specifically, the preamble of the act states: "Whereas the Provinces of Canada, Nova Scotia, and New Brunswick have expressed their Desire to be federally united into One Dominion under the Crown of the United Kingdom of Great Britain and Ireland, with a Constitution similar in Principle to that of the United Kingdom..." Furthermore, Sections 3 and 4 indicate that the provinces "shall form and be One Dominion under the Name of Canada; and on and after that Day those Three Provinces shall form and be One Dominion under that Name accordingly".
The phrase Dominion of Canada was employed as the country's name after 1867, predating the general use of the term dominion as applied to the other autonomous regions of the British Empire after 1907. The phrase Dominion of Canada does not appear in the 1867 act nor in the Constitution Act, 1982, but does appear in the Constitution Act, 1871, other contemporaneous texts, and subsequent bills. References to the Dominion of Canada in later acts, such as the Statute of Westminster, do not clarify the point because all nouns were formally capitalised in British legislative style. Indeed, in the original text of the Constitution Act, 1867, "One" and "Name" were also capitalised.
Frank Scott theorised that Canada's status as a Dominion ended with the Canadian parliament's declaration of war on Germany on 9 September 1939. From the 1950s, the federal government began to phase out the use of Dominion, which had been used largely as a synonym of "federal" or "national", as in "Dominion building" for a post office, "Dominion-provincial relations", and so on. The last major change was renaming the national holiday from Dominion Day to Canada Day in 1982. Official bilingualism laws also contributed to the disuse of Dominion, as it has no acceptable equivalent in French.
While the term may be found in older official documents, and the Dominion Carillonneur still tolls at Parliament Hill, it is now hardly used to distinguish the federal government from the provinces or (historically) Canada before and after 1867. Nonetheless, the federal government continues to produce publications and educational materials that specify the currency of these official titles.
Defenders of the title Dominion—including monarchists who see signs of creeping republicanism in Canada—take comfort in the fact that the Constitution Act, 1982 does not mention and therefore does not remove the title, and that a constitutional amendment is required to change it.
The word Dominion has been used with other agencies, laws, and roles:
- Dominion Carillonneur: official responsible for playing the carillons at the Peace Tower since 1916
- Dominion Day (1867–1982): holiday marking Canada's national day; now called Canada Day
- Dominion Observatory (1905–1970): weather observatory in Ottawa; now used as Office of Energy Efficiency, Energy Branch, Natural Resources Canada
- Dominion Lands Act (1872): federal lands act; repealed in 1918
- Dominion Bureau of Statistics (1918–1971): superseded by Statistics Canada
- Dominion Police (1867–1920): merged to form the Royal Canadian Mounted Police (RCMP)
- Dominion Astrophysical Observatory (1918–present); now part of the National Research Council Herzberg Institute of Astrophysics
- Dominion Radio Astrophysical Observatory (1960–present); now part of the National Research Council Herzberg Institute of Astrophysics
- Dominion of Canada Rifle Association: founded in 1868 and incorporated by an Act of Parliament in 1890
Toronto-Dominion Bank (founded as the Dominion Bank in 1871 and later merged with the Bank of Toronto), the Dominion of Canada General Insurance Company (founded in 1887), the Dominion Institute (created in 1997), and Dominion (founded in 1927, renamed as Metro stores beginning in August 2008) are notable Canadian corporations not affiliated with government that have used Dominion as a part of their corporate name.
Ceylon, which, as a crown colony, was originally promised "fully responsible status within the British Commonwealth of Nations", was formally granted independence as a Dominion in 1948. In 1972 it adopted a republican constitution to become the Free, Sovereign and Independent Republic of Sri Lanka. By a new constitution in 1978, it became the Democratic Socialist Republic of Sri Lanka.
India and Pakistan
India officially acquired responsible government in 1909, though the first Parliament did not meet until 1919. In the 1930s, the idea of making British India as it then was into a dominion (the first with a non-European population) was seriously discussed, but it ran into serious obstacles, notably the increasing tensions between Hindus and Muslims. India and Pakistan finally separated as independent dominions in 1947. In the changed post-Second World War conditions this proved a transitory stage: India became a republic in 1950, and Pakistan adopted a republican form of government in 1956.
Irish Free State / Ireland
The Irish Free State (Ireland from 1937) was a British Dominion between 1922 and 1949. As established by the Irish Free State Constitution Act of the United Kingdom Parliament on 6 December 1922 the new state—which had dominion status in the likeness of that enjoyed by Canada within the British Commonwealth of Nations—comprised the whole of Ireland. However, provision was made in the Act for the Parliament of Northern Ireland to opt out of inclusion in the Irish Free State, which—as had been widely expected at the time—it duly did one day after the creation of the new state, on 7 December 1922.
Following a plebiscite of the people of the Free State held on 1 July 1937, a new constitution came into force on 29 December of that year, establishing a successor state with the name of "Ireland" which ceased to participate in Commonwealth conferences and events. Nevertheless, the United Kingdom and other member states of the Commonwealth continued to regard Ireland as a dominion owing to the unusual role accorded to the British Monarch under the Irish External Relations Act of 1936. Ultimately, however, Ireland's Oireachtas passed the Republic of Ireland Act 1948, which came into force on 18 April 1949 and unequivocally ended Ireland's links with the British Monarch and the Commonwealth.
The colony of Newfoundland enjoyed responsible government from 1855 to 1934. It was among the colonies declared Dominions in 1907. Following the recommendations of a Royal Commission, parliamentary government was suspended in 1934 because of severe financial difficulties resulting from the Depression and a series of riots against the Dominion government in 1932. In 1949, Newfoundland joined Canada as a province and its legislature was restored.
The New Zealand Constitution Act 1852 gave New Zealand its own Parliament (General Assembly) and home rule in 1852. In 1907 New Zealand was proclaimed the Dominion of New Zealand. New Zealand, Canada, and Newfoundland used the word Dominion in the official title of the nation, whereas Australia used Commonwealth of Australia and South Africa used Union of South Africa. New Zealand adopted the Statute of Westminster in 1947, and in the same year legislation passed in London gave New Zealand full powers to amend its own constitution. In 1986, the New Zealand parliament passed the Constitution Act 1986, which repealed the Constitution Act of 1852 and severed the last constitutional links with the United Kingdom.
The Union of South Africa was formed in 1910 from the four self-governing colonies of the Cape Colony, Natal, the Transvaal, and the Orange Free State (the last two were former Boer republics). The South Africa Act 1909 provided for a Parliament consisting of a Senate and a House of Assembly. The provinces had their own legislatures. In 1961, the Union of South Africa adopted a new constitution, became a republic, left the Commonwealth (and re-joined following end of Apartheid rule in the 1990s), and became the present-day Republic of South Africa.
Southern Rhodesia (renamed Zimbabwe in 1980) was a special case in the British Empire. Although it was never a dominion, it was treated as a dominion in many respects. Southern Rhodesia was formed in 1923 out of territories of the British South Africa Company and established as a self-governing colony with substantial autonomy on the model of the dominions. The imperial authorities in London retained direct powers over foreign affairs, constitutional alterations, native administration and bills regarding mining revenues, railways and the governor's salary.
Southern Rhodesia was not one of the territories that were mentioned in the 1931 Statute of Westminster although relations with Southern Rhodesia were administered in London through the Dominion Office, not the Colonial Office. When the dominions were first treated as foreign countries by London for the purposes of diplomatic immunity in 1952, Southern Rhodesia was included in the list of territories concerned. This semi-dominion status continued in Southern Rhodesia between 1953 and 1963, when it joined Northern Rhodesia and Nyasaland in the Central African Federation, with the latter two territories continuing to be British protectorates. When Northern Rhodesia was given independence in 1964 it adopted the new name of Zambia, prompting Southern Rhodesia to shorten its name to Rhodesia, but Britain did not recognise this latter change.
Rhodesia unilaterally declared independence from Britain in 1965 as a result of the British government's insistence on majority rule as a condition for independence. London regarded this declaration as illegal, and applied sanctions and expelled Rhodesia from the sterling area. Rhodesia continued with its dominion-style constitution until 1970, and continued to issue British passports to its citizens. The Rhodesian government continued to profess its loyalty to the Sovereign, despite being in a state of rebellion against Her Majesty's Government in London, until 1970, when it adopted a republican constitution following a referendum the previous year. This endured until the state's reconstitution as Zimbabwe Rhodesia in 1979 under the terms of the Internal Settlement; this lasted until the Lancaster House Agreement of December 1979, which put it under interim British rule while fresh elections were held. The country achieved independence deemed legal by the international community in April 1980, when Britain granted independence under the name Zimbabwe.
Initially, the Foreign Office of the United Kingdom conducted the foreign relations of the Dominions. A Dominions section was created within the Colonial Office for this purpose in 1907. Canada set up its own Department of External Affairs in June 1909, but diplomatic relations with other governments continued to operate through the governors-general, Dominion High Commissioners in London (first appointed by Canada in 1880; Australia followed only in 1910), and British legations abroad. Britain deemed her declaration of war against Germany in August 1914 to extend to all territories of the Empire without the need for consultation, occasioning some displeasure in Canadian official circles and contributing to a brief anti-British insurrection by Afrikaner militants in South Africa later that year. A Canadian War Mission in Washington, D.C., dealt with supply matters from February 1918 to March 1921.
Although the Dominions had had no formal voice in declaring war, each became a separate signatory of the June 1919 peace Treaty of Versailles, which had been negotiated by a British-led united Empire delegation. In September 1922, Dominion reluctance to support British military action against Turkey influenced Britain's decision to seek a compromise settlement. Diplomatic autonomy soon followed, with the U.S.-Canadian Halibut Treaty (March 1923) marking the first time an international agreement had been entirely negotiated and concluded independently by a Dominion. The Dominions Section of the Colonial Office was upgraded in June 1926 to a separate Dominions Office; however, initially, this office was held by the same person that held the office of Secretary of State for the Colonies.
The principle of Dominion equality with Britain and independence in foreign relations was formally recognised by the Balfour Declaration, adopted at the Imperial Conference of November 1926. Canada's first permanent diplomatic mission to a foreign country opened in Washington, D.C., in 1927. In 1928, Canada obtained the appointment of a British high commissioner in Ottawa, separating the administrative and diplomatic functions of the governor-general and ending the latter's anomalous role as the representative of the British government in relations between the two countries. The Dominions Office was given a separate secretary of state in June 1930, though this was entirely for domestic political reasons given the need to relieve the burden on one ill minister whilst moving another away from unemployment policy. The Balfour Declaration was enshrined in the Statute of Westminster 1931 when it was adopted by the British Parliament and subsequently ratified by the Dominion legislatures.
Britain's declaration of hostilities against Nazi Germany on 3 September 1939 tested the issue. Most took the view that the declaration did not commit the Dominions. Ireland, which had negotiated the removal of British forces from its territory the year before, chose to remain neutral. At the other extreme, the conservative Australian government of the day, led by Robert Menzies, took the view that, since Australia had not adopted the Statute of Westminster, it was legally bound by the UK declaration of war—which had also been the view at the outbreak of the First World War—though this was contentious within Australia. Between these two extremes, New Zealand declared that as Britain was or would be at war, so it was too; this was, however, a matter of political choice rather than legal necessity. Canada issued its own declaration of war after a recall of Parliament, as did South Africa after a delay of several days (South Africa on 6 September, Canada on 10 September). There were soon signs of growing independence from the other Dominions: Australia opened a diplomatic mission in the US in 1940, as did New Zealand in 1941, and Canada's mission in Washington gained embassy status in 1943.
From Dominions to Commonwealth realms
Initially, the Dominions conducted their own trade policy, some limited foreign relations, and had autonomous armed forces, although the British government claimed and exercised the exclusive power to declare wars. After the passage of the Statute of Westminster, however, the language of dependency on the Crown of the United Kingdom ceased: the Crown itself was no longer referred to as the Crown of any place in particular but simply as "the Crown". Arthur Berriedale Keith, in Speeches and Documents on the British Dominions 1918–1931, stated that "the Dominions are sovereign international States in the sense that the King in respect of each of His Dominions (Newfoundland excepted) is such a State in the eyes of international law". Thereafter, those countries that had been referred to as "Dominions" became Commonwealth realms, where the sovereign reigns no longer as the British monarch but as monarch of each nation in its own right, and they are considered equal to the UK and to one another.
The Second World War, which fatally undermined Britain's already weakened commercial and financial leadership, further loosened the political ties between Britain and the Dominions. Australian Prime Minister John Curtin's unprecedented action (February 1942) in successfully countermanding an order from British Prime Minister Winston Churchill that Australian troops be diverted to defend British-held Burma (the 7th Division was then en route from the Middle East to Australia to defend against an expected Japanese invasion) demonstrated that Dominion governments might no longer subordinate their own national interests to British strategic perspectives. To ensure that Australia had full legal power to act independently, particularly in relation to foreign affairs, defence industry and military operations, and to validate its past independent action in these areas, Australia formally adopted the Statute of Westminster in October 1942 and backdated the adoption to the start of the war in September 1939.
The Dominions Office merged with the India Office as the Commonwealth Relations Office upon the independence of India and Pakistan in August 1947. The last country officially made a Dominion was Ceylon in 1948. The term "Dominion" fell out of general use thereafter. Ireland ceased to be a member of the Commonwealth on 18 April 1949, upon the coming into force of the Republic of Ireland Act 1948. This formally signalled the end of the former dependencies' common constitutional connection to the British crown. India also adopted a republican constitution in January 1950. Unlike many dependencies that became republics, Ireland never re-joined the Commonwealth, which agreed to accept the British monarch as head of that association of independent states.
The independence of the separate realms was emphasised after the accession of Queen Elizabeth II in 1952, when she was proclaimed not just as Queen of the United Kingdom, but also Queen of Canada, Queen of Australia, Queen of New Zealand, and of all her other "realms and territories". This also reflected the change from Dominion to realm; in the proclamation of Queen Elizabeth II's new titles in 1953, the phrase "of her other Realms and Territories" replaced "Dominion" with another mediaeval French word with the same connotation, "realm" (from royaume). Thus, in recent usage, when referring to one of those sixteen countries within the Commonwealth of Nations that share the same monarch, the phrase Commonwealth realm has come into common use instead of Dominion, to differentiate the Commonwealth nations that continue to share the monarch as head of state (Australia, Canada, New Zealand, Jamaica, etc.) from those that do not (India, Pakistan, South Africa, etc.). The term "Dominion" is still found in the Canadian constitution, where it appears numerous times, but it is largely a vestige of the past, as the Canadian government does not actively use it (see Canada section). The term "realm" does not appear in the Canadian constitution.
The generic language of dominion did not cease in relation to the Sovereign. It was, and is, used to describe territories in which the monarch exercises sovereignty. It also describes a model of governance in newly independent British colonies, featuring a Westminster parliamentary government and the British monarch as head of state:
After World War II, Britain attempted to repeat the dominion model in decolonizing the Caribbean. ... Though several colonies, such as Guyana and Trinidad and Tobago, maintained their formal allegiance to the British monarch, they soon revised their status to become republics. Britain also attempted to establish a dominion model in decolonizing Africa, but it, too, was unsuccessful. ... Ghana, the first former colony declared a dominion in 1957, soon demanded recognition as a republic. Other African nations followed a similar pattern throughout the 1960s: Nigeria, Tanganyika, Uganda, Kenya, and Malawi. In fact, only Gambia, Sierra Leone, and Mauritius retained their dominion status for more than three years.
The phrase His/Her Majesty's dominions is a legal and constitutional phrase that refers to all the realms and territories of the Sovereign, whether independent or not. Thus, for example, the British Ireland Act, 1949, recognised that the Republic of Ireland had "ceased to be part of His Majesty’s dominions". When dependent territories that had never been annexed (that is, were not colonies of the Crown), but were mandates, protectorates or trust territories (of the United Nations or the former League of Nations) were granted independence, the United Kingdom act granting independence always declared that such and such a territory "shall form part of Her Majesty's dominions", and so become part of the territory in which the Queen exercises sovereignty, not merely suzerainty.
Many distinctive characteristics that once pertained only to Dominions are now shared by other states in the Commonwealth, whether republics, independent realms, associated states or territories. The practice of appointing a High Commissioner instead of a diplomatic representative such as an ambassador for communication between the government of a dominion and the British government in London continues in respect of Commonwealth realms and republics as sovereign states.
- British Empire
- Changes in British sovereignty
- Commonwealth of Nations
- Crown colony
- High Commissioner (Commonwealth)
- Name of Canada
- Self-governing colony
- United Kingdom
- Merriam Webster's Dictionary (based on Collegiate vol., 11th ed.) 2006. Springfield, MA: Merriam-Webster, Inc.
- Hillmer, Norman (2001). "Commonwealth". Toronto: Canadian Encyclopedia.
...the Dominions (a term applied to Canada in 1867 and used from 1907 to 1948 to describe the empire's other self-governing members)
- Cyprus (Annexation) Order in Council, 1914, dated 5 November 1914.
- Order quoted in The American Journal of International Law, "Annexation of Cyprus by Great Britain"
- "Parliamentary questions, Hansard, 5 November 1934". hansard.millbanksystems.com. 1934-11-05. Retrieved 2010-06-11.
- Roberts, J. M., The Penguin History of the World (London: Penguin Books, 1995, ISBN 0-14-015495-7), p. 777
- League of Nations (1924). "The Covenant of the League of Nations". Article 1: The Avalon Project at Yale Law School. Retrieved 2009-04-20.
- James Crawford, The Creation of States in International Law (Oxford: Oxford University Press, 1979, ISBN 978-0-19-922842-3), p. 243
- "Dominion". Youth Encyclopedia of Canada (based on Canadian Encyclopedia). Historica Foundation of Canada, 2008. Accessed 2008-06-20. "The word "Dominion" is the official status of Canada. ... The term is little used today."
- National Health Service Act 2006 (c. 41), sch. 22
- Link to the Australian Constitutions Act 1850 on the website of the National Archives of Australia: www.foundingdocs.gov.au
- Link to the New South Wales Constitution Act 1855, on the Web site of the National Archives of Australia: www.foundingdocs.gov.au
- Link to the Victoria Constitution Act 1855, on the Web site of the National Archives of Australia: www.foundingdocs.gov.au
- Link to the Constitution Act 1855 (SA), on the Web site of the National Archives of Australia: www.foundingdocs.gov.au
- Link to the Constitution Act 1855 (Tasmania), on the Web site of the National Archives of Australia: www.foundingdocs.gov.au
- Link to the Order in Council of 6 June 1859, which established the Colony of Queensland, on the Web site of the National Archives of Australia.
- The "Northern Territory of New South Wales" was physically separated from the main part of NSW. In 1863, the bulk of it was transferred to South Australia, except for a small area that became part of Queensland. See: Letters Patent annexing the Northern Territory to South Australia, 1863. In 1911, the Commonwealth of Australia agreed to assume responsibility for administration of the Northern Territory, which was regarded by the government of South Australia as a financial burden.www.foundingdocs.gov.au. The NT did not receive responsible government until 1978.
- Link to the Constitution Act 1890, which established self-government in Western Australia: www.foundingdocs.gov.au
- Alan Rayburn (2001). Naming Canada: Stories about Canadian Place Names. University of Toronto Press. pp. 17–21. ISBN 978-0-8020-8293-0.
- "The London Conference December 1866 – March 1867". www.collectionscanada.gc.ca. Retrieved 2010-06-11.
- Andrew Heard (2008-02-05). "Canadian Independence".
- Eugene Forsey (2007-10-14). "How Canadians Govern Themselves".
- Buckley, F. H., The Once and Future King: The Rise of Crown Government in America (Encounter Books, 2014), excerpt: http://fullcomment.nationalpost.com/2014/05/15/f-h-buckley-how-canadas-creation-changed-the-world/.
- F. R. Scott (January 1944). "The End of Dominion Status". The American Journal of International Law (American Society of International Law) 38 (1): 34–49. doi:10.2307/2192530. JSTOR 2192530.
- Europe Since 1914: Encyclopedia of the Age of War and Reconstruction; John Merriman and Jay Winter; 2006; see the British Empire entry which lists the "White Dominions" above except Newfoundland
- J. E. Hodgetts. 2004. "Dominion". Oxford Companion to Canadian History, Gerald Hallowell, ed. (ISBN 0-19-541559-0), at http://www.oxfordreference.com/view/10.1093/acref/9780195415599.001.0001/acref-9780195415599-e-471 - p. 183: "... Ironically, defenders of the title dominion who see signs of creeping republicanism in such changes can take comfort in the knowledge that the Constitution Act, 1982, retains the title and requires a constitutional amendment to alter it."
- Forsey, Eugene A., in Marsh, James H., ed. 1988. "Dominion" The Canadian Encyclopedia. Hurtig Publishers: Toronto.
- "National Flag of Canada Day: How Did You Do?". Department of Canadian Heritage. Retrieved 2008-02-07.
The issue of our country's legal title was one of the few points on which our constitution is not entirely homemade. The Fathers of Confederation wanted to call the country "the Kingdom of Canada". However the British government was afraid of offending the Americans so it insisted on the Fathers finding another title. The term "Dominion" was drawn from Psalm 72. In the realms of political terminology, the term dominion can be directly attributed to the Fathers of Confederation and it is one of the very few, distinctively Canadian contributions in this area. It remains our country's official title.
- s:Republic of South Africa Constitution Act, 1961
- B. Hunter (ed), The Statesman's Year Book 1996–1997, Macmillan Press Ltd, pp. 130–156
- Order in Council of the UK Privy Council, 6 June 1859, establishing responsible government in Queensland. See Australian Government's "Documenting a Democracy" website at this webpage: www.foundingdocs.gov.au
- Constitution Act 1890 (UK), which came into effect as the Constitution of Western Australia when proclaimed in WA on 21 October 1890, and establishing responsible government in WA from that date; Australian Government's "Documenting a Democracy" website: www.foundingdocs.gov.au
- D. Smith, Head of State, Macleay Press 2005, p. 18
- Scott, Frank R. (January 1944). "The End of Dominion Status". The American Journal of International Law (American Society of International Law) 38 (1): 34–49. doi:10.2307/2192530.
- "The Prince of Wales 2001 Royal Visit: April 25 - April 30; Test Your Royal Skills". Department of Canadian Heritage. 2001. Retrieved 2008-02-07.
As dictated by the British North America Act, 1867, the title is Dominion of Canada. The term is a uniquely Canadian one, implying independence and not colonial status, and was developed as a tribute to the Monarchical principle at the time of Confederation.
- "How Canadians Govern Themselves" (PDF). PDF. Retrieved 2008-02-06. Forsey, Eugene (2005). How Canadians Govern Themselves (6th ed.). Ottawa: Her Majesty the Queen in Right of Canada. ISBN 0-662-39689-8.
The two small points on which our constitution is not entirely homemade are, first, the legal title of our country, “Dominion,” and, second, the provisions for breaking a deadlock between the Senate and the House of Commons.
- The Statesman's Year Book, p. 635
- Indian Independence Act 1947, "An Act to make provision for the setting up in India of two independent Dominions, to substitute other provisions for certain provisions of the Government of India Act 1935, which apply outside those Dominions, and to provide for other matters consequential on or connected with the setting up of those Dominions" passed by the U.K. parliament 18 July 1947.
- The Statesman's Year Book, p. 1002
- On 7 December 1922 (the day after the establishment of the Irish Free State) the Parliament resolved to make the following address to the King so as to opt out of the Irish Free State: ”MOST GRACIOUS SOVEREIGN, We, your Majesty's most dutiful and loyal subjects, the Senators and Commons of Northern Ireland in Parliament assembled, having learnt of the passing of the Irish Free State Constitution Act, 1922, being the Act of Parliament for the ratification of the Articles of Agreement for a Treaty between Great Britain and Ireland, do, by this humble Address, pray your Majesty that the powers of the Parliament and Government of the Irish Free State shall no longer extend to Northern Ireland". Source: Northern Ireland Parliamentary Report, 7 December 1922 and Anglo-Irish Treaty, sections 11, 12.
- The Statesman's Year Book, p. 302
- The Statesman's Year Book, p. 303
- The Statesman's Year Book
- "History, Constitutional - The Legislative Authority of the New Zealand Parliament - 1966 Encyclopaedia of New Zealand". www.teara.govt.nz. 2009-04-22. Retrieved 2010-06-11.
- "Dominion status". NZHistory. Retrieved 2010-06-11.
- Prof. Dr. Axel Tschentscher, LL. M. "ICL - New Zealand - Constitution Act 1986". servat.unibe.ch. Retrieved 2010-06-11.
- The Statesman's Year Book, p. 1156
- Wikisource: South Africa Act 1909
- Rowland, J. Reid. "Constitutional History of Rhodesia: An outline": 245–251. Appendix to Berlyn, Phillippa (April 1978). The Quiet Man: A Biography of the Hon. Ian Douglas Smith. Salisbury: M. O. Collins. pp. 240–256. OCLC 4282978.
- Wood, J. R. T. (April 2008). A matter of weeks rather than months: The Impasse between Harold Wilson and Ian Smith: Sanctions, Aborted Settlements and War 1965–1969. Victoria, British Columbia: Trafford Publishing. p. 5. ISBN 978-1-42514-807-2.
- Harris, P. B. (September 1969). "The Rhodesian Referendum: June 20th, 1969" (pdf). Parliamentary Affairs (Oxford University Press) 23: 72–80. Retrieved 2013-06-04.
- Gowlland-Debbas, Vera (1990). Collective Responses to Illegal Acts in International Law: United Nations action in the question of Southern Rhodesia (First ed.). Leiden and New York: Martinus Nijhoff Publishers. p. 73. ISBN 0-7923-0811-5.
- Statute of Westminster Adoption Act 1942 (Act no. 56 of 1942). The long title for the Act was "To remove Doubts as to the Validity of certain Commonwealth Legislation, to obviate Delays occurring in its Passage, and to effect certain related purposes, by adopting certain Sections of the Statute of Westminster, 1931, as from the Commencement of the War between His Majesty the King and Germany." Link: www.foundingdocs.gov.au.
- Brandon Jernigan, "British Empire" in M. Juang & Noelle Morrissette, eds., Africa and the Americas: Culture, Politics, and History (ABC-CLIO, 2008) p. 204.
- Buckley, F. H., The Once and Future King: The Rise of Crown Government in America, Encounter Books, 2014.
- Choudry, Sujit. 2001 (?). "Constitution Acts" (based on looseleaf by Hogg, Peter W.). Constitutional Keywords. University of Alberta, Centre for Constitutional Studies: Edmonton.
- Holland, R. F., Britain and the Commonwealth Alliance 1918-1939, MacMillan, 1981.
- Forsey, Eugene A. 2005. How Canadians Govern Themselves, 6th ed. (ISBN 0-662-39689-8) Canada: Ottawa.
- Hallowell, Gerald, ed. 2004. The Oxford Companion to Canadian History. Oxford University Press: Toronto; p. 183-4 (ISBN 0-19-541559-0).
- Marsh, James H., ed. 1988. "Dominion" et al. The Canadian Encyclopedia. Hurtig Publishers: Toronto.
- Martin, Robert. 1993 (?). 1993 Eugene Forsey Memorial Lecture: A Lament for British North America. The Machray Review. Prayer Book Society of Canada. A summative piece about nomenclature and pertinent history with abundant references.
- Rayburn, Alan. 2001. Naming Canada: stories about Canadian place names, 2nd ed. (ISBN 0-8020-8293-9) University of Toronto Press: Toronto. | https://en.wikipedia.org/wiki/British_Dominions |
4 | corn laws, regulations restricting the export and import of grain, particularly in England. As early as 1361 export was forbidden in order to keep English grain cheap. Subsequent laws, numerous and complex, forbade export unless the domestic price was low and forbade import unless it was high. The purpose of the laws was to assure a stable and sufficient supply of grain from domestic sources, eliminating undue dependence on foreign supplies, yet allowing for imports in time of scarcity. The corn law of 1815 was designed to maintain high prices and prevent an agricultural depression after the Napoleonic Wars. Consumers and laborers objected, but it was the criticism of manufacturers that the laws hampered industrialization by subsidizing agriculture that proved most effective. Following a campaign by the Anti-Corn-Law League, the corn laws were repealed by the Conservative government of Sir Robert Peel in 1846, despite the opposition of many of his own party, led by Lord George Bentinck and Benjamin Disraeli. With the revival of protectionism in the 20th cent., new grain restriction laws have been passed, but they have not been as extensive as those of earlier times.
See D. G. Barnes, A History of English Corn Laws from 1660 to 1846 (1930, repr. 1965); N. Longmate, The Breadstealers (1984).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | http://www.factmonster.com/encyclopedia/history/corn-laws.html |
4.09375 | Pollution affects species directly, leading to mortality (in 6% of globally threatened birds) or reduced reproductive success (in 3%), as well as indirectly, through the degradation of habitats (affecting 11%). Pollution associated with agriculture, forestry and industry is the most common threat, and has the greatest impact on marine and freshwater environments and the species that depend upon them.
Pollutants from a range of sources are causing habitat degradation that indirectly affects 11% of all threatened birds; and pollution has direct impacts on 6% of threatened birds through mortality and a further 3% that experience reduced reproductive success (analysis of data held in BirdLife’s World Bird Database 2008). The number of species affected by pollution is low compared with other threats and, significantly, the problems of pollution are relatively easy to solve.
The major pollutants are effluents: from agriculture, forestry, industry, oil spills and the over-application of herbicides and pesticides (see figure). Effluents cause the greatest damage to aquatic environments; both marine and freshwater. A total of 170 threatened species are affected by one or more pollutants. Of those, 97 (57%) are associated with marine or freshwater habitats (compared with 25% of all threatened birds).
Other specific forms of pollution affect a smaller number of species; among them garbage, acid rain and pollution from artificial lights, which impacts burrow-nesting seabirds such as Newell's Shearwater Puffinus newelli that return to their colonies after dark and become disorientated by artificial lights.
BirdLife International (2008) Pollution from agriculture, forestry and industry has significant impacts on birds. Presented as part of the BirdLife State of the world's birds website. Available from: http://www.birdlife.org/datazone/sowb/casestudy/155. Checked: 14/02/2016
Key message: Pollution has direct and indirect impacts on bird populations. | http://www.birdlife.org/datazone/sowb/casestudy/155 |
4.03125 | Conjunctivitis, also known as "pink eye," is an inflammation of the conjunctiva of the eye. The conjunctiva is the membrane that lines the inside of the eye and also a thin membrane that covers the actual eye.
There are many different causes of conjunctivitis. The following are the most common causes:
- bacteria, including:
- Staphylococcus aureus
- Haemophilus influenza
- Streptococcus pneumoniae
- Neisseria gonorrhea
- Chlamydia trachomatis
- viruses, including:
- herpes virus
- chemicals (seen mostly in the newborn period after the use of medicine in the eye to prevent other problems)
Conjunctivitis is usually divided into at least two categories, newborn conjunctivitis and childhood conjunctivitis, with different causes and treatments for each.
- newborn conjunctivitis
The following are the most common causes and treatment options of newborn conjunctivitis:
- chemical conjunctivitis
This is related to irritation of the eye from the drops given to the newborn to help prevent a bacterial infection. Sometimes, the newborn reacts to the drops and develops a chemical conjunctivitis. The eyes are usually mildly red and inflamed, starting a few hours after the drops have been placed in the eye and lasting only 24 to 36 hours. This type of conjunctivitis usually requires no treatment.
- gonococcal conjunctivitis
This is caused by a bacterium called Neisseria gonorrhea. A newborn acquires this type of conjunctivitis during passage through the birth canal of an infected mother. It may be prevented with the use of eye drops in newborns at birth. The newborn's eyes usually are very red, with thick drainage and swelling of the eyelids. This type usually starts about 2 to 4 days after birth. Treatment for gonococcal conjunctivitis usually will include antibiotics through an intravenous (IV) catheter.
- inclusion conjunctivitis
This is caused by an infection with Chlamydia trachomatis, acquired during passage through the birth canal of an infected mother. The symptoms include moderate thick drainage from the eyes, redness of the eyes, swelling of the conjunctiva, and some swelling of the eyelids. This type of conjunctivitis usually starts 5 to 12 days after birth. Treatment usually will include oral antibiotics.
- other bacterial causes
After the first week of life, other bacteria may be the cause of conjunctivitis in the newborn. The eyes may be red and swollen with some drainage. Treatment depends on the type of bacteria that has caused the infection. Treatment usually will include antibiotic drops or ointments to the eye, warm compresses to the eye, and proper hygiene when touching the infected eyes.
- childhood conjunctivitis
Childhood conjunctivitis is a swelling of the conjunctiva and may also include an infection. It is a very common problem in children, and large outbreaks of conjunctivitis are often seen in daycare settings or schools. The most common causes of childhood conjunctivitis are bacteria, viruses, and allergies.
The following are the most common symptoms of childhood conjunctivitis. However, each child may experience symptoms differently. Symptoms may include:
- itchy, irritated eyes
- clear, thin drainage (usually seen with viral or allergic causes)
- sneezing and runny nose (usually seen with allergic causes)
- stringy discharge from the eyes (usually seen with allergic causes)
- thick, green drainage (usually seen with bacterial causes)
- ear infection (usually seen with bacterial causes)
- lesion with a crusty appearance (usually seen with herpes infection)
- eyes that are matted together in the morning
- swelling of the eyelids
- redness of the conjunctiva
- discomfort when the child looks at a light
- burning in the eyes
The symptoms of conjunctivitis may resemble other medical conditions or problems. Always consult your child's physician for a diagnosis.
Conjunctivitis is usually diagnosed based on a complete medical history and physical examination of your child's eye. Cultures of the eye drainage are usually not required, but may be done to help confirm the cause of the infection.
Specific treatment for conjunctivitis will be determined by your physician based on:
- your child's age, overall health, and medical history
- extent of the condition
- your child's tolerance for specific medications, procedures, or therapies
- expectations for the course of the condition
- your opinion or preference
Specific treatment depends on the underlying cause of the conjunctivitis.
- bacterial causes
Your child's physician may order antibiotic drops to put in the eyes.
- viral causes
Viral conjunctivitis usually does not require treatment. Your child's physician may order antibiotic drops for the eyes to help decrease the chance of a secondary infection.
- allergic causes
Treatment for conjunctivitis caused by allergies usually will involve treating the allergies. Your child's physician may order oral medications or eye drops to help with the allergies.
If your child has an infection of the eye caused by a herpes infection, your child's physician may refer you to an eye care specialist. Your child may be given both oral medications and eye drops. This is a more serious type of infection and may result in scarring of the eye and loss of vision.
Infection can be spread from one eye to the other, or to other people, by touching the affected eye or drainage from the eye. Proper handwashing is very important. Drainage from the eye is contagious for 24 to 48 hours after beginning treatment.
| https://www.ecommunity.com/health/index.aspx?pageid=P00998 |
4.0625 | Students will learn about radioactive decay and how to write nuclear equations.
This concept discusses the cause of radioactivity and the two basic types of radioactive decay.
Study the basics of radioactive decay and the properties of atomic nuclei in Marie Curie's laboratory and classroom.
Learn about how radiocarbon dating works and how anthropologists can use this method to figure out how long ago people lived.
Goes over radioactivity and explains alpha, beta, and gamma radiation.
Reviews radioactivity, its causes, and its effect on the atom.
A list of student-submitted discussion questions for Radiation.
This is an activity for students to complete while reading the Radioactivity Concept.
This study guide gives a brief overview of radiation. | http://www.ck12.org/physics/Radiation/ |
4.25 | Concave mirrors are used in a number of applications. They form upright, enlarged images, and are therefore useful in makeup application or shaving. They are also used in flashlights and headlights because they project parallel beams of light, and in telescopes because they focus light to produce greatly enlarged images.
The photograph above shows the grinding of the primary mirror in the Hubble space telescope. The Hubble Space Telescope is a reflecting telescope with a mirror approximately eight feet in diameter, and was deployed from the Space Shuttle Discovery on April 25, 1990.
Image in a Concave Mirror
Reflecting surfaces do not have to be flat. The most common curved mirrors are spherical. A spherical mirror is called a concave mirror if the center of the mirror is further from the viewer than the edges are. A spherical mirror is called a convex mirror if the center of the mirror is closer to the viewer than the edges are.
To see how a concave mirror forms an image, consider an object that is very far from the mirror so that the incoming rays are essentially parallel. For an object that is infinitely far away, the incoming rays would be exactly parallel. Each ray would strike the mirror and reflect according to the law of reflection (angle of reflection is equal to the angle of incidence). As long as the section of mirror is small compared to its radius of curvature, all the reflected rays will pass through a common point, called the focal point (f).
If too large a piece of the mirror is used, the rays reflected from the top and bottom edges of the mirror will not pass through the focal point and the image will be blurry. This flaw is called spherical aberration and can be avoided either by using very small pieces of the spherical mirror or by using parabolic mirrors.
A line drawn to the exact center of the mirror and perpendicular to the mirror at that point is called the principal axis. The distance along the principal axis from the mirror to the focal point is called the focal length. The focal length is also exactly one-half of the radius of curvature of the spherical mirror. That is, if the spherical mirror has a radius of 8 cm, then the focal length will be 4 cm.
Objects Outside the Center Point
Above is a spherical mirror with the principal axis, the focal point, and the center of curvature (C) identified on the image. An object has been placed well beyond C, and we will treat this object as if it were infinitely far away. There are two rays of light leaving any point on the object that can be traced without any drawing tools or measuring devices. The first is a ray that leaves the object and strikes the mirror parallel to the principal axis; this ray will reflect through the focal point. The second is a ray that leaves the object and strikes the mirror by passing through the focal point; this ray will reflect parallel to the principal axis.
These two rays can be seen in the image below. The two reflected rays intersect after reflection at a point between C and F. Since these two rays come from the tip of the object, they will form the tip of the image.
If the image is actually drawn in at the intersection of the two rays, it will be smaller and inverted, as shown below. Rays from every point on the object could be drawn so that every point could be located to draw the image. The result would be the same as shown here. This is true for all concave mirrors with the object outside C: the image will be between C and F, and the image will be inverted and diminished (smaller than the object).
The heights of the object and the image are related to their distances from the mirror. In fact, the ratio of their heights is the same as the ratio of their distances from the mirror. If d_o is the object distance, d_i the image distance, h_o the object height, and h_i the image height, then h_i/h_o = d_i/d_o.
It can also be shown that h_i/h_o = (d_i - f)/f, and from this, we can derive the mirror equation: 1/f = 1/d_o + 1/d_i.
In this equation, f is the focal length, d_o is the object distance, and d_i is the image distance.
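The intervening algebra can be sketched as follows, assuming the two similar-triangle ratios stated above (the last step divides both sides through by f d_o d_i):

```latex
\frac{d_i}{d_o} = \frac{h_i}{h_o} = \frac{d_i - f}{f}
\;\Rightarrow\; f\,d_i = d_o\,d_i - d_o\,f
\;\Rightarrow\; \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}
```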
The magnification equation for a mirror is m = h_i/h_o = d_i/d_o, the image size divided by the object size, where m gives the magnification of the image.
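To make the arithmetic easy to check, here is a small sketch (not part of the original lesson; the function names are our own) that solves the mirror equation for the image distance and applies the height ratio:

```python
# Solve 1/f = 1/d_o + 1/d_i for d_i, then apply h_i/h_o = d_i/d_o.

def image_distance(f, d_o):
    """Return the image distance d_i (all lengths in cm).

    Assumes d_o != f (object not at the focal point). A negative
    result indicates a virtual image behind the mirror."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def image_height(h_o, d_o, d_i):
    """Return the image height, using the magnitude of d_i as the
    lesson does (orientation is described in words, not signs)."""
    return h_o * abs(d_i) / d_o

# The example problem that follows: f = 30.0/2 = 15.0 cm, d_o = 20.0 cm.
f = 30.0 / 2.0
d_i = image_distance(f, 20.0)         # 60.0 cm
h_i = image_height(1.50, 20.0, d_i)   # 4.5 cm
print(f"d_i = {d_i:.1f} cm, h_i = {h_i:.2f} cm")
```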
Example Problem: A 1.50 cm tall object is placed 20.0 cm from a concave mirror whose radius of curvature is 30.0 cm. What is the image distance and what is the image height?
Solution: The focal length is one-half the radius of curvature, so it is 15.0 cm. Substituting into the mirror equation gives 1/15.0 = 1/20.0 + 1/d_i; multiply both sides by 60.0 d_i to get 4.0 d_i = 3.0 d_i + 60.0, so d_i = 60.0 cm.
The image distance is 60.0 cm. The image is 3 times as far from the mirror as the object so it will be 3 times as large, or 4.5 cm tall.
Objects Between the Center Point and the Focal Point
Regardless of where the object is, its image's size and location can be determined using the equations given earlier in this section. Nonetheless, patterns emerge in these characteristics. We already know that the image of an object outside the center point is closer and smaller than the object. When an object is between the center point and the focal point, the image is larger and closer. These characteristics can also be determined by drawing the rays coming off of the object; this is called a ray diagram.
Look again at the image above that was shown earlier in the lesson. If you consider the smaller arrow to be the object and follow the rays backward, the ray diagram makes it clear that if an object is located between the center point and focal point, the image is inverted, larger, and at a greater distance.
Example Problem: If an 3.00 cm tall object is held 15.00 cm away from a concave mirror with a radius of 20.00 cm, describe its image.
Solution: To solve this problem, we must determine the height of the image and the distance from the mirror to the image. The focal length is one-half the radius of curvature, or 10.00 cm. To find the distance, use the mirror equation: 1/10.00 = 1/15.00 + 1/d_i, so 1/d_i = 1/10.00 - 1/15.00 = 1/30.00, and d_i = 30.00 cm.
Next determine the height of the image from the height ratio: h_i = h_o (d_i/d_o) = 3.00 cm × (30.00 cm / 15.00 cm) = 6.00 cm. The image is therefore 6.00 cm tall.
This image is a real image, which means that the rays of light are real rays. These are represented in the ray diagram as solid lines, while virtual rays are dotted lines.
Objects Inside the Focal Point
Consider what happens when the object for a concave mirror is placed between the focal point and the mirror. This situation is sketched below.
Once again, we can trace two rays to locate the image. A ray that originates at the focal point and passes through the tip of the object will reflect parallel to the principal axis. The second ray we trace is the ray that leaves the tip of the object and strikes the mirror parallel to the principal axis.
Below is the ray diagram for this situation. The rays are reflected from the mirror and as they leave the mirror, they diverge. These two rays will never come back together and so a real image is not possible. When an observer looks into the mirror, however, the eye will trace the rays backward as if they had followed a straight line. The dotted lines in the sketch show the lines the rays would have followed behind the mirror. The eye will see an image behind the mirror just as if the rays of light had originated behind the mirror. The image seen will be enlarged and upright. Since the light does not actually pass through this image position, the image is virtual.
Example Problem: A 1.00 cm tall object is placed 10.0 cm from a concave mirror whose radius of curvature is 30.0 cm. Determine the image distance and image size.
Solution: Since the radius of curvature is 30.0 cm, the focal length is 15.0 cm. The object distance of 10.0 cm tells us that the object is between the focal point and the mirror.
Using the mirror equation, 1/f = 1/d_o + 1/d_i, and plugging known values yields 1/15.0 = 1/10.0 + 1/d_i.
Multiplying both sides of the equation by 30.0 d_i yields 2.0 d_i = 3.0 d_i + 30.0, so d_i = -30.0 cm.
The negative image distance indicates that the image is behind the mirror. We know the image is virtual because it is behind the mirror. Since the image is 3 times as far from the mirror as the object, it will be 3 times as tall. Therefore, the image height is 3.00 cm.
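The same helpers from the earlier sketch reproduce this result; the sign of the image distance distinguishes real from virtual images automatically:

```python
# Reusing image_distance and image_height from the earlier sketch:
# f = 15.0 cm, d_o = 10.0 cm (object inside the focal point).
d_i = image_distance(15.0, 10.0)      # -30.0 cm: negative, so virtual
h_i = image_height(1.00, 10.0, d_i)   # 3.00 cm: upright and enlarged
print(d_i, h_i)
```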
- A spherical mirror is concave if the center of the mirror is further from the viewer than the edges.
- For an object that is infinitely far away, incoming rays would be exactly parallel.
- As long as the section of mirror is small compared to its radius of curvature, all the reflected rays will pass through a common point, called the focal point.
- The distance along the principle axis from the mirror to the focal point is called the focal length, f. This is also exactly one-half of the radius of curvature.
- The mirror equation is 1/f = 1/d_o + 1/d_i.
- The magnification equation is m = h_i/h_o = -d_i/d_o.
- For concave mirrors, when the object is outside C, the image will be between C and F and the image will be inverted and diminished (smaller than the object).
- For concave mirrors, when the object is between C and F, the image will be beyond C and will be enlarged and inverted.
- For concave mirrors, when the object is between F and the mirror, the image will be behind the mirror and will be enlarged and upright.
Follow-up questions:
- What is the name of the line that goes through the center of a concave mirror?
- What is the name of the point where the principal axis touches the mirror?
- Light rays that approach the mirror parallel to the principal axis reflect through what point?
- A concave mirror is designed so that a person 20.0 cm in front of the mirror sees an upright image magnified by a factor of two. What is the radius of curvature of this mirror?
- If you have a concave mirror whose focal length is 100.0 cm, and you want an image that is upright and 10.0 times as tall as the object, where should you place the object?
- A concave mirror has a radius of curvature of 20.0 cm. Locate the image for an object distance of 40.0 cm. Indicate whether the image is real or virtual, enlarged or diminished, and upright or inverted.
- A dentist uses a concave mirror to examine a tooth that is 1.00 cm in front of the mirror. The image of the tooth forms 10.0 cm behind the mirror.
- What is the mirror’s radius of curvature?
- What is the magnification of the image?
- When a man stands 1.52 m in front of a shaving mirror, the image produced is inverted and has an image distance of 18.0 cm. How close to the mirror must the man place his face if he wants an upright image with a magnification of 2.0? | http://www.ck12.org/physics/Concave-Mirrors/lesson/Images-in-a-Concave-Mirror/r14/ |
4.125 | ping triangulation: A process developed by IBM in which client requests over the Internet can be routed to the cell that is geographically closest. When one or more mirror sites exist, ping triangulation uses a process called echo location. When a server receives a client request, it sends out an ICMP echo, or ping, packet across the Internet to the mirror sites and times the echo response. From this information, the most appropriate site to respond to the client request is determined. Basically, ping triangulation maps in multidimensional space the location of every mirror site and the end-user, sending that user not only to an open server but to the closest open server.
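The echo-location step is simple to prototype. The sketch below pings each candidate mirror once and routes to the fastest responder; it shells out to the system ping command (raw ICMP sockets require elevated privileges), and the hostnames are placeholders, not real mirrors.

    import subprocess, time

    def echo_time(host):
        """One ICMP echo round trip in seconds, or None if unreachable."""
        start = time.monotonic()
        result = subprocess.run(["ping", "-c", "1", host], capture_output=True)
        return time.monotonic() - start if result.returncode == 0 else None

    def closest_mirror(mirrors):
        timed = [(echo_time(h), h) for h in mirrors]
        reachable = [(t, h) for t, h in timed if t is not None]
        return min(reachable)[1] if reachable else None

    print(closest_mirror(["mirror-us.example.com", "mirror-eu.example.com"]))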
| http://www.webopedia.com/TERM/P/ping_triangulation.html |
4.0625 | In the thick of whale season, researchers from Hawai'i Institute of Marine Biology (HIMB) and the National Oceanic and Atmospheric Administration (NOAA) shed new light on the wintering grounds of the humpback whale. The primary breeding ground for the North Pacific was always thought to be the main Hawaiian Islands (MHI). However, a new study has shown that these grounds extend all the way throughout the Hawaiian Archipelago and into the Northwestern Hawaiian Islands (NWHI), also known as Papahānaumokuākea Marine National Monument (PMNM).
Humpback whales, an endangered species, were once on the brink of extinction due to commercial whaling practices of the last century. Today, thanks to international protection, their numbers have dramatically increased, resulting in a greater presence of these singing mammals during the winter months. Song is produced by male humpback whales during the winter breeding season. All males on a wintering ground sing roughly the same song any given year, but the song changes from year to year. No one is exactly sure why the whales sing, but some researchers believe it could be a display to other males. Between 8,500 and 10,000 whales migrate to Hawai'i each winter, while the rest of the population can be found in places like Taiwan, the Philippines, the Mariana Islands, and Baja California, Mexico, among other Pacific locations (Calambokidis et al. 2008).
Over the past three decades, population recovery has resulted in a steady increase in the number of whales and a geographic expansion of their distribution in the MHI. Until recently, however, no empirical evidence existed that this expansion included the Northwestern Hawaiian Islands. This changed when scientists from HIMB and NOAA published their findings in the current issue of the journal Marine Ecology Progress Series, detailing the presence of humpback whale song in the Northwestern Hawaiian Archipelago. These researchers deployed instruments known as Ecological Acoustic Recorders (EARs) in both the NWHI and MHI to record the occurrence of humpback whale song, as an indicator of winter breeding activity. Humpback whale song was found to be prevalent throughout the NWHI and demonstrated trends very similar to those observed in the MHI.
Dr. Marc Lammers, a researcher at HIMB and the lead scientist of the project explains "these findings are exciting because they force us to re-evaluate what we know about humpback whale migration and the importance of the NWHI to the population." The results are also of particular relevance in light of recent suggestions that an undocumented wintering area for humpback whales exists somewhere in the central North Pacific. Dr. Lammers and his colleagues believe that the NWHI could be that area.
University of Hawaii at Manoa | http://bio-medicine.org/biology-news-1/Researchers-discover-new-wintering-grounds-for-humpback-whales-using-sound-18180-1/ |
4.28125 | Parallelism Teacher Resources
Grammar Bytes - Parallel Structure
The first exercise in a series of worksheets, this handout asks learners to read 10 sets of sentences and choose the one with no errors in structure. Tip: Find all of the worksheets on parallel structure throughout our website and create...
4th - 10th English Language Arts
Grammar Bytes PowerPoint Presentation: Parallel Structure
When preparing students for standardized tests, this presentation about parallel structure could be a great way to review. Using concrete examples, and providing detailed explanations, students could use this as an independent review.
9th - 12th English Language Arts
Persuasion and Parallel Structure
Discuss the definition of parallel structure with your high school class. In small groups, they read a section of "The Declaration of Independence" to identify examples of parallel structure. Each learner writes an essay explaining the...
9th - 12th English Language Arts
Parallelism, Including Correlative Conjunctions and Comparisons
After reading the first reference page about parallel structure using correlative conjunctions, young learners rewrite nine sentences with errors in parallelism. Even the strongest writers in your language arts class could benefit from...
7th - 10th English Language Arts
Parallel Structure Practice
Practice parallel structure with a multiple-choice exercise. Twenty questions challenge learners of all ages to correctly fill in blanks with phrases that are parallel in structure to what is already there. Tip: As noted, this worksheet...
5th - 12th English Language Arts
Parallel Structure, Exercise 3
As the third worksheet in a series about parallel structure, this worksheet continues to challenge students' writing skills. It includes twenty multiple choice questions; students must select the correct phrase to complete each sentence...
4th - 12th English Language Arts
Parallel Structure, Exercise 1
Challenge your pupils' writing skills with this two-page worksheet. There are a total of twenty sentences which must be read in order to determine whether or not they contain errors in parallel structure. Note: This worksheet accompanies...
4th - 11th English Language Arts
Identifying Parallel Structure in Sentences
Examine parallelism in sentence structure. Ninth graders review Lincoln's Gettysburg Address to find examples of parallelism, and look at the Declaration of Independence for the same. They compose an original piece of writing in which...
9th - 10th English Language Arts
Test Review Sheet: Collection Four and Rhetoric
Reinforce rhetorical reading with your eighth grader honors class (or standard-level high schoolers). Using quotes from American presidents and political leaders, pupils identify the rhetorical devices highlighted in each quote....
8th English Language Arts
Grammar Practice: Parallel Structure
Help your young writers improve the clarity of their sentences by showing them how to create parallel structures as they construct sentences. Two exercises give kids practice identifying the correct parallel structure and crafting...
6th - 8th English Language Arts
Test Review Sheet: Collection 8 & Rhetorical Devices
Challenge your literary analysts with this test review sheet. Learners identify rhetorical devices and parallel structure in addition to defining literary devices and vocabulary. While there is no test included, this could be used as a...
8th - 9th English Language Arts | http://www.lessonplanet.com/lesson-plans/parallelism |
4.28125 | What if you knew that 25% of a number was equal to 24? How could you find that number? After completing this Concept, you'll be able to use the percent equation to solve problems like this one.
The percent equation is often used to solve problems. It goes like this: Rate × Total = Part, or equivalently, R% of Total is Part.
Rate is the ratio that the percent represents (R% in the second version).
Total is often called the base unit.
Part is the amount we are comparing with the base unit.
Find 25% of $80.
We are looking for the part. The total is $80. ‘of’ means multiply. The rate is 25%, so we can use the second form of the equation: 25% of $80 is Part, or 0.25 × 80 = Part.
0.25 × 80 = 20, so the Part we are looking for is $20.
Express $90 as a percentage of $160.
This time we are looking for the rate. We are given the part ($90) and the total ($160). Using the rate equation, we get Rate × 160 = 90. Dividing both sides by 160 tells us that the rate is 0.5625, or 56.25%.
$50 is 15% of what total sum?
This time we are looking for the total. We are given the part ($50) and the rate (15%, or 0.15). Using the rate equation, we get 0.15 × Total = 50. Dividing both sides by 0.15, we get Total ≈ 333.33. So $50 is 15% of $333.33.
$96 is 12% of what total sum?
This time we are looking for the total. We are given the part ($96) and the rate (12%, or 0.12). Using the rate equation, we get 0.12 × Total = 96. Dividing both sides by 0.12, we get Total = 800. So $96 is 12% of $800.
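All three unknowns come from rearranging the single equation Rate × Total = Part. A minimal sketch of the three rearrangements in Python (the function names are illustrative, not from the lesson):

    def part(rate, total):          # e.g., 25% of $80
        return rate * total

    def rate(part, total):          # what fraction of total is part?
        return part / total

    def total(part, rate):          # part is rate of what total?
        return part / rate

    print(part(0.25, 80))              # 20.0   -> 25% of $80 is $20
    print(rate(90, 160))               # 0.5625 -> $90 is 56.25% of $160
    print(round(total(50, 0.15), 2))   # 333.33 -> $50 is 15% of $333.33
    print(total(96, 0.12))             # 800.0  -> $96 is 12% of $800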
Find the following.
- 30% of 90
- 27% of 19
- 16.7% of 199
- 11.5% of 10.01
- 0.003% of 1,217.46
- 250% of 67
- 34.5% of y
- 17.02% of y
- x% of 280
- a% of 0.332
Texas Instruments Resources
In the CK-12 Texas Instruments Algebra I FlexBook® resource, there are graphing calculator activities designed to supplement the objectives for some of the lessons in this chapter. See http://www.ck12.org/flexr/chapter/9613. | http://www.ck12.org/arithmetic/Percent-Equations/lesson/Percent-Equations---Intermediate/ |
4.09375 | Video length: 8:04 min.
Notes From Our Reviewers
Teaching Tips
- This could be used in any lesson on climate change, oceanography, marine science, ecology, and chemistry.
- It is important that the students understand that ocean acidification is the other carbon dioxide problem.
- Ocean acidification, while not directly impacting the climate system, is the result of the oceans soaking up much of the CO2 emitted into the atmosphere, resulting in a potentially catastrophic impact on the ocean ecosystem.
About the Science
- The video discusses two indicators of global change effects on the Southern Ocean: (1) changes in Antarctic bottom water and (2) ocean acidification and its effect on pteropod shells.
- Details how tiny plankton and massive ocean currents hold clues to how rapidly the Southern Ocean is changing.
- Additionally the video discusses how the pteropods may provide an early warning of climate "tipping points" to come.
- The video discusses the link between global climate and the potential changes in the formation of Antarctic bottom water, as well as ocean chemistry.
- Comments from expert scientist: As for the scientific strengths, it is very clear and well explained. From the field to the lab to outlining results from the study, the story is easy to understand. This is also very current science and a "hot" topic in marine science (ocean acidification in the Arctic Ocean and its impact on Limacina helicina). This is very interesting material and relevant to CLEAN.
About the Pedagogy
- The narrator asks focused questions in the video, and a transcript is provided for students to read.
- This video explicitly discusses how our understanding of the climate system is improved through observations and data collection, and how these in turn help to inform predictive models.
Next Generation Science Standards supported by this video:
Middle School Disciplinary Core Ideas: 6
MS-PS1.B1:Substances react chemically in characteristic ways. In a chemical process, the atoms that make up the original substances are regrouped into different molecules, and these new substances have different properties from those of the reactants.
MS-ESS2.C1:Water continually cycles among land, ocean, and atmosphere via transpiration, evaporation, condensation and crystallization, and precipitation, as well as downhill flows on land.
MS-ESS2.C4:Variations in density due to variations in temperature and salinity drive a global pattern of interconnected ocean currents.
MS-ESS2.D1:Weather and climate are influenced by interactions involving sunlight, the ocean, the atmosphere, ice, landforms, and living things. These interactions vary with latitude, altitude, and local and regional geography, all of which can affect oceanic and atmospheric flow patterns.
MS-ESS2.D3:The ocean exerts a major influence on weather and climate by absorbing energy from the sun, releasing it over time, and globally redistributing it through ocean currents.
MS-ESS3.D1:Human activities, such as the release of greenhouse gases from burning fossil fuels, are major factors in the current rise in Earth’s mean surface temperature (global warming). Reducing the level of climate change and reducing human vulnerability to whatever climate changes do occur depend on the understanding of climate science, engineering capabilities, and other kinds of knowledge, such as understanding of human behavior and on applying that knowledge wisely in decisions and activities.
High School Disciplinary Core Ideas: 5
HS-PS1.B3:The fact that atoms are conserved, together with knowledge of the chemical properties of the elements involved, can be used to describe and predict chemical reactions.
HS-ESS2.D1:The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space.
HS-ESS2.D3:Changes in the atmosphere due to human activity have increased carbon dioxide concentrations and thus affect climate.
HS-ESS2.E1:The many dynamic and delicate feedbacks between the biosphere and other Earth systems cause a continual co-evolution of Earth’s surface and the life that exists on it.
HS-ESS3.D1:Though the magnitudes of human impacts are greater than they have ever been, so too are human abilities to model, predict, and manage current and future impacts. | http://cleanet.org/resources/42854.html |
4 | Activity 2: Counting Circle
Activity time: 8 minutes
Preparation for Activity
- Clear an area for participants to gather in a circle.
Description of Activity
Participants are challenged to pay extremely close attention to the rest of the group. In this game, they may speak only when their voice will not interrupt someone else.
Gather everyone in a circle. Explain, in these words or your own:
The objective of the game is simple: With everyone in a circle, individuals call out sequential numbers. For instance, Alyssa says "one," Kyle says "two," Nara says "three," etc. However, any time a person speaks over another person, the count starts over. Do not try to plan the order in which people will speak, and do not try to divvy up roles. Instead, try to pay such close attention to everyone in the group that you sense when there is empty space into which you can speak.
Challenge the group to count to ten in this fashion. If the group accomplishes this goal quickly, try the game again, this time with all participants closing or covering their eyes.
After the game, invite reflection with questions such as:
- Did you contribute more to the success of the game by speaking, or by remaining silent?
- Was there any way to tell when it was a good time to speak? How?
- Was it easier or harder to get to ten than you anticipated? What made it easy or hard? | http://www.uua.org/re/tapestry/children/sing/session8/229983.shtml |
4.03125 | French Pronunciation Teacher Resources
Languages as Reflection of Cultures and Civilizations: French Speaking Countries
Expand your class's vision of the French-speaking world by conducting this research project. Pupils focus on building 21st century skills while they look up information about a French-speaking country and put together presentations.
7th - 12th Social Studies & History CCSS: Designed
Pre-AP Strategies for French Language and Culture
Build vocabulary, fluency and confidence in your French speakers by having them participate in some of these engaging activities. Several suggestions are given, but you will have to design the actual instructional activity yourself.
9th - 12th Languages
Apprenons le Francais (Let's Learn French)
Bonjour! Teach your class this basic greetings and much more with a unit of materials. Included here are lessons, vocabulary practice materials and activities, conversation practice handouts, word puzzles, and more to support your class...
K - 8th Languages
J'ai mal à la tête! (I have a headache!) -- Health Expressions in French
Oh, no! Everyone is getting sick! Young French speakers use French expressions regarding physical health, some of which are idioms. With the use of health expressions provided in the lesson plan, pairs work together to write stories that...
K - 4th Languages
An Exploratory Approach to the Teaching of French in the Middle School
Middle schoolers review the most recent vocabulary list of French words. Using literature by Victor Hugo and Guy de Maupassant, they discover the history and culture of France. Using a map and the text, they locate the cities and...
6th - 8th Languages | http://www.lessonplanet.com/lesson-plans/french-pronunciation |
4 | A genetic examination of tarsiers indicates that the saucer-eyed primates developed three-color vision when they were still nocturnal.
A new study suggests that primates' ability to see in three colors may not have evolved as a result of daytime living, as has long been thought. The findings, published in the journal Proceedings of the Royal Society B, are based on a genetic examination of tarsiers, the nocturnal, saucer-eyed primates that long ago branched off from monkeys, apes and humans.
By analyzing the genes that encode photopigments in the eyes of modern tarsiers, the researchers concluded that the last ancestor that all tarsiers had in common had highly acute three-color vision, much like that of modern-day primates.
Such vision would normally indicate a daytime lifestyle. But fossils show that the tarsier ancestor was also nocturnal, strongly suggesting that the ability to see in three colors somehow predated the shift to daytime living.
The coexistence of the two normally incompatible traits suggests that primates were able to function during twilight or bright moonlight for a time before making the transition to a fully diurnal existence.
“Today there is no mammal we know of that has trichromatic vision that lives during night,” said an author of the study, Nathaniel J. Dominy, associate professor of anthropology at Dartmouth. “And if there’s a pattern that exists today, the safest thing to do is assume the same pattern existed in the past.
“We think that tarsiers may have been active under relatively bright light conditions at dark times of the day,” he added. “Very bright moonlight is bright enough for your cones to operate.”
| http://www.scoop.it/t/dark-emperor-and-other-poems-of-the-night/p/4000275416/2013/04/19/for-early-primates-a-night-was-filled-with-color |
4.03125 | geom3d[triangle] - define a triangle
triangle(T, [A, B, C], n)
triangle(T, [l1, l2, l3], n)
the name of the triangle
A, B, C
l1, l2, l3
(optional) list of three names representing the names of the x-axis, y-axis and the z-axis respectively.
A triangle is a polygon having three sides. A vertex of a triangle is a point at which two of the sides meet.
A triangle T can be defined as follows:
from three given points A, B, C.
from three given lines l1, l2, l3.
To access the information relating to a triangle T, use the following function calls:
form(T) - returns the form of the geometric object (i.e., triangle3d if T is a triangle).
DefinedAs(T) - returns the list of three vertices of T.
detail(T) - returns a detailed description of the triangle T.
The command with(geom3d,triangle) allows the use of the abbreviated form of this command.
define three points A(0,0,0), B(1,1,1), and C(1,0,2)
define the triangle T1 that has A,B,C as its vertices
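The Maple input itself appears to have been stripped during extraction. A reconstruction of the two steps, assuming standard geom3d syntax:

    > with(geom3d):
    > point(A, 0, 0, 0), point(B, 1, 1, 1), point(C, 1, 0, 2);
    > triangle(T1, [A, B, C]);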
geom3d[altitude], geom3d[area], geom3d[IsEquilateral], geom3d[IsRightTriangle], geom3d[objects], geom3d[sides]
| http://www.maplesoft.com/support/help/MapleSim/view.aspx?path=geom3d/triangle |
4 | Details about Discourse on Method and The Meditations:
René Descartes was a central figure in the scientific revolution of the seventeenth century. In his Discourse on Method he outlined the contrast between mathematics and experimental sciences, and the extent to which each one can achieve certainty. Drawing on his own work in geometry, optics, astronomy and physiology, Descartes developed the hypothetical method that characterizes modern science, and this soon came to replace the traditional techniques derived from Aristotle. Many of Descartes' most radical ideas - such as the disparity between our perceptions and the realities that cause them - have been highly influential in the development of modern philosophy.
Rent Discourse on Method and The Meditations 1st edition today, or search our site for other textbooks by Rene Descartes. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Penguin Classics. | http://www.chegg.com/textbooks/discourse-on-method-and-the-meditations-1st-edition-9780140442069-0140442065?ii=18&trackid=fc961769&omre_ir=0&omre_sp= |
4.625 | The process of building memory involves three components: the senses, short-term memory and long-term memory. Information is received through the senses and briefly held there for a few seconds. This information is quickly lost if it's not attended to. Attention is a mentally demanding process that chooses between relevant and irrelevant information. Sensory memory is the information received by our senses, and includes visual and auditory sensory memories. Short-term or working memory is a mental storage space, which can store five to nine pieces of information at any one time. Long-term memory is the part of the memory system containing large amounts of data.
The working memory can be described as our ‘mental workspace’. We receive information through sensory stimuli; the information is processed in the working memory, and then stored in the long-term memory. The ‘mental workspace’ has a limited capacity, and is used for cognitive tasks such as comprehension, learning and reasoning. In order for students to actively engage in learning, connections between prior knowledge and new content must be explicit. Content must also be relevant and appeal to students’ interests; otherwise it will not reach our ‘mental workspace’.
There are three main parts that encompass our Long-Term Memory: Episodic Memory, Semantic Memory and Procedural Memory. Episodic Memory stores memories of the past relating to personal experiences; these are specific to each individual’s life. Semantic Memory stores facts and generalised information in the form of plans, concepts, principles and rules. Episodic Memory stores information as images, whereas Semantic Memory stores information as ‘mental files’ such as problem-solving skills. Procedural Memory refers to ‘know-how’: the ability to remember how to perform a task or employ a strategy. The primary strategy for transferring information from Working Memory to Long-Term Memory is referred to as encoding or elaboration. The key ingredient that facilitates long-term storage is meaningfulness. This term refers not to the inherent interest or worthiness of information, but rather to the degree to which it can be related to information already stored in our Long-Term Memory. We also have Implicit and Explicit Memories; Implicit Memory is defined as information that was encoded during a particular episode, and Explicit Memory is used when recalling specific events.
Teaching Memory Strategies:
Teachers can help students to remember facts by presenting the material in different ways, breaking up lessons and providing engaging activities. Breaking activities into smaller segments helps because students are known to have quite short attention spans. By breaking up lessons you only require their attention for smaller periods of time, so they are more likely to stay engaged; we know that memories can only be stored when we are engaged, and passive experiences are not stored. Alternating among various types of class activities keeps the students focused and also allows opportunities for movement... | http://www.studymode.com/essays/Memory-Strategies-And-Their-Place-In-1492988.html |
4 | April 20, 2012
Cambrian Explosion Trigger Found In Grand Canyon
Millions of years ago, the creatures who would become the ancestors of all life, animals and humans alike, were simple, sometimes composed of individual cells. Evolution had been slow, with very little diversification. Then, as the waters began to shift and separate, exposing new areas of land to air and daylight, this slow evolution exploded into activity. Referred to as the Cambrian Explosion, this diversification is estimated to have taken several million years itself, producing many new, multicellular organisms and bringing about the first appearance of shells and skeletons.
Fossil records document this explosion, yet it has been largely misunderstood what caused it to happen and at what time. Now, a second geological mystery, known as the Great Unconformity, may provide geologists and scientists with answers about both. Geoscience professor at the University of Wisconsin-Madison, Shanan Peters, explains the Great Unconformity this way: "The Great Unconformity is a very prominent geomorphic surface and there's nothing else like it in the entire rock record."
The Great Unconformity juxtaposes rocks formed billions of years ago deep within the Earth´s crust with younger rocks formed by deposits left from shallow seas during the Cambrian period. Examples of the Great Unconformity can be seen in the Grand Canyon. Puzzling to geologists and scientists the world over, the Great Unconformity has been seen as a gap in the rock record and even our Earth´s history.
Now Peters, who led the new study, says the gap itself could help us understand what caused the Cambrian Explosion.
In the April 19 issue of the journal Nature, Peters and his colleague Robert Gaines of Pomona College report the Great Unconformity and the Cambrian Explosion may have the same geological forces to thank.
“The magnitude of the unconformity is without rival in the rock record,” Gaines says. “When we pieced that together, we realized that its formation must have had profound implications for ocean chemistry at the time when complex life was just proliferating.”
Looking at more than 20,000 rock samples from across North America, Peters and Gaines found clues of a link between the biological, chemical, and physical effects of the two oddities.
It is believed shallow seas in the Cambrian period repeatedly advanced and retreated across the North American continent, eroding away the surface rock. As this erosion occurred, the basement rock below became exposed to the surface for the first time. As these rocks reacted with the air, a chemical weathering process took place, releasing calcium, iron, potassium, and silica into the oceans. This changed the seawater chemistry and created the Great Unconformity.
The change of seawater chemistry also had an effect on the organisms living there.
“Your body has to keep a balance of these ions in order to function properly,” Peters explains. “If you have too much of one you have to get rid of it, and one way to get rid of it is to make a mineral.”
Fossil records show these changes happened around the same time, something Peters says is notable.
“It´s likely biomineralization didn´t evolve for something, it evolved in response to something – in this case, changing seawater chemistry during the formation of the Great Unconformity. Then once that happened, evolution took it in another direction.”
“Today those biominerals play essential roles as varied as protection (shells and spines), stability (bones), and predation (teeth and claws).”
Peters now says the gap in time expressed by the Great Unconformity may actually be less of a gap and more of a starting point for future research. | http://www.redorbit.com/news/science/1112518024/cambrian-explosion-trigger-found-in-grand-canyon/ |
4.09375 | provided by EG-BAMM, Barbara Wienecke
King penguins are the second largest penguins alive today in terms of size and body weight. The largest penguins are the King penguins’ cousins, the emperor penguins. The International Union for the Conservation of Nature classifies King penguins as “Least Concern”.
King penguins have a circumpolar distribution and breeding colonies are located on the sub-Antarctic islands: Marion, Prince Edward, Crozet, Kerguelen, Heard, Macquarie, South Georgia and the Falkland Islands. Currently a new colony may be in the process of becoming established in Patagonia. The colonies are densely occupied and are located on flat ground or gently rising slopes. Their at-sea distribution varies with season. As most of the islands occupied by King penguins lie north of the Antarctic Polar Frontal Zone (APFZ), King penguins tend to travel south towards the APFZ during the early breeding season (November to April). In winter, they head even farther south towards the ice edge of Antarctica.
King penguins are exquisite divers and in the bird world second only to Emperor penguins. Maximal dive depths of 343 m have been recorded (Pütz and Cherel 2005), but most of the time King penguins hunt at depths of around 80 to 130 m. Deep dives appear to occur only during daylight hours, while night dives tend to be shallow (~30-50 m).
The genus name Aptenodytes ("wingless diver") was assigned to King penguins. In 1778, John Frederik Miller, a British naturalist, chose the species name patachonica (later patagonica) for King penguins. The species is monotypic, although subspecies were suggested in the past: in 1911, the amateur ornithologist Gregory Mathews suggested that there were three subspecies of King penguins.
One, Aptenodytes patagonicus longirostris, was dismissed, but the two others were accepted by James Lee Peters, an American ornithologist who was the curator for birds at the Harvard Museum of Comparative Zoology (Peters 1931). Peters accepted Mathews' notion that A. p. patagonicus was characterised by a ring of blue feathers around the tarsus and occurred at the Falkland Islands and South Georgia. In contrast, the tarsi of A. p. halli were supposed to be white at the front and coloured at the back. A. p. halli was thought to breed at the Kerguelen, Crozet, Prince Edward, Heard, and Macquarie islands. However, examination of images of King penguins from different locations quickly shows that the vast majority of King penguins at any location have the two-coloured feathering on their tarsi. In 1936, Robert C. Murphy also dismissed Mathews' second argument for the division into subspecies, namely that the variations of the colouration in the penguins' flippers were also 'proof' for the existence of subspecies (Murphy 1936). Murphy examined many specimens and found that the variations described by Mathews commonly occurred in all King penguin populations. In 1960, Bernard Stonehouse also concluded that there were no grounds to postulate subspecies among King penguins (Stonehouse 1960).
In one of the first genetic studies on King penguins, French researchers compared DNA of King penguins from the Crozet and the Kerguelen islands. According to Mathews, these two populations should be very similar. However, the genetic distance between them was relatively high (Viot et al. 1993). This is further evidence that the division into subspecies as suggested in 1911 cannot be upheld.
King penguin colonies are located on solid land. Since they incubate their single egg on their feet, they prefer the ground to be rather flat and free of large stones. The colonies are often close to the water's edge of the sub-Antarctic islands the penguins occupy, but some are several hundred metres away from the coast. To a degree, King penguins generate their own breeding space. For example, some narrow, flat coastal areas of Macquarie Island are covered in tussock grass Poa cookii. In some places, King penguins established themselves among the tussock, which over time became sparse because the plants could not thrive in the nitrogen-rich faeces the penguins deposited around them. At Heard Island, the King penguin colonies largely occupy broad valleys away from the coast.
King penguins have the longest breeding cycle among penguins. It takes them 14 to 16 months to rear a chick. Hence, a successful pair is unlikely to attempt breeding more than twice in three years. At no time during the year are their colonies void of penguins, i.e., there are always penguins present. However, their activities vary with the time of year. Many breeders gather in the colonies in October/November. They perform extensive courtship behaviours in search of a mate. It is common to see King penguins in triads on the beaches, where usually two females compete for the same male. Like Emperor penguins, King penguins do not build a nest, but they do fiercely defend a small breeding territory inside the colony area.
The females lay their single egg any time from November till March. Both parents take part in the incubation of their eggs, which usually weigh 230 to 380 g.
Chicks hatch after about 54 d and weigh about 220 g; it takes 2-3 days to get out of the eggs. The chicks are nearly naked when they first leave the egg and entirely dependent upon their parents for warmth and food. For about a month the baby penguins are brooded; both parents share this duty. During brooding, one parent stays with the chick while the other goes out and hunts. When the foraging parent returns, he/she relieves the partner who now goes to sea. The returned parent continues to keep the chick warm and safe and feeds it several times per day.
By April, most chicks have grown to a point at which they are able to regulate their own body temperature. They start gathering in creches, kindergartens for penguins. To survive the coming winter they need sufficient body reserves, because the parents largely leave their offspring in April/May and return only in September/October. A healthy, fat chick that weighed about 8 kg in April weighs only about 5 kg when its parents return the next spring. During the winter, the chicks rarely receive food and gather in large creches to stay warm, as well as to seek safety from predatory birds, such as skuas Catharacta spp and giant petrels Macronectus spp.
Upon their parents' return to the colony, the chicks are fed again and quickly put on body mass. They now have to get ready for the moult, during which they exchange their soft down for "real" feathers that will enable them to survive at sea.
Since during the moult every single feather is replaced, it costs a lot of energy. Chicks and adults whose body reserves are insufficient cannot survive, because as long as the new feathers grow, their plumage is no longer waterproof. If they were to go to sea to feed before their plumage is ready, they will get wet and waterlogged and are likely to die. The well-fed penguins stay out of the water for about a month while they moult. They lose about half their body weight, but their new feathers are soft and shiny and able to keep the penguins warm and dry for another year.
| http://afg.biodiversity.aq/species/193-aptenodytes-patagonicus |
4.03125 | Serious scientific excavations didn't commence at the La Brea Tar Pits until the beginning of the 20th century, but the history of the pits stretches back long before that. It all started millions of years ago when the area we know of today as Los Angeles was submerged underwater. Marine life and sediments accumulated on the ocean floor and eventually the pressure converted the organisms' remains into fossil fuels. Once the ocean receded, that petroleum started seeping its way to the surface -- all beginning about 40,000 years ago.
The tar in the pits, which is more correctly referred to as asphalt, is what's left over after the lighter components of petroleum evaporate away. Incredibly sticky, especially in warm weather, the asphalt has the adhesive power to entrap even large animals. Of the fossilized remains of mammals that have been pulled from the pits, about 90 percent are carnivores [source: The Natural History Museum of Los Angeles County]. This has led the resident paleontologists to suspect events at Rancho La Brea often played out like this: Prey animals, especially weakened or injured ones, would become trapped in the pits. This would draw large numbers of predators to the scene, where they would often become ensnared as well.
The last census of the La Brea collection took place in 1992, and the results were impressive. At the time, the museum housed more than 3.5 million specimens representing more than 600 plant and animal species [source: The Natural History Museum of Los Angeles County]. Excavations have continued apace since then, and experts at the museum suspect the work on something called Project 23 could potentially double the number of specimens in the collection.
We'll talk more about Project 23 on a later page, but for now, let's look at the tar pits' history.
The History of the Tar Pits
Fossil fuels were used by human populations long before the Industrial Revolution, and that includes the asphalt found in the La Brea Tar Pits. For example, Native American tribes used asphalt from the pits to waterproof everything from canoes to baskets.
When the Spanish later occupied the area, they used the land for cattle ranching. It was eventually sold to the Hancock family in 1870, and they drilled for oil. A few studies and small-scale excavations followed, but it wasn't until after the turn of the century that things really started heating up. In 1913, the Natural History Museum of Los Angeles County (known by a slightly different name at the time) was granted access to the lands, and it initiated an intense two-year investigation that uncovered a large portion of the specimens in the collection today. Ninety-six pits were dug during the course of those excavations, but the working conditions were unsafe and the efforts were haphazard. For example, only bones belonging to larger animals received much attention, while smaller fossils, like those of plants and invertebrates, were often overlooked.
A man named L. E. Wyman led those first major excavations, but it was paleontologist Chester Stock of the California Institute of Technology who would do most of the early research work on the recovered remains. Some of the pits proved more bountiful and provocative than others, and some of the most captivating finds came from Pits 3, 4, 9, 61 and 67. But it was Pit 91 that proved to be the real star of the show over the years and has been excavated on and off ever since. More on that on the next page.
Of those 96 pits we discussed on the last page, the most famous and the most actively dissected has to be Pit 91. In fact, for nearly 40 years, it was the only pit under excavation at La Brea. In the late 1960s, researchers at the pits opted to enhance their excavation technique by harvesting all the fossils available in the pit, not just those that belonged to large vertebrates. Having a broader fossil record would offer a more complete picture of the end of the Pleistocene Epoch.
And so on June 13, 1969 -- a day affectionately referred to as "Asphalt Friday" -- excavations recommenced, only this time the remains of amphibians, reptiles, insects, small birds, shells and plants were among the specimens meticulously collected by diggers. And along with those important, if less flashy fossils, Pit 91 has also offered up a whole host of better-known players of the Pleistocene. These include bones from dire wolves, sabertoothed cats, western horses, ground sloths and mammoths -- and the pit is only about 15 feet (4.5 meters) deep!
The vast majority of these remains have been radiocarbon dated to between about 10,000 and 40,000 years old, and Pit 91, like most of the pits, contains fossils from a broad span of time. Thirty thousand years is a long stretch of time for animals to become entrapped, but fossil figures in the millions can still be a little surprising. However, researchers say the numbers make sense; based on what they've found in the pits, it would only have taken about 10 large animals every 30 years to provide the wealth of fossilized remains found to date. If an entrapment event like that happened once every decade, that would mean the number of specimens found so far is more than explained.
Work on Pit 91 is currently on hiatus, however, and that's all because of the accidental discovery of what has been codenamed Project 23.
In 2006, Project 23 began with all the glamour of a parking deck. The Los Angeles County Museum of Art (LACMA) intended to construct a new underground parking garage on land adjacent to the tar pits, but being such a historically important area, that sort of work couldn't take place without a salvage archaeologist. And that was a good thing, too, because during the course of construction, 16 deposits chock full of artifacts were unearthed.
Not wanting to unduly delay construction (it would have taken an estimated 20 years onsite to thoroughly dig through all the deposits, and the people at the LACMA weren't thrilled at the idea of that long a wait), salvage archaeologist Robin Turner engineered a solution. Three-and-a-half months later, 23 wooden crates containing the deposits were hauled out of the earth with cranes and delivered to the Page Museum intact. Heavily wrapped in plastic and weighing up to about 125,000 pounds (55,000 kilograms), the boxed deposits were transported to the Page museum's main research facility -- nicknamed the "fish bowl" -- where the public can watch through glass walls as researchers carefully sift through them.
Probably the most exciting find of the project so far is "Zed," an 80 percent complete Colombian mammoth with tusks. Back when most of the mammoths at the tar pits were discovered, their bones were just mixed together and later put back together at random; the process was sort of like jumbling up the pieces of 30 different jigsaw puzzles and then assembling them back together without regard for which originated from which box. Now curators can delve deeper into the life of a Pleistocene mammoth than they ever have before. Microfossils abound in the matrix encasing Zed's fossils, analogous of just how many mysteries are still waiting to be unraveled at one of Pleistocene Epoch's most enigmatic legacies, the La Brea Tar Pits.
More Great Links
- Chong, Jia-Rui. "Tar Pits' secret bubbles up." The Los Angeles Times. May 14, 2007. (Feb. 24, 2011) http://articles.latimes.com/2007/may/14/local/me-tarpits14
- Griffith, Shirley. "La Brea Tar Pits: Where Animals Lived, and Died, Thousands of Years Ago." Voice of America. Oct. 2, 1007. (Feb. 24, 2011) http://www.voanews.com/learningenglish/home/a-23-2007-10-02-voa1-83131632.html?renderforprint=1
- La Brea Tar Pits Web site. (Feb. 24, 2011) http://www.tarpits.org/
- Kielbasa, John R. "Historic Adobes of Los Angeles County." Dorrance Publishing Company. 1998. (Feb. 24, 2011) http://www.laokay.com/halac/RanchoLaBrea.htm
- Maugh II, Thomas H. "Major cache of fossils unearthed in L.A." The Los Angeles Times. Feb. 18, 2009. (Feb. 24, 2011) http://articles.latimes.com/2009/feb/18/science/sci-fossils18
- Natural History Museum of Los Angeles County Web site. (Feb. 24, 2011) http://www.nhm.org/site/research-collections/rancho-la-brea
- Oliver, Myrna. "George C. Page; Philanthropist Founded La Brea Museum." Los Angeles Times. Nov. 30, 2000. (Feb. 24, 2011) http://articles.latimes.com/2000/nov/30/local/me-59294 | http://science.howstuffworks.com/environmental/earth/geology/la-brea-tar-pits.htm/printable |
4.15625 | An exoplanet (for astronomers) is simply any planet not in this solar system. Are any in that "goldilocks zone", which gives the capability of harboring life as we know it? Can we even see one, given its immense distance?
One must define "goldilocks zone" properly [Habitable zone – Wikipedia, the free encyclopedia]. That zone is the habitable band just far enough from its sun that it is not too hot for life to flourish, but close enough that its water does not freeze.
Astronomer Michael Hart’s computer simulations describe a habitable planet in the “goldilocks zone”. Its orbit must be almost circular, and must make the right sized orbit. Calculations indicate a 5% smaller orbit point to a runaway “greenhouse effect”, or a 1% larger orbit would have resulted in a glacier effect—the freezing of all oceans.
The solar system must be free of large planets with elliptical orbits, which would eject or destroy other planets. Large planets with circular orbits are needed to clear out rogue asteroids that would strike inner planets much more frequently.
An inhabited planet has to be large enough to hold an atmosphere, while small enough so its gravity doesn’t crush inhabitants. The planet must have a moderate temperature. The planet must have a mass between 0.85 and 1.33 of earth’s mass, or within 2 billion years temperature variations would render the planet uninhabitable. [Extraterrestrials, Where Are They?, Second Edition, Edited by Ben Zuckerman and Michael Hart (Cambridge, England: Cambridge University Press, 1995), p. 217].
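Hart's bounds are narrow enough to state in a few lines of arithmetic. A minimal sketch, assuming the quoted 5%/1% orbital margins and the 0.85-1.33 Earth-mass window are taken at face value:

    EARTH_ORBIT_AU = 1.0
    inner = EARTH_ORBIT_AU * (1 - 0.05)   # runaway greenhouse inside this
    outer = EARTH_ORBIT_AU * (1 + 0.01)   # frozen oceans beyond this

    def mass_ok(m_earths):
        return 0.85 <= m_earths <= 1.33

    print(f"habitable band: {inner:.2f}-{outer:.2f} AU "
          f"(width {outer - inner:.2f} AU)")   # 0.95-1.01 AU, width 0.06 AU
    print(mass_ok(1.0))                        # True for an Earth-mass planet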
More importantly, a habitable planet must have some mechanism to keep CO2 from disappearing from the atmosphere. Liquid water begins a chain reaction depleting the atmosphere of CO2 [Ron Cohen, “Interplanetary Odyssey”, Science News (September 28th, 1996), p. 205].
Parts of earth’s surface continually sink where carbonate decomposes to CO2. It then recycles to the surface from volcanic activity, where it refills the atmosphere. We haven’t observed any other planet with similar tectonic activity.
Most extrasolar planets are too distant to detect their weather. Because exoplanets are invisible to the telescope “eye”, any atmosphere is examined by its infrared light, or heat. Infrared measurements are used to map the temperature of the entire surface.
But an observable exoplanet must be a transiting planet: it has to cross directly in front of and behind its star when viewed from Earth. As an extrasolar planet passes in front of its star, it blocks out a small fraction of the star's light, and a host of information about the exoplanet can be learned: size, temperature, orbit, etcetera. Because of their location in the plane of sight relative to their orbited star, billions of exoplanets cannot be detected yet.
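The "small fraction" blocked during a transit is just the ratio of disk areas, (R_planet/R_star)^2, which is why transit photometry gives planet size directly. A rough sketch with standard radii (the values here are illustrative, not from the article):

    R_SUN_KM = 695_700.0
    R_JUPITER_KM = 69_911.0
    R_EARTH_KM = 6_371.0

    def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
        return (r_planet_km / r_star_km) ** 2   # fractional dip in starlight

    print(f"{transit_depth(R_JUPITER_KM):.4%}")   # ~1% dip for a Jupiter
    print(f"{transit_depth(R_EARTH_KM):.4%}")     # ~0.008% dip for an Earth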
Over 300 extrasolar planets have been located and measured by this method, and are called “hot Jupiters” for a reason. Jupiter has many characteristics similar to exoplanets. It is a gas giant, with a crust far beneath the surrounding gas.
The Coriolis Effect causes cyclones and anti-cyclones on Earth. Greatly magnified on Jupiter, these cyclones revolve 2.5x faster than Earth's cyclones. Sheer distance makes cyclones on any exoplanet invisible.
Jupiter has many atmospheric disturbances, with stronger ones absorbing the weaker ones. This may explain the size of the largest spot on Jupiter, the Great Red Spot (GRS). Man has observed this spot for almost 400 years, or nearly as long as the telescope has existed. Over two Earths would fit within this storm.
The GRS is an anti-cyclonic, high-pressure feature in Jupiter's southern hemisphere. It seems to be about 5 miles higher than other cloud tops. A hurricane on Earth, by contrast, is a low-pressure system, rotating counterclockwise in the northern hemisphere. The GRS, however, has been shrinking at 230 miles/yr. But at half the size of the GRS, Jupiter also has the "Oval BA", which appeared in 2000. This was the result of three smaller spots merging. Scientists determined the Oval BA has winds up to 384 miles per hour.
Jupiter’s Three Red Spots (5/23/08)
Credit: NASA, ESA, M. Wong, I. de Pater (UC Berkeley), et al.
All three spots are in this image made on 5/9/08 (Hubble Space Telescope). Jupiter's spots are probably indicative of large-scale climate change. It is getting warmer near the equator. The GRS is also warmer. "Warm", in this case, translates to -250 °F. Surrounding temperatures are colder, at -256 °F. Even that difference generates questions concerning global warming. Changes in Jupiter's weather give rise to debate over perceived climate change on Earth.
In 1998, scientists decided to launch an atmospheric probe into Jupiter. It finally crumpled under pressure 23x higher than Earth's.
On Earth, anticyclones usually indicate fair weather. Jupiter’s anticyclones are also high pressure centers, while cyclones are low pressure. Jupiter is shrinking in size due to gravity. Actually a heat source, it radiates 1.6x more energy than it receives from the Sun.
Juno launched from Cape Canaveral on 8/5/2011, to begin its five-year journey to Jupiter. In 2016, many questions will be answered from Juno’s Jupiter encounter.
Is climate change a normal result of CO2 activity on Jupiter? Obviously not, since Jupiter's atmosphere contains essentially zero CO2. Runaway "greenhouse effects" can occur with a 5% smaller orbit in a "goldilocks zone", but that is never due to CO2. It is always due to orbiting distance.
One almost has to be an exo-atmospheric-meteorologist to understand planetary weather. Likely, the science may have changed in the past few years.
Kevin M. Roeten can be reached at [email protected]. | http://www.redstate.com/diary/roetenks/2011/11/28/jupiter-gives-a-prelude-to-global-warming/ |
4.0625 | So you think global warming is a big problem? What could happen if a 25-million-ton chunk of rock slammed into Earth? When something similar happened 65 million years ago, the dinosaurs and other forms of life were wiped out.
"A collision with an object of this size traveling at an estimated 30,000 to 40,000 mile per hour would be catastrophic," according to NASA researcher and New York City College of Technology (City Tech) Associate Professor of Physics Gregory L. Matloff. His recommendation? "Either destroy the object or alter its trajectory."
Dr. Matloff, whose research includes the best means to avert such a disaster, believes that diverting such objects is the wisest course of action. In 2029 and 2036, the asteroid Apophis (named after the Egyptian god of darkness and the void), at least 1,100 feet in diameter, 90 stories tall, and weighing an estimated 25 million tons, will make two close passes by Earth at a distance of about 22,600 miles.
"We don't always know this far ahead of time that they're coming," Dr. Matloff says, "but an Apophis impact is very unlikely." If the asteroid did hit Earth, NASA estimates, it would strike with 68,000 times the force of the atom bomb that leveled Hiroshima. A possibility also exists that when Apophis passes in 2029, heating as it approaches the sun, it could fragment or emit a tail, which would act like a rocket, unpredictably changing its course. If Apophis or its remnants enter one of two "keyholes" in space, impact might happen when it returns in 2036.
Large chunks of space debris whizzing by the planet, called Near-Earth Objects (NEOs), are of real concern. NASA defines NEOs as comets and asteroids that enter Earth's neighborhood because the gravitational attraction of nearby planets affects their orbits. Dr. Matloff favors diverting rather than exploding them because the latter could create another problem -- debris might bathe Earth in a radioactive shower.
Dr. Matloff's research indicates that an asteroid could be diverted by heating its surface to create a jet stream, which would alter its trajectory, causing it to veer off course. In 2007, with a team at the NASA Marshall Space Flight Center in Huntsville, Alabama, he investigated methods of deflecting NEOs. The team theorized that a solar collector (SC), which is a two-sail solar sail configured to perform as a concentrator of sunlight, could do the trick. Constructed of sheets of reflective metal less than one-tenth the thickness of a human hair, an SC traveling alongside an NEO for a year would concentrate the sun's rays on the asteroid, burn off part of the surface, and create the jet stream.
To do that, it is necessary to know how deeply the light would need to penetrate the NEO's surface. "A beam that penetrates too deeply would simply heat an asteroid," explains Dr. Matloff, "but a beam that penetrates just the right amount -- perhaps about a tenth of a millimeter -- would create a steerable jet and achieve the purpose of deflecting the asteroid."
For the past year, Dr. Matloff and a team of City Tech scientists have been experimenting with red and green lasers to see how deeply they penetrate asteroidal rock, using solid and powdered (regolith) samples from the Allende meteorite that fell in Chihuahua, Mexico in 1969. Dr. Denton Ebel, meteorite curator at the American Museum of Natural History in New York City, provided the samples.
Assistant Professor of Physics Lufeng Leng, a photonics and fiber optics researcher, along with student Thinh Lê, an applied mathematics senior, used lasers to obtain optical transmission measurements (the fraction of light passing through the asteroidal material). Their research was supported by a Professional Staff Congress-City University of New York research grant.
"To my knowledge," says Dr. Matloff, "this is the first experimental measurement of the optical transmission of asteroid samples. Dr. Ebel is encouraging other researchers to repeat and expand on this work."
In a related study, Dr. Leng and her student (whose research was partially supported by City Tech's Emerging Scholars Program) narrowed the red laser beam and scanned the surface of a thin-section Allende sample, discovering that differences in the depth of transmitted light exist, depending on the composition of the material through which the beam passes. From their results, they concluded that lasers aimed from a space probe positioned near an NEO could help determine its surface composition. Using that information, solar sail technology could more accurately focus the sun's rays to penetrate the asteroid's surface to the proper depth, heating it to the correct degree for generating a jet stream that would re-direct the asteroid.
"For certain types of NEOs, by Newton's Third Law, the jet stream created would alter the object's solar orbit, hopefully converting an Earth impact to a near miss," Dr. Matloff states. However, he cautions, "Before concluding that the SC will work as predicted on an actual NEO, samples from other extraterrestrial sources must be analyzed."
Dr. Matloff presented a paper on the results of the City Tech team's optical transmission experiments, "Optical Transmission of an Allende Meteorite Thin Section and Simulated Regolith," at the 73rd Annual Meeting of the international Meteoritical Society, held at the American Museum of Natural History and the Park Central Hotel in New York City.
"At present," he adds, "a debate is underway between American and Russian space agencies regarding Apophis. The Russians believe that we should schedule a mission to this object probably before the first bypass because Earth-produced gravitational effects during that initial pass could conceivably alter the trajectory and properties of the object. On the other hand, Americans generally believe that while an Apophis impact is very unlikely on either pass, we should conduct experiments on an asteroid that runs no risk of ever threatening our home planet."
| http://www.sciencedaily.com/releases/2011/01/110129081532.htm
4.21875 | While the number of confirmed extrasolar planets is now approaching 300, the tally of extrasolar moons so far identified is still a rather disappointing zero.
Planets beyond our solar system are incredibly challenging to find. Moons are nearly impossible with today's technology, given that they are generally expected to be quite small compared to their parent worlds.
Even Earth's moon is invisible on the famous "pale blue dot" image obtained by Voyager 1 from the comparatively small distance of 3.7 billion miles, a photograph taken from well within our solar system.
But the search is not impossible, says Darren Williams, associate professor of physics and astronomy at Penn State Erie, the Behrend College. Williams believes a moon in orbit around a known extrasolar planet will also be detectable if we look hard enough with the right techniques.
"It will add a periodic component to the combined infrared signal" of the planet-moon system, he said.
Why it matters
Finding moons is more than just an academic quest to count them up. Planetary satellites can be highly interesting in their own right.
It's possible, for example, that life could exist on extrasolar moons, researchers say.
And it has been suggested that the ocean tides induced by Earth's moon may have been necessary to create the conditions for life on our planet to begin. At the least, the evolution of life has been affected by our moon's constant tugging.
"We certainly owe our present climate stability to the Moon and its stabilizing influence on the spin axis, but I'm not convinced that big moons are a requirement for simple or advanced life," Williams said. "I do think that Earth would have evolved advance life even with greater seasonal extremes, but it may have taken a different evolutionary path."
How to find them
Williams has modelled an Earth-like planet with moons of varying sizes and concluded that satellites as small as Earth's moon could be detectable in the infrared data, owing to their large surface temperature variations. By studying an extrasolar planet and building up a picture of that world's infrared output, any sizable moons present should be detectable in this way.
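To make the idea concrete, here is a toy light-curve model in which a moon's day-night temperature swing adds a periodic term to the system's combined infrared signal. Every number in it (flux ratio, swing amplitude, orbital period) is invented for illustration and is not taken from Williams' models:

```python
import numpy as np

# Toy illustration of the "periodic component" idea: a moon with large
# thermal variations modulates the combined IR signal of a planet-moon
# system at its orbital period.
t = np.linspace(0.0, 70.0, 2000, endpoint=False)   # days

planet_flux = 1.0          # steady planetary IR output (arbitrary units)
moon_mean = 0.01           # moon contributes ~1% of the total flux
moon_swing = 0.5           # fractional thermal variation over one orbit
orbit_period = 7.0         # days; made-up orbital period

moon_flux = moon_mean * (1.0 + moon_swing * np.sin(2.0 * np.pi * t / orbit_period))
combined = planet_flux + moon_flux

# A Fourier transform of the combined light curve shows a spike at
# 1 / orbit_period sitting on top of the planet's steady signal.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
power = np.abs(np.fft.rfft(combined - combined.mean())) ** 2
print(f"Strongest periodicity: {1.0 / freqs[power.argmax()]:.2f} days")
```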
So far, however, no planet as small as Earth has been detected around another star. But astronomers expect that barrier to be broken soon. Future missions, such as NASA's Terrestrial Planet Finder and the European Space Agency's Darwin, will have the ability to return the valuable data required both for finding other Earths and, Williams figures, some moons.
"The present goal is to build instruments capable of seeing something as large as the Earth or possibly Mars. Smaller Mercury- or Titan-sized objects fall below that first-order threshold," Williams said.
So could these missions cut to the chase and spot an extrasolar moon directly?
"They might, if the light collectors are big enough and if the moons are big enough. It will be easier to see moons that happen to transit the face of a star, such as what the space telescope Kepler will attempt to do starting next year," Williams explained. The space-based Kepler observatory will note dips in starlight caused by planets crossing in front of stars. If the planets are aligned in such a favourable manner, then thinking goes, moons ought to transit the stars too.
A similar conclusion is reached by Szabó, Szatmáry, Divéki and Simon in a paper published in Astronomy and Astrophysics in 2005. They conclude that the Kepler mission should identify a few extrasolar moons using this method of detection.
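The arithmetic behind why moon transits are so much harder than planet transits is short: the fractional dip in starlight is roughly the square of the radius ratio. Assuming a Sun-sized star (the only assumption here; the radii are standard values):

```python
# Rough transit-depth arithmetic for the Kepler discussion above:
# depth ~ (R_body / R_star)^2.
R_SUN_KM = 696_340.0
R_EARTH_KM = 6_371.0
R_MOON_KM = 1_737.4

for name, radius in [("Earth", R_EARTH_KM), ("Moon", R_MOON_KM)]:
    depth = (radius / R_SUN_KM) ** 2
    print(f"{name}-sized transit depth: {depth * 1e6:.0f} ppm")
```

An Earth-sized body dims a Sun-like star by about 84 parts per million, while a Moon-sized one manages only about 6 ppm, right at the edge of what space photometry can reach.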
Yet even if we are not lucky enough to catch an extrasolar moon in transit, these future space-based planet hunters will be able to do the observational groundwork, in visible light and in the infrared, needed to search for satellites.
These planet finders will even be capable of detecting the glint of starlight reflecting off any oceans of liquid water an extrasolar planet may harbor.
"Water is extremely dark in the infrared except when the light reflects from the surface at a glancing angle," Williams told SPACE.com.
This glint will be most apparent when the planet is in a crescent phase, when the starlight hits the reflective surface at an oblique angle. (Mercury and Venus, as seen from Earth, go through phases similar to our moon. Planets around distant stars will undergo phasing, too.) Observing such reflections can help map the planet's thermal output and infer the distribution of oceans and continents.
Indeed the Mars Express spacecraft is set to observe crescent Earth's ocean reflection this summer and in fall of 2009 to help understand the phenomenon.
| http://www.space.com/5469-find-faraway-moons.html
4 | The telephone number that you dial to call somebody is basically an address, similar to the IP address of a computer or the street address of your home. The length of the telephone number varies depending on the country you are calling. In many European countries, phone numbers are variable in length, ranging from just five or six digits in small towns to ten or more in large cities.
In the United States, phone numbers are fixed-length, with a total of 10 digits. The 3-3-4 scheme, developed by AT&T in 1947, uses three blocks of numbers arranged in two blocks of three and a single block of four digits. Look at the main phone number for HowStuffWorks as we go through the meaning of the different blocks.
- Area code - Regulated by the Federal Communications Commission (FCC), area codes are used to designate a specific geographic region, such as a city or part of a state.
- Prefix - The prefix originally referred to the specific switch that a phone line connected to. Each switch at a phone carrier's central office had a unique three-digit number. With the arrival of computerized switches, many systems now allow local number portability (LNP). This means that a customer's phone number can be moved to another switch without having to change any part of it, including the prefix, as long as the customer does not move out of the local-rate area.
- Line number - This is the number assigned at the switch level to the phone line that you are using. Since the number is assigned to the line and not to the phone itself, you can easily change phones or add more phones to the same line.
Think of the three parts like a street address, where the area code is the city, the prefix is the street and the line number is the house. You can even go a step further with this analogy by including the country. The "1" that you dial on long-distance calls within the United States is actually the country code.
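A minimal sketch of splitting a number into those three blocks, assuming well-formed input (the digit-validity rules of the real numbering plan are not modeled here):

```python
# Splits a North American 3-3-4 phone number into the blocks described
# above. Purely illustrative; real validation needs NANP digit rules.
import re

def parse_nanp(number: str) -> dict:
    digits = re.sub(r"\D", "", number)     # strip punctuation and spaces
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                # drop the US country code
    if len(digits) != 10:
        raise ValueError("expected a 10-digit NANP number")
    return {
        "area_code": digits[0:3],    # geographic region
        "prefix": digits[3:6],       # historically, the central-office switch
        "line_number": digits[6:],   # the individual line on that switch
    }

print(parse_nanp("1 (555) 123-4567"))
# {'area_code': '555', 'prefix': '123', 'line_number': '4567'}
```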
Every country has a different country code. To make calls to another country, you must first dial 011, which is the international access code, and then the country code. In addition to country codes, some countries also have city codes that you dial after the country code but before the local number.
| http://electronics.howstuffworks.com/question659.htm
4.03125 | A compression fossil is a fossil preserved in sedimentary rock that has undergone physical compression. While it is uncommon to find animals preserved as good compression fossils, it is very common to find plants preserved this way. The reason for this is that physical compression of the rock often leads to distortion of the fossil.
The best fossils of leaves are found preserved in fine layers of sediment that have been compressed in a direction perpendicular to the plane of the deposited sediment. Since leaves are basically flat, the resulting distortion is minimal. Plant stems and other three-dimensional plant structures do not preserve as well under compression. Typically, only the basic outline and surface features are preserved in compression fossils; internal anatomy is not preserved. These fossils may be studied while still partially entombed in the sedimentary rock matrix where they are preserved, or once lifted out of the matrix by a peel or transfer technique.
Compression fossils are formed most commonly in environments where fine sediment is deposited, such as in river deltas, lagoons, along rivers, and in ponds. The best rocks in which to find these fossils preserved are clay and shale, although volcanic ash may sometimes preserve plant fossils as well.
A slab and counter slab, more often called a part and counterpart in paleoentomology and paleobotany, are the matching halves of a compression fossil, a fossil-bearing matrix formed in sedimentary deposits. When excavated, the matrix may be split along the natural grain or cleavage of the rock. A fossil embedded in the sediment may then also split down the middle, with fossil remains sticking to both surfaces, or the counter slab may simply show a negative impression or mould of the fossil. Comparing slab and counter slab has led to the exposure of a number of fossil forgeries.
Differences between the impressions on slab and counterslab led astronomer Fred Hoyle and applied physicist Lee Spetner in 1985 to declare that some Archaeopteryx fossils had been forged, a claim dismissed by most palaeontologists.
In its November 1999 edition, National Geographic magazine announced the discovery of Archaeoraptor, a link between dinosaurs and birds, from a 125 million-year-old fossil that had come from the Liaoning Province of China. Chinese palaeontologist Xu Xing came into possession of the counter slab through a fossil hunter. On comparing his fossil with images of Archaeoraptor, it became evident that it was a composite fake. His note to National Geographic led to consternation and embarrassment. The journalist Lewis Simons investigated the matter on behalf of National Geographic. In October 2000 he reported what he termed:
"... a tale of misguided secrecy and misplaced confidence, of rampant egos clashing, self-aggrandizement, wishful thinking, naïve assumptions, human error, stubbornness, manipulation, backbiting, lying, corruption, and, most of all, abysmal communication. "
In order to increase their profit, fossil hunters and dealers occasionally sell slab and counter slab separately. A reptile fossil also found in Liaoning Province was described and named Sinohydrosaurus in 1999 by the Beijing Natural History Museum. In the same year the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing described and named Hyphalosaurus lingyuanensis, unaware they were working with the counter slab of the same specimen. Hyphalosaurus is now the accepted name.
| https://en.wikipedia.org/wiki/Counterslab
4.15625 | Refrigeration system diagram
2. Explanation of the diagram

The pressure of the refrigerant gas is increased in the compressor, and it thereby becomes hot. This hot, high-pressure gas is passed into a condenser. The refrigerant gas will be cooled by cooling water, and because it is still at a high pressure it will condense. The liquid refrigerant is then distributed through a pipe network until it reaches a control valve alongside an evaporator where the cooling is required. This regulating valve meters the flow of liquid refrigerant into the evaporator, which is at a lower pressure. Air from the cooled space or a fan is passed over the evaporator and boils off the liquid refrigerant, at the same time cooling the air. The design of the system and evaporator should be such that all liquid refrigerant is boiled off and the gas slightly superheated before it returns to the compressor at a low pressure to be recompressed. Thus it will be seen that heat is transferred from the air to the evaporator, pumped round the system, and finally rejected at the condenser to the ambient air or water. There are five main steps to a refrigeration circuit: evaporation, compression, condensing, receiving and expansion.
1) Evaporation: Liquid refrigerant enters the evaporator. It absorbs heat when it evaporates, which produces cooling. If the load on the evaporator rises and the refrigerant evaporates quicker, the temperature and pressure in the evaporator will rise. If the refrigerant from the evaporator were simply fed to a tank as a weak or saturated superheated gas, the pressure in the tank would rise until it equaled the pressure in the evaporator; refrigerant flow would stop and the temperature in both tank and evaporator would rise to ambient.

2) Compression: That is why a compressor is needed to remove the vapor and maintain the necessary lower pressures and lower temperatures. If the compressor removes vapor faster than it can be formed, the pressure will fall and with it the temperature in the evaporator. The energy that a compressor requires is called compression input and is transferred to the refrigerant vapor.

3) Condensing: After leaving the compressor, the refrigerant moves to the condenser, which gives off heat that is transferred to either air or water having a lower temperature. The amount of heat given off is the heat absorbed by the refrigerant in the evaporator plus the heat created by compression input. The vapor thereby changes to a liquid, which is then sent to the receiver.

4) Receiving: The pressure in the receiver is higher than the pressure in the evaporator because of compression, so the liquid's pressure must be lowered again before it can evaporate.

5) Expansion: This is achieved through the use of an expansion valve. Before the liquid enters the expansion valve, its temperature is just under the boiling point; suddenly reducing the pressure in the valve causes the liquid to boil and evaporate. This evaporation takes place in the evaporator and the circuit is complete. Because the refrigeration circuit is closed, equilibrium is maintained: there are many different temperatures involved in the operation of a refrigeration plant, but in principle there are only two pressures, evaporating pressure and condensing pressure.
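The energy balance in step 3 can be written down directly. The kilowatt figures below are illustrative rather than from this document, and the COP line is the standard figure of merit for such a cycle:

```python
# Energy balance implied by step 3: heat rejected at the condenser equals
# heat absorbed in the evaporator plus the compression input.
def condenser_heat(evaporator_heat_kw: float, compressor_input_kw: float) -> float:
    return evaporator_heat_kw + compressor_input_kw

q_evap = 10.0   # kW absorbed in the evaporator (cooling effect); illustrative
w_comp = 3.0    # kW of compression input; illustrative

q_cond = condenser_heat(q_evap, w_comp)
cop = q_evap / w_comp   # coefficient of performance of the cycle

print(f"Heat rejected at condenser: {q_cond:.1f} kW")
print(f"COP: {cop:.2f}")
```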
3. Components in the refrigeration system

a) Compressor: A refrigerator compressor is the center of the refrigeration cycle. It works as a pump to control the circulation of the refrigerant, and it adds pressure to the refrigerant, heating it up; this action is similar to that of an internal combustion engine in a car. The compressor also draws vapor away from the evaporator to maintain a lower pressure and lower temperature before sending it to the condenser.

b) Oily water separator: It is used to separate the mixture of oil and gas from the refrigerant gas, which improves the efficiency and performance of the refrigeration plant. Iron worn from the compressor is thus barred from entering the rest of the system, avoiding maintenance that would otherwise be needed often.

c) Evaporator: This is the part of the refrigeration system that does the actual cooling. Because its function is to absorb heat into the refrigeration system (from where you don't want it), the evaporator is placed in the area to be cooled. It consists of finned tubes, which absorb heat from the air blown through the coil by a fan; fins and tubes are made of metals with high thermal conductivity to maximize heat transfer. The refrigerant is let in and metered by a flow control device, vaporizes from the heat it absorbs in the evaporator, and is eventually released to the compressor.

d) Silencer: When a fluid is jetted from an orifice of small diameter into an ample space having a much greater cross-sectional area than the orifice, as in the case of a refrigerant jetted from a restriction into a refrigerant pipe of a refrigeration system, jet noise of high frequency is generated in the region around the orifice. To suppress this jet noise, a silencer of this kind includes a casing, an orifice of small diameter arranged in the casing for jetting the refrigerant into it, and an obstruction disposed in the casing within a distance five times the diameter of the orifice. The obstruction disposed at such a position causes a change in the pattern of the fluid flow that effectively suppresses the generation of noise. The obstruction can have various forms, such as bar-like or disc-like, and can be made from various materials such as rubbers, plastics and metals.

e) Accumulator: Accumulators are designed to protect against damage to the compressor. The accumulator is situated between the evaporator and the compressor in the suction line (the pipe running between the two components). Its purpose is to store any excess liquid refrigerant and oil that may not have boiled off in the evaporator. When the liquid refrigerant enters the accumulator it strikes a deflector plate that causes anything still liquid to rest in the holding tank. Accumulators meter small amounts of oil and refrigerant back into the suction line so as not to cause compressor damage.

4. Basics of refrigerants

A refrigerant is a substance used in a heat cycle that usually includes a reversible phase change from a gas to a liquid. Traditionally, fluorocarbons, especially chlorofluorocarbons, were used as refrigerants, but they are being phased out because of their ozone depletion effects. Other common refrigerants used in various applications are ammonia, sulfur dioxide, and non-halogenated hydrocarbons such as methane. Natural refrigerants such as ammonia, carbon dioxide and non-halogenated hydrocarbons preserve the ozone layer and have no (ammonia) or only a low (carbon dioxide, hydrocarbons) global warming potential. They are used in air-conditioning systems for buildings, in sport and leisure facilities, in the chemical/pharmaceutical industry, in the automotive industry and above all in the food industry (production, storage, and retailing). New applications are opening up for natural refrigerants, for example in vehicle air-conditioning.

Emissions from automotive air-conditioning are a growing concern because of their impact on climate change. From 2011 on, the European Union will phase out refrigerants with a global warming potential (GWP) of more than 150 in automotive air conditioning (GWP = 100-year warming potential of one kilogram of a gas relative to one kilogram of CO2). This will ban potent greenhouse gases such as the refrigerant HFC-134a, which has a GWP of 1410, to promote safe and energy-efficient refrigerants. One of the most promising alternatives is the natural refrigerant CO2 (R744). Carbon dioxide is non-flammable, non-ozone depleting and has a global warming potential of 1, but is toxic and potentially lethal in concentrations above 5% by volume. R-744 can be used as a working fluid in climate control systems for cars, residential air conditioning, hot water pumps, commercial refrigeration, and vending machines. Hydrofluoro-olefin (HFO) 1234yf, which is considered to have the highest potential for replacing R-134a, has a GWP rating of 4 and is not a blend; GM has announced that it will start using HFO-1234yf in all of its brands by 2013. Dimethyl ether (DME) is also gaining popularity as a refrigerant. Some refrigerants, such as tetrafluoroethane, are seeing rising use as recreational drugs, leading to an extremely dangerous phenomenon known as inhalant abuse. Note also that refrigerant and lubricant must match: R12 is compatible with mineral oil, while R134a is compatible with synthetic oil.

5. Types of compressor

The main types of refrigeration compressors are reciprocating, rotary screw, scroll and centrifugal. They are used across refrigeration, heat pump, and air-conditioning applications, including food processing, commercial refrigeration, residential air conditioning, hot water pumps, ice rinks and arenas, vending machines, and pharmaceutical manufacturing.

Reciprocating compressors: A reciprocating compressor uses a piston-actuated unloading mechanism with spring-loaded pins to raise the suction valve plate from its seat; this action is similar to an internal combustion engine in a car. The reciprocating compressor is used in low-horsepower applications. Further advantages include simple controls and the ability to control the speed through the use of belt drives.

Rotary screw compressors: Rotary screw compressors have screw spindles that compress the gas as it enters from the evaporator. There are two types of rotary screw compressors: single and twin. The screw compressor features smooth operation and minimal maintenance requirements; typically these compressors only require changes of the oil, the oil filter and the air/oil separator. This type of compressor is efficient at both full- and part-load operation, and microprocessor-based controllers are available that allow the rotary to remain loaded 100 percent of the time.

Scroll compressors: Scroll compressors work by moving one spiral element inside another stationary spiral to produce gas pockets that, as they become smaller, increase the pressure of the gas. During compression, several pockets are compressed at once, allowing the unit to be used at any pressure ratio. By maintaining an even number of balanced gas pockets on opposite sides, the compression forces inside the scroll balance and reduce vibration inside the compressor. This reduces energy use, eliminating wasted space in the compression chamber and eliminating the need to compress gas again and again during the cycle (recompression).

Centrifugal compressors: Centrifugal compressors compress refrigerant gas through the centrifugal force created by rotors that spin at high speed. This energy is then sent to a diffuser, which converts a portion of it into increased pressure by expanding the region of the flow volume to slow the flow velocity of the working fluid. Diffusers may use airfoils, also known as vanes, to improve this. Centrifugal compressors are suited for compressing large volumes of gas to moderate pressures.
| https://www.scribd.com/doc/51205500/Refrigeration-system-diagram
4.125 | Overview of Rickettsial Infections
Rickettsial infections and related infections (such as anaplasmosis, ehrlichiosis, and Q fever) are caused by an unusual type of bacteria that can live only in another organism.
Most of these infections are spread through ticks, mites, fleas, or lice.
A fever, a severe headache, and usually a rash develop, and people feel generally ill.
Symptoms suggest the diagnosis, and to confirm it, doctors do special tests that use a sample from the rash or blood.
Antibiotics are given as soon as doctors suspect one of these infections.
Rickettsiae and rickettsia-like bacteria are an unusual type of bacteria that cause several diseases, including Rocky Mountain spotted fever and epidemic typhus. These bacteria differ from most other bacteria in that they can live and multiply only inside the cells of another organism (host) and cannot survive on their own in the environment. Ehrlichia, Anaplasma, and Coxiella burnetii bacteria are similar to rickettsiae and cause similar diseases.
For many species of these bacteria, small animals (such as rats and mice) are the usual host. Cattle, sheep, or goats are the hosts for Coxiella burnetii, which causes Q fever. Humans are the usual host for Rickettsia prowazekii, which causes epidemic typhus. These animals and humans—the hosts—are called the reservoir of infection. Host animals may or may not be ill from the infection. Rickettsiae and rickettsia-like bacteria are usually spread to people through the bites of ticks, mites, fleas, or lice that previously fed on an infected animal. Ticks, mites, fleas, and lice are called vectors because they convey (transmit) organisms that cause disease. Q fever, caused by Coxiella burnetii, can be spread through the air or in contaminated food and water and does not require a vector. Each species of rickettsiae and rickettsia-like bacteria has its own hosts and, usually, its own vectors.
Some of these bacteria (and the diseases they cause) occur worldwide. Others occur only in certain geographic regions.
A sore covered by a black scab (eschar) often forms at the site of the bite. Nearby lymph nodes may be swollen. Some of these bacteria infect the cells lining small blood vessels, causing the blood vessels to become inflamed or blocked or to bleed into the surrounding tissue. Other bacteria (Ehrlichia and Anaplasma) enter white blood cells. Where damage occurs and how the body responds determine which symptoms develop.
Different rickettsial infections tend to cause similar symptoms: a fever, a severe headache, usually a rash, and a general feeling of illness.
Because the rash often does not appear for several days, early rickettsial infection is often mistaken for a common viral infection, such as influenza. People may have swollen lymph nodes.
As the infection progresses, people typically experience confusion and severe weakness—often with cough, difficulty breathing, and sometimes vomiting. When the infection is advanced, gangrene may develop, the liver or spleen may enlarge, the kidneys may malfunction, and blood pressure may fall dangerously low (causing shock). Death can result.
Because rickettsiae and rickettsia-like bacteria are transmitted by ticks, mites, fleas, and lice, doctors ask people whether they have been bitten by a tick or another vector and whether they have traveled to an area where these infections are common. Being bitten is a helpful clue—particularly in geographic areas where rickettsial or a related infection is common. However, many people do not recall such a bite. If doctors suspect Q fever, they may also ask whether people were at or near a farm (because cattle, sheep, and goats are the host for the bacteria that cause this infection).
Symptoms also help doctors diagnose these infections. Doctors ask people how long it took for the rash to appear after they were bitten (if known) and whether they have other symptoms. A physical examination is done to determine which parts of the body are affected and what the rash looks like. Doctors also look for an eschar that people may not have noticed and for swollen lymph nodes.
Testing is usually needed to confirm the diagnosis. Often, doctors cannot confirm an infection with rickettsiae or rickettsia-like bacteria quickly because these bacteria cannot be identified using commonly available laboratory tests. Special blood tests for these bacteria are not routinely available and take so long to process that people usually need to be treated before test results are available. Doctors base their decision to treat on the person's symptoms and the likelihood of possible exposure.
Useful tests include blood tests that detect antibodies to rickettsiae or rickettsia-like bacteria. If people have a rash, doctors sometimes remove a small sample of affected skin for testing. The polymerase chain reaction (PCR) technique can be used to increase the amount of the bacteria's DNA, so that the bacteria can be detected more rapidly.
Rickettsial infections respond promptly to early treatment with the antibiotics doxycycline (preferred) or chloramphenicol. These antibiotics are given by mouth unless people are very sick. In such cases, antibiotics are given intravenously. Most people noticeably improve in 1 or 2 days, and fever usually disappears in 2 to 3 days. People take the antibiotic for a minimum of 1 week—longer if the fever persists. When treatment begins late, improvement is slower and the fever lasts longer. If the infection is untreated or if treatment is begun too late, people may die, especially if they have epidemic typhus, scrub typhus, or Rocky Mountain spotted fever.
Ciprofloxacin and other similar antibiotics may be used to treat Mediterranean spotted fever but are usually not used to treat other rickettsial or related infections.
Generic name: chloramphenicol (no US brand name)
| http://www.merckmanuals.com/home/infections/rickettsial-and-related-infections/overview-of-rickettsial-infections
4.03125 | Photosynthesis is a process used by plants and other organisms to convert light energy, normally from the Sun, into chemical energy that can be later released to fuel the organisms' activities (energy transformation). This chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water – hence the name photosynthesis, from the Greek φῶς, phōs, "light", and σύνθεσις, synthesis, "putting together". In most cases, oxygen is also released as a waste product. Most plants, most algae, and cyanobacteria perform photosynthesis; such organisms are called photoautotrophs. Photosynthesis maintains atmospheric oxygen levels and supplies all of the organic compounds and most of the energy necessary for life on Earth.
Although photosynthesis is performed differently by different species, the process always begins when energy from light is absorbed by proteins called reaction centres that contain green chlorophyll pigments. In plants, these proteins are held inside organelles called chloroplasts, which are most abundant in leaf cells, while in bacteria they are embedded in the plasma membrane. In these light-dependent reactions, some energy is used to strip electrons from suitable substances, such as water, producing oxygen gas. The hydrogen freed by water splitting is used in the creation of two further compounds: reduced nicotinamide adenine dinucleotide phosphate (NADPH) and adenosine triphosphate (ATP), the "energy currency" of cells.
In plants, algae and cyanobacteria, sugars are produced by a subsequent sequence of light-independent reactions called the Calvin cycle, but some bacteria use different mechanisms, such as the reverse Krebs cycle. In the Calvin cycle, atmospheric carbon dioxide is incorporated into already existing organic carbon compounds, such as ribulose bisphosphate (RuBP). Using the ATP and NADPH produced by the light-dependent reactions, the resulting compounds are then reduced and removed to form further carbohydrates, such as glucose.
The first photosynthetic organisms probably evolved early in the evolutionary history of life and most likely used reducing agents such as hydrogen or hydrogen sulfide, rather than water, as sources of electrons. Cyanobacteria appeared later; the excess oxygen they produced contributed to the oxygen catastrophe, which rendered the evolution of complex life possible. Today, the average rate of energy capture by photosynthesis globally is approximately 130 terawatts, which is about three times the current power consumption of human civilization. Photosynthetic organisms also convert around 100–115 thousand million metric tonnes of carbon into biomass per year.
Photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from carbon dioxide and water using energy from light. However, not all organisms that use light as a source of energy carry out photosynthesis, since photoheterotrophs use organic compounds, rather than carbon dioxide, as a source of carbon. In plants, algae and cyanobacteria, photosynthesis releases oxygen. This is called oxygenic photosynthesis. Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. However, there are some types of bacteria that carry out anoxygenic photosynthesis. These consume carbon dioxide but do not release oxygen.
Carbon dioxide is converted into sugars in a process called carbon fixation. Carbon fixation is an endothermic redox reaction, so photosynthesis needs to supply both a source of energy to drive this process, and the electrons needed to convert carbon dioxide into a carbohydrate via a reduction reaction. The addition of an electron to a chemical species is called a reduction reaction. In general outline and in effect, photosynthesis is the opposite of cellular respiration, in which glucose and other compounds are oxidized to produce carbon dioxide and water, and to release chemical energy (an exothermic reaction) to drive the organism's metabolism. The two processes, of reduction of carbon dioxide to carbohydrate and then the later oxidation of the carbohydrate, take place through a different sequence of chemical reactions and in different cellular compartments.
- CO2 + 2H2A + photons → [CH2O] + 2A + H2O
- carbon dioxide + electron donor + light energy → carbohydrate + oxidized electron donor + water
Since water is used as the electron donor in oxygenic photosynthesis, the equation for this process is:
- n CO2 + 2n H2O + photons → (CH2O)n + n O2 + n H2O
- carbon dioxide + water + light energy → carbohydrate + oxygen + water
This equation emphasizes that water is both a reactant in the light-dependent reactions and a product of the light-independent reactions, but canceling n water molecules from each side gives the net equation:
- CO2 + 2 H2O + photons → CH2O + O2
- carbon dioxide + water + light energy → carbohydrate + oxygen
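Setting n = 6 gives the familiar glucose-balanced form of this net equation (standard textbook stoichiometry, shown here for concreteness):

```latex
% Net oxygenic photosynthesis for one glucose molecule (n = 6):
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{light}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```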
Other processes substitute other compounds (such as arsenite) for water in the electron-supply role; for example, some microbes use sunlight to oxidize arsenite to arsenate. The equation for this reaction is:
- CO2 + (AsO33−) + photons → (AsO43−) + CO
- carbon dioxide + arsenite + light energy → arsenate + carbon monoxide (used to build other compounds in subsequent reactions)
Photosynthesis occurs in two stages. In the first stage, light-dependent reactions or light reactions capture the energy of light and use it to make the energy-storage molecules ATP and NADPH. During the second stage, the light-independent reactions use these products to capture and reduce carbon dioxide.
Archaeobacteria use a simpler method, employing a pigment similar to those used for vision. The archaerhodopsin changes its configuration in response to sunlight, acting as a proton pump. This produces a proton gradient more directly, which is then converted to chemical energy. The process does not involve carbon dioxide fixation and does not release oxygen. It seems to have evolved separately.
Photosynthetic membranes and organelles
In photosynthetic bacteria, the proteins that gather light for photosynthesis are embedded in cell membranes. In its simplest form, this involves the membrane surrounding the cell itself. However, the membrane may be tightly folded into cylindrical sheets called thylakoids, or bunched up into round vesicles called intracytoplasmic membranes. These structures can fill most of the interior of a cell, giving the membrane a very large surface area and therefore increasing the amount of light that the bacteria can absorb.
In plants and algae, photosynthesis takes place in organelles called chloroplasts. A typical plant cell contains about 10 to 100 chloroplasts. The chloroplast is enclosed by a membrane. This membrane is composed of a phospholipid inner membrane, a phospholipid outer membrane, and an intermembrane space between them. Enclosed by the membrane is an aqueous fluid called the stroma. Embedded within the stroma are stacks of thylakoids (grana), which are the site of photosynthesis. The thylakoids appear as flattened disks. The thylakoid itself is enclosed by the thylakoid membrane, and within the enclosed volume is a lumen or thylakoid space. Embedded in the thylakoid membrane are integral and peripheral membrane protein complexes of the photosynthetic system, including the pigments that absorb light energy.
Plants absorb light primarily using the pigment chlorophyll. The green part of the light spectrum is not absorbed but is reflected which is the reason that most plants have a green color. Besides chlorophyll, plants also use pigments such as carotenes and xanthophylls. Algae also use chlorophyll, but various other pigments are present, such as phycocyanin, carotenes, and xanthophylls in green algae, phycoerythrin in red algae (rhodophytes) and fucoxanthin in brown algae and diatoms resulting in a wide variety of colors.
These pigments are embedded in plants and algae in complexes called antenna proteins. In such proteins, the pigments are arranged to work together. Such a combination of proteins is also called a light-harvesting complex.
Although all cells in the green parts of a plant have chloroplasts, the majority of those are found in specially adapted structures called leaves. Certain species adapted to conditions of strong sunlight and aridity, such as many Euphorbia and cactus species, have their main photosynthetic organs in their stems. The cells in the interior tissues of a leaf, called the mesophyll, can contain between 450,000 and 800,000 chloroplasts for every square millimeter of leaf. The surface of the leaf is coated with a water-resistant waxy cuticle that protects the leaf from excessive evaporation of water and decreases the absorption of ultraviolet or blue light to reduce heating. The transparent epidermis layer allows light to pass through to the palisade mesophyll cells where most of the photosynthesis takes place.
In the light-dependent reactions, one molecule of the pigment chlorophyll absorbs one photon and loses one electron. This electron is passed to a modified form of chlorophyll called pheophytin, which passes the electron to a quinone molecule, starting the flow of electrons down an electron transport chain that leads to the ultimate reduction of NADP to NADPH. In addition, this creates a proton gradient (energy gradient) across the chloroplast membrane, which is used by ATP synthase in the synthesis of ATP. The chlorophyll molecule ultimately regains the electron it lost when a water molecule is split in a process called photolysis, which releases a dioxygen (O2) molecule as a waste product.
The overall equation for the light-dependent reactions under the conditions of non-cyclic electron flow in green plants is:
- 2 H2O + 2 NADP+ + 3 ADP + 3 Pi + light → 2 NADPH + 2 H+ + 3 ATP + O2
Not all wavelengths of light can support photosynthesis. The photosynthetic action spectrum depends on the type of accessory pigments present. For example, in green plants, the action spectrum resembles the absorption spectrum for chlorophylls and carotenoids with peaks for violet-blue and red light. In red algae, the action spectrum is blue-green light, which allows these algae to use the blue end of the spectrum to grow in the deeper waters that filter out the longer wavelengths (red light) used by above ground green plants. The non-absorbed part of the light spectrum is what gives photosynthetic organisms their color (e.g., green plants, red algae, purple bacteria) and is the least effective for photosynthesis in the respective organisms.
In plants, light-dependent reactions occur in the thylakoid membranes of the chloroplasts where they drive the synthesis of ATP and NADPH. The light-dependent reactions are of two forms: cyclic and non-cyclic.
In the non-cyclic reaction, the photons are captured in the light-harvesting antenna complexes of photosystem II by chlorophyll and other accessory pigments (see diagram at right). The absorption of a photon by the antenna complex frees an electron by a process called photoinduced charge separation. The antenna system is at the core of the chlorophyll molecule of the photosystem II reaction center. That freed electron is transferred to the primary electron-acceptor molecule, pheophytin. As the electrons are shuttled through an electron transport chain (the so-called Z-scheme shown in the diagram), it initially functions to generate a chemiosmotic potential by pumping proton cations (H+) across the membrane and into the thylakoid space. An ATP synthase enzyme uses that chemiosmotic potential to make ATP during photophosphorylation, whereas NADPH is a product of the terminal redox reaction in the Z-scheme. The electron enters a chlorophyll molecule in Photosystem I. There it is further excited by the light absorbed by that photosystem. The electron is then passed along a chain of electron acceptors to which it transfers some of its energy. The energy delivered to the electron acceptors is used to move hydrogen ions across the thylakoid membrane into the lumen. The electron is eventually used to reduce the co-enzyme NADP with a H+ to NADPH (which has functions in the light-independent reaction); at that point, the path of that electron ends.
The cyclic reaction is similar to that of the non-cyclic, but differs in that it generates only ATP, and no reduced NADP (NADPH) is created. The cyclic reaction takes place only at photosystem I. Once the electron is displaced from the photosystem, the electron is passed down the electron acceptor molecules and returns to photosystem I, from where it was emitted, hence the name cyclic reaction.
The NADPH is the main reducing agent produced by chloroplasts, which then goes on to provide a source of energetic electrons in other cellular reactions. Its production leaves chlorophyll in photosystem I with a deficit of electrons (chlorophyll has been oxidized), which must be balanced by some other reducing agent that will supply the missing electron. The excited electrons lost from chlorophyll from photosystem I are supplied from the electron transport chain by plastocyanin. However, since photosystem II is the first step of the Z-scheme, an external source of electrons is required to reduce its oxidized chlorophyll a molecules. The source of electrons in green-plant and cyanobacterial photosynthesis is water. Two water molecules are oxidized by four successive charge-separation reactions by photosystem II to yield a molecule of diatomic oxygen and four hydrogen ions; the electrons yielded are transferred to a redox-active tyrosine residue that then reduces the oxidized chlorophyll a (called P680) that serves as the primary light-driven electron donor in the photosystem II reaction center. That photo receptor is in effect reset and is then able to repeat the absorption of another photon and the release of another photo-dissociated electron. The oxidation of water is catalyzed in photosystem II by a redox-active structure that contains four manganese ions and a calcium ion; this oxygen-evolving complex binds two water molecules and contains the four oxidizing equivalents that are used to drive the water-oxidizing reaction. Photosystem II is the only known biological enzyme that carries out this oxidation of water. The hydrogen ions released contribute to the transmembrane chemiosmotic potential that leads to ATP synthesis. Oxygen is a waste product of light-dependent reactions, but the majority of organisms on Earth use oxygen for cellular respiration, including photosynthetic organisms.
In the light-independent (or "dark") reactions, the enzyme RuBisCO captures CO2 from the atmosphere and, in a process called the Calvin-Benson cycle, it uses the newly formed NADPH and releases three-carbon sugars, which are later combined to form sucrose and starch. The overall equation for the light-independent reactions in green plants is:
- 3 CO2 + 9 ATP + 6 NADPH + 6 H+ → C3H6O3-phosphate + 9 ADP + 8 Pi + 6 NADP+ + 3 H2O
Carbon fixation produces the intermediate three-carbon sugar product, which is then converted to the final carbohydrate products. The simple carbon sugars produced by photosynthesis are then used in the forming of other organic compounds, such as the building material cellulose, the precursors for lipid and amino acid biosynthesis, or as a fuel in cellular respiration. The latter occurs not only in plants but also in animals when the energy from plants is passed through a food chain.
The fixation or reduction of carbon dioxide is a process in which carbon dioxide combines with a five-carbon sugar, ribulose 1,5-bisphosphate, to yield two molecules of a three-carbon compound, glycerate 3-phosphate, also known as 3-phosphoglycerate. Glycerate 3-phosphate, in the presence of ATP and NADPH produced during the light-dependent stages, is reduced to glyceraldehyde 3-phosphate. This product is also referred to as 3-phosphoglyceraldehyde (PGAL) or, more generically, as triose phosphate. Most (5 out of 6 molecules) of the glyceraldehyde 3-phosphate produced is used to regenerate ribulose 1,5-bisphosphate so the process can continue. The triose phosphates not thus "recycled" often condense to form hexose phosphates, which ultimately yield sucrose, starch and cellulose. The sugars produced during carbon metabolism yield carbon skeletons that can be used for other metabolic reactions like the production of amino acids and lipids.
Carbon concentrating mechanisms
In hot and dry conditions, plants close their stomata to prevent water loss. Under these conditions, CO2 will decrease and oxygen gas, produced by the light reactions of photosynthesis, will increase, causing an increase of photorespiration by the oxygenase activity of ribulose-1,5-bisphosphate carboxylase/oxygenase and decrease in carbon fixation. Some plants have evolved mechanisms to increase the CO2 concentration in the leaves under these conditions.
Plants that use the C4 carbon fixation process chemically fix carbon dioxide in the cells of the mesophyll by adding it to the three-carbon molecule phosphoenolpyruvate (PEP), a reaction catalyzed by an enzyme called PEP carboxylase, creating the four-carbon organic acid oxaloacetic acid. Oxaloacetic acid or malate synthesized by this process is then translocated to specialized bundle sheath cells where the enzyme RuBisCO and other Calvin cycle enzymes are located, and where CO2 released by decarboxylation of the four-carbon acids is then fixed by RuBisCO activity to the three-carbon 3-phosphoglyceric acids. The physical separation of RuBisCO from the oxygen-generating light reactions reduces photorespiration and increases CO2 fixation and, thus, the photosynthetic capacity of the leaf. C4 plants can produce more sugar than C3 plants in conditions of high light and temperature. Many important crop plants are C4 plants, including maize, sorghum, sugarcane, and millet. Plants that do not use PEP-carboxylase in carbon fixation are called C3 plants because the primary carboxylation reaction, catalyzed by RuBisCO, produces the three-carbon 3-phosphoglyceric acids directly in the Calvin-Benson cycle. Over 90% of plants use C3 carbon fixation, compared to 3% that use C4 carbon fixation; however, the evolution of C4 in over 60 plant lineages makes it a striking example of convergent evolution.
Xerophytes, such as cacti and most succulents, also use PEP carboxylase to capture carbon dioxide in a process called Crassulacean acid metabolism (CAM). In contrast to C4 metabolism, which physically separates the CO2 fixation to PEP from the Calvin cycle, CAM temporally separates these two processes. CAM plants have a different leaf anatomy from C3 plants, and fix the CO2 at night, when their stomata are open. CAM plants store the CO2 mostly in the form of malic acid via carboxylation of phosphoenolpyruvate to oxaloacetate, which is then reduced to malate. Decarboxylation of malate during the day releases CO2 inside the leaves, thus allowing carbon fixation to 3-phosphoglycerate by RuBisCO. Sixteen thousand species of plants use CAM.
Cyanobacteria possess carboxysomes, which increase the concentration of CO2 around RuBisCO to increase the rate of photosynthesis. An enzyme, carbonic anhydrase, located within the carboxysome releases CO2 from the dissolved hydrocarbonate ions (HCO3−). Before the CO2 diffuses out it is quickly sponged up by RuBisCO, which is concentrated within the carboxysomes. HCO3− ions are made from CO2 outside the cell by another carbonic anhydrase and are actively pumped into the cell by a membrane protein. They cannot cross the membrane as they are charged, and within the cytosol they turn back into CO2 very slowly without the help of carbonic anhydrase. This causes the HCO3− ions to accumulate within the cell from where they diffuse into the carboxysomes. Pyrenoids in algae and hornworts also act to concentrate CO2 around rubisco.
Order and kinetics
The overall process of photosynthesis takes place in four stages:
1. Energy transfer in antenna chlorophyll (thylakoid membranes): femtosecond to picosecond
2. Transfer of electrons in photochemical reactions (thylakoid membranes): picosecond to nanosecond
3. Electron transport chain and ATP synthesis (thylakoid membranes): microsecond to millisecond
4. Carbon fixation and export of stable products: millisecond to second
Plants usually convert light into chemical energy with a photosynthetic efficiency of 3–6%. Absorbed light that is unconverted is dissipated primarily as heat, with a small fraction (1-2%) re-emitted as chlorophyll fluorescence at longer (redder) wavelengths.
Actual plants' photosynthetic efficiency varies with the frequency of the light being converted, light intensity, temperature and proportion of carbon dioxide in the atmosphere, and can vary from 0.1% to 8%. By comparison, solar panels convert light into electric energy at an efficiency of approximately 6–20% for mass-produced panels, and above 40% in laboratory devices.
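A back-of-envelope bound consistent with those figures can be computed from textbook values (about 2870 kJ stored per mole of glucose, and roughly 48 photons of red light per glucose, neither stated in this article):

```python
# Rough theoretical ceiling on photosynthetic efficiency, to put the
# few-percent real-world figures above in context.
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
N_A = 6.022e23       # Avogadro's number

wavelength_m = 680e-9                       # red light absorbed by chlorophyll
photon_energy_j = H * C / wavelength_m
energy_per_mol_photons_kj = photon_energy_j * N_A / 1000.0   # ~176 kJ/mol

photons_per_glucose = 48                    # ~8 photons per CO2, times 6
energy_in_kj = photons_per_glucose * energy_per_mol_photons_kj
energy_stored_kj = 2870.0                   # combustion energy of glucose

print(f"Theoretical ceiling: {energy_stored_kj / energy_in_kj:.0%}")
# ~34%; reflection, photorespiration and metabolic losses pull the
# realized efficiency down to the few-percent range quoted above.
```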
Photosynthesis measurement systems are not designed to directly measure the amount of light absorbed by the leaf. But analysis of chlorophyll-fluorescence, P700- and P515-absorbance and gas exchange measurements reveal detailed information about e.g. the photosystems, quantum efficiency and the CO2 assimilation rates. With some instruments even wavelength-dependency of the photosynthetic efficiency can be analyzed.
A phenomenon known as quantum walk increases the efficiency of the energy transport of light significantly. In the photosynthetic cell of an algae, bacterium, or plant, there are light-sensitive molecules called chromophores arranged in an antenna-shaped structure named a photocomplex. When a photon is absorbed by a chromophore, it is converted into a quasiparticle referred to as an exciton, which jumps from chromophore to chromophore towards the reaction center of the photocomplex, a collection of molecules that traps its energy in a chemical form that makes it accessible for the cell's metabolism. The exciton's wave properties enable it to cover a wider area and try out several possible paths simultaneously, allowing it to instantaneously "choose" the most efficient route, where it will have the highest probability of arriving at its destination in the minimum possible time. Because that quantum walking takes place at temperatures far higher than quantum phenomena usually occur, it is only possible over very short distances, due to obstacles in the form of destructive interference that come into play. These obstacles cause the particle to lose its wave properties for an instant before it regains them once again after it is freed from its locked position through a classic "hop". The movement of the electron towards the photo center is therefore covered in a series of conventional hops and quantum walks.
Early photosynthetic systems, such as those in green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, and used various other molecules as electron donors rather than water. Green and purple sulfur bacteria are thought to have used hydrogen and sulfur as electron donors. Green nonsulfur bacteria used various amino and other organic acids as an electron donor. Purple nonsulfur bacteria used a variety of nonspecific organic molecules. The use of these molecules is consistent with the geological evidence that Earth's early atmosphere was highly reducing at that time.
The main source of oxygen in the Earth's atmosphere derives from oxygenic photosynthesis, and its first appearance is sometimes referred to as the oxygen catastrophe. Geological evidence suggests that oxygenic photosynthesis, such as that in cyanobacteria, became important during the Paleoproterozoic era around 2 billion years ago. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic. Oxygenic photosynthesis uses water as an electron donor, which is oxidized to molecular oxygen (O2) in the photosynthetic reaction center.
Symbiosis and the origin of chloroplasts
Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges and sea anemones. It is presumed that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks, such as Elysia viridis and Elysia chlorotica, also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies. This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins that they need to survive.
An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosomes, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts possess their own DNA, separate from the nuclear DNA of their plant host cells, and the genes in this chloroplast DNA resemble those found in cyanobacteria. DNA in chloroplasts codes for redox proteins such as those found in the photosynthetic reaction centers. The CoRR hypothesis proposes that this co-location of genes with the proteins they encode is required for redox regulation of gene expression.
Cyanobacteria and the evolution of photosynthesis
The biochemical capacity to use water as the source for electrons in photosynthesis evolved once, in a common ancestor of extant cyanobacteria. The geological record indicates that this transforming event took place early in Earth's history, at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. Because the Earth's atmosphere contained almost no oxygen during the estimated development of photosynthesis, it is believed that the first photosynthetic cyanobacteria did not generate oxygen. Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma, but the question of when oxygenic photosynthesis evolved is still unanswered. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of blue-green algae. Cyanobacteria remained the principal primary producers of oxygen throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined blue-green algae as the major primary producers of oxygen on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251–65 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did the primary production of oxygen in marine shelf waters take its modern form. Cyanobacteria remain critical to marine ecosystems as primary producers of oxygen in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae.
The Oriental hornet (Vespa orientalis) converts sunlight into electric power using a pigment called xanthopterin. This has been described as the first evidence of a member of the animal kingdom engaging in a form of photosynthesis.
Although some of the steps in photosynthesis are still not completely understood, the overall photosynthetic equation has been known since the 19th century.
Jan van Helmont began the research of the process in the mid-17th century when he carefully measured the mass of the soil used by a plant and the mass of the plant as it grew. After noticing that the soil mass changed very little, he hypothesized that the mass of the growing plant must come from the water, the only substance he added to the potted plant. His hypothesis was partially accurate: much of the gained mass comes from carbon dioxide as well as water. However, this was an early signpost to the idea that the bulk of a plant's biomass comes from the inputs of photosynthesis, not the soil itself.
Joseph Priestley, a chemist and minister, discovered that, when he isolated a volume of air under an inverted jar and burned a candle in it, the candle would burn out very quickly, long before it ran out of wax. He further discovered that a mouse could similarly "injure" air. He then showed that the air that had been "injured" by the candle and the mouse could be restored by a plant.
In 1778, Jan Ingenhousz repeated Priestley's experiments. He discovered that it was the influence of sunlight on the plant that could cause it to revive a mouse in a matter of hours.
In 1796, Jean Senebier, a Swiss pastor, botanist, and naturalist, demonstrated that green plants consume carbon dioxide and release oxygen under the influence of light. Soon afterward, Nicolas-Théodore de Saussure showed that the increase in mass of the plant as it grows could not be due only to uptake of CO2 but also to the incorporation of water. Thus, the basic reaction by which photosynthesis is used to produce food (such as glucose) was outlined.
Cornelis Van Niel made key discoveries explaining the chemistry of photosynthesis. By studying purple sulfur bacteria and green bacteria he was the first to demonstrate that photosynthesis is a light-dependent redox reaction, in which hydrogen reduces carbon dioxide.
Robert Emerson discovered two light reactions by testing plant productivity using different wavelengths of light. With red light alone, the light reactions were suppressed. When blue and red were combined, the output was much more substantial. Thus, there were two photosystems, one absorbing up to 600 nm wavelengths, the other up to 700 nm. The former is known as PSII, the latter is PSI. PSI contains only chlorophyll a; PSII contains primarily chlorophyll a, with most of the available chlorophyll b, among other pigments. These include phycobilins, which are the red and blue pigments of red and blue algae, respectively, and fucoxanthol for brown algae and diatoms. The process is most productive when the absorption of quanta is equal in both PSII and PSI, assuring that input energy from the antenna complex is divided between the PSI and PSII systems, which in turn powers the photochemistry.
Robert Hill proposed that a complex of reactions makes up the photosynthetic electron transport chain: one step runs through an intermediate to cytochrome b6 (now known to involve plastoquinone), and another runs from cytochrome f to a step in the carbohydrate-generating mechanisms; the two are linked by plastoquinone, which requires energy to reduce cytochrome f because it is a sufficient reductant. Further experiments to prove that the oxygen developed during the photosynthesis of green plants came from water were performed by Hill in 1937 and 1939. He showed that isolated chloroplasts give off oxygen in the presence of unnatural reducing agents like iron oxalate, ferricyanide or benzoquinone after exposure to light. The Hill reaction is as follows:
- 2 H2O + 2 A → (in light, with chloroplasts) 2 AH2 + O2
where A is the electron acceptor. Therefore, in light, the electron acceptor is reduced and oxygen is evolved.
Melvin Calvin and Andrew Benson, along with James Bassham, elucidated the path of carbon assimilation (the photosynthetic carbon reduction cycle) in plants. The carbon reduction cycle is often called the Calvin cycle, a name that overlooks the contributions of Bassham and Benson. Many scientists refer to the cycle as the Calvin-Benson cycle or Benson-Calvin cycle, and some even call it the Calvin-Benson-Bassham (or CBB) cycle.
Louis N.M. Duysens and Jan Amesz discovered that light absorbed by chlorophyll a at one wavelength oxidizes cytochrome f, while light absorbed by chlorophyll a (and other pigments) at another wavelength reduces the same oxidized cytochrome, demonstrating that the two light reactions operate in series.
Development of the concept
In 1893, Charles Reid Barnes proposed two terms, photosyntax and photosynthesis, for the biological process of synthesis of complex carbon compounds out of carbonic acid, in the presence of chlorophyll, under the influence of light. Over time, the term photosynthesis came into common usage as the term of choice. Later discovery of anoxygenic photosynthetic bacteria and photophosphorylation necessitated redefinition of the term.
There are three main factors affecting photosynthesis, along with several corollary factors. The three main factors are:
Light intensity (irradiance), wavelength and temperature
The process of photosynthesis provides the main input of free energy into the biosphere, and is one of four main ways in which radiation is important for plant life.
The radiation climate within plant communities is extremely variable, in both time and space.
- At constant temperature, the rate of carbon assimilation varies with irradiance, increasing as the irradiance increases, but reaching a plateau at higher irradiance.
- At low irradiance, increasing the temperature has little influence on the rate of carbon assimilation. At constant high irradiance, the rate of carbon assimilation increases as the temperature is increased.
These two experiments illustrate several important points: First, it is known that, in general, photochemical reactions are not affected by temperature. However, these experiments clearly show that temperature affects the rate of carbon assimilation, so there must be two sets of reactions in the full process of carbon assimilation. These are, of course, the light-dependent 'photochemical' temperature-independent stage, and the light-independent, temperature-dependent stage. Second, Blackman's experiments illustrate the concept of limiting factors. Another limiting factor is the wavelength of light. Cyanobacteria, which reside several meters underwater, cannot receive the correct wavelengths required to cause photoinduced charge separation in conventional photosynthetic pigments. To combat this problem, a series of proteins with different pigments surrounds the reaction center; this light-harvesting unit, called a phycobilisome, absorbs wavelengths that the reaction-center pigments alone cannot.
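Blackman's limiting-factor idea is easy to express in code. The sketch below is purely illustrative: the functional forms, parameter names, and numbers are invented for the example, not taken from measurements.

def assimilation_rate(irradiance, temperature_c):
    """Toy Blackman-style model: the observed rate is capped by the
    slower of a light-limited step and a temperature-limited step."""
    alpha = 0.05   # assumed light-use efficiency (slope of the light-limited step)
    v25 = 10.0     # assumed capacity of the temperature-dependent step at 25 C
    q10 = 2.0      # assumed doubling of enzymatic rate per 10 C rise
    light_limited = alpha * irradiance
    temp_limited = v25 * q10 ** ((temperature_c - 25.0) / 10.0)
    return min(light_limited, temp_limited)

# At low irradiance the light-limited step is slower, so warming barely matters;
# at high irradiance the temperature-limited step caps the rate, so warming helps:
print(assimilation_rate(50, 15), assimilation_rate(50, 30))      # 2.5 and 2.5
print(assimilation_rate(1000, 15), assimilation_rate(1000, 30))  # 5.0 and about 14.1

This reproduces the two observations above: the rate rises with irradiance until it plateaus, and temperature only matters once light is no longer the limiting factor.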
Carbon dioxide levels and photorespiration
As carbon dioxide concentrations rise, the rate at which sugars are made by the light-independent reactions increases until limited by other factors. RuBisCO, the enzyme that captures carbon dioxide in the light-independent reactions, has a binding affinity for both carbon dioxide and oxygen. When the concentration of carbon dioxide is high, RuBisCO will fix carbon dioxide. However, if the carbon dioxide concentration is low, RuBisCO will bind oxygen instead of carbon dioxide. This process, called photorespiration, uses energy, but does not produce sugars.
RuBisCO oxygenase activity is disadvantageous to plants for several reasons:
- One product of oxygenase activity is phosphoglycolate (2 carbons) instead of 3-phosphoglycerate (3 carbons). Phosphoglycolate cannot be metabolized by the Calvin-Benson cycle and represents carbon lost from the cycle. A high oxygenase activity, therefore, drains the sugars that are required to recycle ribulose 1,5-bisphosphate and for the continuation of the Calvin-Benson cycle.
- Phosphoglycolate is quickly metabolized to glycolate, which is toxic to a plant at high concentration; it inhibits photosynthesis.
- Salvaging glycolate is an energetically expensive process that uses the glycolate pathway, and only 75% of the carbon is returned to the Calvin-Benson cycle as 3-phosphoglycerate. The reactions also produce ammonia (NH3), which is able to diffuse out of the plant, leading to a loss of nitrogen.
- A highly simplified summary is:
- 2 glycolate + ATP → 3-phosphoglycerate + carbon dioxide + ADP + NH3
The salvaging pathway for the products of RuBisCO oxygenase activity is more commonly known as photorespiration, since it is characterized by light-dependent oxygen consumption and the release of carbon dioxide.
- Jan Anderson (scientist)
- Artificial photosynthesis
- Calvin-Benson cycle
- Carbon fixation
- Cellular respiration
- Integrated fluorometer
- Light-dependent reaction
- Organic reaction
- Photosynthetic reaction center
- Photosynthetically active radiation
- Photosystem I
- Photosystem II
- Quantum biology
- Red edge
- Vitamin D
- Hill reaction
- "photosynthesis". Online Etymology Dictionary.
- φῶς. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project
- σύνθεσις. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project
- Bryant DA, Frigaard NU (Nov 2006). "Prokaryotic photosynthesis and phototrophy illuminated". Trends in Microbiology 14 (11): 488–96. doi:10.1016/j.tim.2006.09.001. PMID 16997562.
- Reece J, Urry L, Cain M, Wasserman S, Minorsky P, Jackson R. Biology (International ed.). Upper Saddle River, NJ: Pearson Education. pp. 235, 244. ISBN 0-321-73975-2.
This initial incorporation of carbon into organic compounds is known as carbon fixation.
- Olson JM (May 2006). "Photosynthesis in the Archean era". Photosynthesis Research 88 (2): 109–17. doi:10.1007/s11120-006-9040-5. PMID 16453059.
- Buick R (Aug 2008). "When did oxygenic photosynthesis evolve?". Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 363 (1504): 2731–43. doi:10.1098/rstb.2008.0041. PMC 2606769. PMID 18468984.
- Nealson KH, Conrad PG (Dec 1999). "Life: past, present and future". Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 354 (1392): 1923–39. doi:10.1098/rstb.1999.0532. PMC 1692713. PMID 10670014.
- Whitmarsh J, Govindjee (1999). "The photosynthetic process". In Singhal GS, Renger G, Sopory SK, Irrgang KD, Govindjee. Concepts in photobiology: photosynthesis and photomorphogenesis. Boston: Kluwer Academic Publishers. pp. 11–51. ISBN 0-7923-5519-9.
100 x 10^15 grams of carbon/year fixed by photosynthetic organisms, which is equivalent to 4 x 10^18 kJ/yr = 4 x 10^21 J/yr of free energy stored as reduced carbon; (4 x 10^21 J/yr) / (31,556,900 sec/yr) = 1.27 x 10^14 J/sec; (1.27 x 10^14 J/sec) / (10^12 J/sec per TW) = 127 TW.
- Steger U, Achterberg W, Blok K, Bode H, Frenz W, Gather C, Hanekamp G, Imboden D, Jahnke M, Kost M, Kurz R, Nutzinger HG, Ziesemer T (2005). Sustainable development and innovation in the energy sector. Berlin: Springer. p. 32. ISBN 3-540-23103-X.
The average global rate of photosynthesis is 130 TW (1 TW = 1 terawatt = 10^12 watt).
- "World Consumption of Primary Energy by Energy Type and Selected Country Groups, 1980–2004" (XLS). Energy Information Administration. July 31, 2006. Retrieved 2007-01-20.
- Field CB, Behrenfeld MJ, Randerson JT, Falkowski P (Jul 1998). "Primary production of the biosphere: integrating terrestrial and oceanic components". Science 281 (5374): 237–40. Bibcode:1998Sci...281..237F. doi:10.1126/science.281.5374.237. PMID 9657713.
- "Photosynthesis". McGraw-Hill Encyclopedia of Science & Technology 13. New York: McGraw-Hill. 2007. ISBN 0-07-144143-3.
- Whitmarsh J, Govindjee (1999). "Chapter 2: The Basic Photosynthetic Process". In Singhal GS, Renger G, Sopory SK, Irrgang KD, Govindjee. Concepts in Photobiology: Photosynthesis and Photomorphogenesis. Boston: Kluwer Academic Publishers. p. 13. ISBN 978-0792355199.
- Anaerobic Photosynthesis, Chemical & Engineering News, 86, 33, August 18, 2008, p. 36
- Kulp TR, Hoeft SE, Asao M, Madigan MT, Hollibaugh JT, Fisher JC, Stolz JF, Culbertson CW, Miller LG, Oremland RS (Aug 2008). "Arsenic(III) fuels anoxygenic photosynthesis in hot spring biofilms from Mono Lake, California". Science 321 (5891): 967–70. Bibcode:2008Sci...321..967K. doi:10.1126/science.1160799. PMID 18703741.
- "Scientists discover unique microbe in California's largest lake". Retrieved 2009-07-20.
- Plants: Diversity and Evolution, page 14, Martin Ingrouille, Bill Eddie
- Evolution of Photosynthesis
- Tavano CL, Donohue TJ (Dec 2006). "Development of the bacterial photosynthetic apparatus". Current Opinion in Microbiology 9 (6): 625–31. doi:10.1016/j.mib.2006.10.005. PMC 2765710. PMID 17055774.
- Mullineaux CW (1999). "The thylakoid membranes of cyanobacteria: structure, dynamics and function". Australian Journal of Plant Physiology 26 (7): 671–677. doi:10.1071/PP99027.
- Sener MK, Olsen JD, Hunter CN, Schulten K (Oct 2007). "Atomic-level structural and functional model of a bacterial photosynthetic membrane vesicle". Proceedings of the National Academy of Sciences of the United States of America 104 (40): 15723–8. Bibcode:2007PNAS..10415723S. doi:10.1073/pnas.0706861104. PMC 2000399. PMID 17895378.
- Campbell NA, Williamson B, Heyden RJ (2006). Biology Exploring Life. Upper Saddle River, NJ: Pearson Prentice Hall. ISBN 0-13-250882-6.
- Raven PH, Evert RF, Eichhorn SE (2005). Biology of Plants, (7th ed.). New York: W.H. Freeman and Company Publishers. pp. 124–127. ISBN 0-7167-1007-2.
- "Yachandra Group Home page".
- Pushkar Y, Yano J, Sauer K, Boussac A, Yachandra VK (Feb 2008). "Structural changes in the Mn4Ca cluster and the mechanism of photosynthetic water splitting". Proceedings of the National Academy of Sciences of the United States of America 105 (6): 1879–84. Bibcode:2008PNAS..105.1879P. doi:10.1073/pnas.0707092105. PMC 2542863. PMID 18250316.
- Williams BP, Johnston IG, Covshoff S, Hibberd JM (September 2013). "Phenotypic landscape inference reveals multiple evolutionary paths to C4 photosynthesis". eLife 2: e00961. doi:10.7554/eLife.00961. PMID 24082995.
- L. Taiz, E. Zeiger (2006). Plant Physiology (4 ed.). Sinauer Associates. ISBN 978-0-87893-856-8.
- Monson RK, Sage RF (1999). "16". C₄ plant biology. Boston: Academic Press. pp. 551–580. ISBN 0-12-614440-0.
- Dodd AN, Borland AM, Haslam RP, Griffiths H, Maxwell K (Apr 2002). "Crassulacean acid metabolism: plastic, fantastic". Journal of Experimental Botany 53 (369): 569–80. doi:10.1093/jexbot/53.369.569. PMID 11886877.
- Badger MR, Price GD (Feb 2003). "CO2 concentrating mechanisms in cyanobacteria: molecular components, their diversity and evolution". Journal of Experimental Botany 54 (383): 609–22. doi:10.1093/jxb/erg076. PMID 12554704.
- Badger MR, Andrews JT, Whitney SM, Ludwig M, Yellowlees DC, Leggat W, Price GD (1998). "The diversity and coevolution of Rubisco, plastids, pyrenoids, and chloroplast-based CO2-concentrating mechanisms in algae". Canadian Journal of Botany 76 (6): 1052–1071. doi:10.1139/b98-074. ISSN 0008-4026.
- Miyamoto K. "Chapter 1 – Biological energy production". Renewable biological systems for alternative sustainable energy production (FAO Agricultural Services Bulletin – 128). Food and Agriculture Organization of the United Nations. Retrieved 2009-01-04.
- Maxwell K, Johnson GN (Apr 2000). "Chlorophyll fluorescence--a practical guide". Journal of Experimental Botany 51 (345): 659–68. doi:10.1093/jexbot/51.345.659. PMID 10938857.
- Govindjee R. "What is Photosynthesis". Biology at Illinois.
- Schreiber U, Klughammer C, Kolbowski J (2012). "Assessment of wavelength-dependent parameters of photosynthetic electron transport with a new type of multi-color PAM chlorophyll fluorometer". Photosynthesis research 113 (1-3): 127–144. doi:10.1007/s11120-012-9758-1.
- Palmer J (21 June 2013). "Plants 'seen doing quantum physics'". BBC News.
- Lloyd S (10 March 2014). "Quantum Biology: Better Living Through Quantum Mechanics - The Nature of Reality". Nova: PBS Online, WGBH Boston.
- Hildner R, Brinks D, Nieder JB, Cogdell RJ, van Hulst NF (Jun 2013). "Quantum coherent energy transfer over varying pathways in single light-harvesting complexes". Science 340 (6139): 1448–51. doi:10.1126/science.1235820. PMID 23788794.
- Photosynthesis got a really early start, New Scientist, 2 October 2004
- Revealing the dawn of photosynthesis, New Scientist, 19 August 2006
- Venn AA, Loram JE, Douglas AE (2008). "Photosynthetic symbioses in animals". Journal of Experimental Botany 59 (5): 1069–80. doi:10.1093/jxb/erm328. PMID 18267943.
- Rumpho ME, Summer EJ, Manhart JR (May 2000). "Solar-powered sea slugs. Mollusc/algal chloroplast symbiosis". Plant Physiology 123 (1): 29–38. doi:10.1104/pp.123.1.29. PMC 1539252. PMID 10806222.
- Muscatine L, Greene RW (1973). "Chloroplasts and algae as symbionts in molluscs". International Review of Cytology. International Review of Cytology 36: 137–69. doi:10.1016/S0074-7696(08)60217-X. ISBN 9780123643360. PMID 4587388.
- Rumpho ME, Worful JM, Lee J, Kannan K, Tyler MS, Bhattacharya D, Moustafa A, Manhart JR (Nov 2008). "Horizontal gene transfer of the algal nuclear gene psbO to the photosynthetic sea slug Elysia chlorotica". Proceedings of the National Academy of Sciences of the United States of America 105 (46): 17867–71. Bibcode:2008PNAS..10517867R. doi:10.1073/pnas.0804968105. PMC 2584685. PMID 19004808.
- Douglas SE (Dec 1998). "Plastid evolution: origins, diversity, trends". Current Opinion in Genetics & Development 8 (6): 655–61. doi:10.1016/S0959-437X(98)80033-6. PMID 9914199.
- Reyes-Prieto A, Weber AP, Bhattacharya D (2007). "The origin and establishment of the plastid in algae and plants". Annual Review of Genetics 41: 147–68. doi:10.1146/annurev.genet.41.110306.130134. PMID 17600460.
- Raven JA, Allen JF (2003). "Genomics and chloroplast evolution: what did cyanobacteria do for plants?". Genome Biology 4 (3): 209. doi:10.1186/gb-2003-4-3-209. PMC 153454. PMID 12620099.
- Tomitani A, Knoll AH, Cavanaugh CM, Ohno T (Apr 2006). "The evolutionary diversification of cyanobacteria: molecular-phylogenetic and paleontological perspectives". Proceedings of the National Academy of Sciences of the United States of America 103 (14): 5442–7. doi:10.1073/pnas.0600999103. PMC 1459374. PMID 16569695.
- "Cyanobacteria: Fossil Record". Ucmp.berkeley.edu. Retrieved 2010-08-26.
- Smith, Alison (2010). Plant biology. New York, NY: Garland Science. p. 5. ISBN 0815340257.
- Herrero A, Flores E (2008). The Cyanobacteria: Molecular Biology, Genomics and Evolution (1st ed.). Caister Academic Press. ISBN 978-1-904455-15-8.
- Plotkin M, Hod I, Zaban A, Boden SA, Bagnall DM, Galushko D, Bergman DJ (Dec 2010). "Solar energy harvesting in the epicuticle of the oriental hornet (Vespa orientalis)". Die Naturwissenschaften 97 (12): 1067–76. doi:10.1007/s00114-010-0728-1. PMID 21052618.
- Walker DA (2002). "'And whose bright presence' - an appreciation of Robert Hill and his reaction" (PDF). Photosynthesis Research 73 (1-3): 51–4. doi:10.1023/A:1020479620680. PMID 16245102.
- Otto Warburg – Biography. Nobelprize.org (1970-08-01). Retrieved on 2011-11-03.
- Gest, Howard (2002). "History of the word photosynthesis and evolution of its definition.". Photosynthesis Research 73 (1-3): 7–10. doi:10.1023/A:1020419417954.
- Jones, H.G. 1992. Plants and Microclimate: A Quantitative Approach to Environmental Plant Physiology. Cambridge Univ. Press, Cambridge, U.K. 428 p.
- Bidlack JE, Stern KR, Jansky S (2003). Introductory plant biology. New York: McGraw-Hill. ISBN 0-07-290941-2.
- Blankenship RE (2014). Molecular Mechanisms of Photosynthesis (2nd ed.). John Wiley & Sons. ISBN 978-1-4051-8975-0.
- Govindjee, Beatty JT, Gest H, Allen JF (2006). Discoveries in Photosynthesis. Advances in Photosynthesis and Respiration 20. Berlin: Springer. ISBN 1-4020-3323-0.
- Reece JB, et al. (2013). Campbell Biology. Benjamin Cummings. ISBN 978-0321775658.
- Gupta RS, Mukhtar T, Singh B (Jun 1999). "Evolutionary relationships among photosynthetic prokaryotes (Heliobacterium chlorum, Chloroflexus aurantiacus, cyanobacteria, Chlorobium tepidum and proteobacteria): implications regarding the origin of photosynthesis". Molecular Microbiology 32 (5): 893–906. doi:10.1046/j.1365-2958.1999.01417.x. PMID 10361294.
- Rutherford AW, Faller P (Jan 2003). "Photosystem II: evolutionary perspectives". Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 358 (1429): 245–53. doi:10.1098/rstb.2002.1186. PMC 1693113. PMID 12594932.
- A collection of photosynthesis pages for all levels from a renowned expert (Govindjee)
- In depth, advanced treatment of photosynthesis, also from Govindjee
- Science Aid: Photosynthesis Article appropriate for high school science
- Metabolism, Cellular Respiration and Photosynthesis – The Virtual Library of Biochemistry and Cell Biology
- Overall examination of Photosynthesis at an intermediate level
- Overall Energetics of Photosynthesis
- Photosynthesis Discovery Milestones – experiments and background
- The source of oxygen produced by photosynthesis Interactive animation, a textbook tutorial
- Jessica Marshall (2011-03-29). "First practical artificial leaf makes debut". Discovery News.
- Photosynthesis – Light Dependent & Light Independent Stages
- Khan Academy, video introduction
| https://en.wikipedia.org/wiki/Photosynthesis
4.125 | As with other geologic periods, the rock beds that define the period's start and end are well identified, but the exact dates are uncertain by several million years. The base of the Silurian is set at a major extinction, the Ordovician-Silurian extinction event, in which 60% of marine species were wiped out.
First terrestrial biota
The Silurian was the first period to see macrofossils of biota on land, in the form of moss forests along lakes and streams, as well as millipedes and scorpions colonizing the land later in the period. The fossil record of sea scorpions reached its greatest extent in the middle Silurian, about 430 million years ago.
The first fossil records of vascular plants, that is, land plants with tissues that carry food, appeared in the second half of the Silurian period. The earliest known representatives of this group are Cooksonia (mostly from the northern hemisphere) and Baragwanathia (from Australia). A primitive Silurian land plant with xylem and phloem but no differentiation in root, stem or leaf was the much-branched Psilophyton. This plant reproduced by spores and respired through stomata on every surface, and probably photosynthesized in every tissue exposed to light.
Some evidence suggests the presence of primitive predatory arachnids and myriapods in Late Silurian rocks. Predatory invertebrates would indicate that simple food webs were in place that included non-predatory prey animals.
References
- Andrew J. Jeram, Paul A. Selden and Dianne Edwards 1990. Land animals in the Silurian: Arachnids and Myriapods from Shropshire, England. Science 658-61.
- Anna K. Behrensmeyer, John D. Damuth et al. 1992. Terrestrial ecosystems through time. University of Chicago Press.
|Precambrian (4.567 gya – 541 mya)|
|Eons are listed first, followed by their Eras and Periods. gya = billion years ago, mya = million years ago|
|Hadean (4.567 gya – 4 gya): Chaotian, Zirconian|
|Archaean (4 gya – 2.5 gya): Eoarchaean (4 gya – 3.6 gya)|
|Proterozoic (2.5 gya – 541 mya): Palaeoproterozoic (2.5 gya – 1.6 gya), comprising the Siderian (2.5 gya – 2.3 gya), Rhyacian (2.3 gya – 2.05 gya), Orosirian (2.05 gya – 1.8 gya) and Statherian (1.8 gya – 1.6 gya)|
|Phanerozoic (541 mya – today)|
|Eras are listed first, followed by their Periods and Epochs. kya = thousand years ago, mya = million years ago|
|Palaeozoic (541 mya – 252.17 mya): Cambrian (541 mya – 485.4 mya), Silurian (443.4 mya – 419.2 mya)|
|Mesozoic (252.17 mya – 66.0 mya): Triassic (252.17 mya – 201.3 mya), comprising the Lower Triassic (252.17 mya – 247.2 mya), Middle Triassic (247.2 mya – 237 mya) and Upper Triassic (237 mya – 201.3 mya)|
|Cainozoic (66.0 mya – today): Palaeogene (66.0 mya – 23.03 mya), comprising the Palaeocene (66.0 mya – 56 mya), Eocene (56 mya – 33.9 mya) and Oligocene (33.9 mya – 23.03 mya)| | https://simple.wikipedia.org/wiki/Silurian
4.21875 | Cholera is a serious infection of the intestine that is caused by the bacterium Vibrio cholerae and that causes severe diarrhea.
People are infected when they consume contaminated food, often seafood, or water.
Cholera is rare except in areas where sanitation is inadequate.
People have watery diarrhea and vomit, usually with no fever.
Identifying the bacteria in a stool sample confirms the diagnosis.
Replacing lost fluids and giving antibiotics treat the infection effectively.
Several species of Vibrio bacteria cause diarrhea (see Table: Microorganisms That Cause Gastroenteritis). The most serious illness, cholera, is caused by Vibrio cholerae. Cholera may occur in large outbreaks.
Vibrio cholerae normally lives in aquatic environments along the coast. People acquire the infection by consuming contaminated water, seafood, or other foods. Once infected, people excrete the bacteria in stool. Thus, the infection can spread rapidly, particularly in areas where human waste is untreated.
Once common throughout the world, cholera is now largely confined to developing countries in the tropics and subtropics. It is common (endemic) in parts of Asia, the Middle East, Africa, and South and Central America. Small outbreaks have occurred in Europe, Japan, and Australia. In the United States, cholera can occur along the coast of the Gulf of Mexico.
In endemic areas, outbreaks usually occur when war or civil unrest disrupts public sanitation services. Infection is most common during warm months and among children. In newly affected areas, outbreaks may occur during any season and affect all ages equally.
For infection to develop, many bacteria must be consumed. Then, there may be too many for stomach acid to kill, and some bacteria can reach the small intestine, where they grow and produce a toxin. The toxin causes the small intestine to secrete enormous amounts of salt and water. The body loses this fluid as watery diarrhea. It is the loss of water and salt that causes death. The bacteria remain in the small intestine and do not invade tissues.
Because stomach acid kills the bacteria, people who produce less stomach acid are more likely to get cholera. Such people include young children, older people, and people taking drugs that reduce stomach acid.
People living in endemic areas gradually acquire some immunity.
Most infected people have no symptoms. When cholera symptoms occur, they begin 1 to 3 days after exposure, usually with sudden, painless, watery diarrhea and vomiting. Usually, fever is absent.
Diarrhea and vomiting may be mild to severe. In severe infections, more than 1 quart of water and salts is lost per hour. The stool looks gray and has flecks of mucus in it. Within hours, dehydration can become severe, causing intense thirst, muscle cramps, and weakness. Very little urine is produced. The eyes may become sunken, and the skin on the fingers may become very wrinkled. If dehydration is not treated, loss of water and salts can lead to kidney failure, shock, coma, and death.
In people who survive, cholera symptoms usually subside in 3 to 6 days. Most people are free of the bacteria in 2 weeks. The bacteria remain in a few people indefinitely without causing symptoms. Such people are called carriers.
Doctors take a sample of stool or use a swab to obtain a sample from the rectum. It is sent to a laboratory where cholera bacteria can be grown (cultured). Identifying Vibrio cholerae in the sample confirms the diagnosis.
Blood and urine tests to evaluate dehydration and kidney function are done.
The following are essential to cholera prevention: purification of water supplies and the proper disposal of human waste.
Other precautions include using only boiled or treated water and avoiding raw vegetables and raw or undercooked seafood; shellfish tend to carry other forms of Vibrio as well.
Rapid replacement of lost body water and salts is lifesaving. Most people can be treated effectively with a solution given by mouth. These solutions are designed to replace the fluids the body has lost. For severely dehydrated people who cannot drink, a salt solution is given intravenously. In epidemics, if the intravenous solution is not available, people are sometimes given a salt solution through a tube inserted through the nose into the stomach. After enough fluids are replaced to relieve symptoms, people should drink at least enough of the salt solution to replace the fluids they have lost through diarrhea and vomiting. People are also encouraged to drink as much water as they want. Solid foods can be eaten after vomiting stops and appetite returns.
An antibiotic is usually given to reduce the severity of diarrhea and make it stop sooner. Also, people who take an antibiotic are slightly less likely to spread the infection during an outbreak. Antibiotics that may be used include doxycycline, azithromycin, trimethoprim/sulfamethoxazole, and ciprofloxacin. Doctors choose antibiotics that are known to be effective against the bacteria causing cholera in the local community. Because doxycycline discolors the teeth in children under 8 years old, azithromycin or trimethoprim-sulfamethoxazole is used instead. These antibiotics are taken by mouth.
More than 50% of untreated people with severe cholera die. Fewer than 1% of people who receive prompt, adequate fluid replacement die.
Generic name: trimethoprim (no US brand name)
| http://www.merckmanuals.com/home/infections/bacterial-infections/cholera
4.25 | A polygon is any closed figure with sides made from straight lines. At each vertex of a polygon, there is both an interior and exterior angle, corresponding to the angles on the inside and outside of the closed figure. Understanding the relationships that govern these angles is useful in various geometrical problems. In particular, it is helpful to know how to calculate the sum of interior angles in a polygon. This can be done using a simple formula.
1. Count the number of sides your polygon has. The method for calculating the sum of interior angles is based on how many sides the polygon has. Remember that a polygon must have at least 3 sides (a triangle), and each side must be a straight line.
2. Subtract 2 from the number of sides. For example, subtracting 2 from a triangle's 3 sides gives you the number 1. Subtracting 2 from a pentagon (which has 5 sides) gives you the number 3. Subtracting 2 from a hexagon (which has 6 sides) gives you the number 4.
3. Multiply this number by 180. Multiply the number arrived at in the previous step by 180. This will yield the sum of the polygon's interior angles, expressed in degrees. For example, consider a hexagon. Subtracting 2 from a hexagon's 6 sides yields 4. Multiplying 4 by 180 yields 720. Therefore, a hexagon (regular or irregular) has interior angles that add up to 720 degrees.
4. Review the formula used to calculate this sum. Building a formula from the steps above yields: s = 180(n - 2), where "s" is the sum of the interior angles and "n" is the polygon's number of sides. This formula can be used for a polygon with any number of sides. It does not matter whether the polygon is regular or irregular, nor does it matter what the individual interior angles measure. Given a certain number of sides, a polygon's interior angles can always be summed using the formula above (a short code check of the formula appears after the derivation below).
5. Derive the formula for further understanding. If you forget the formula above, or simply want to understand why it works, you can derive it fairly easily. This is done by segmenting any polygon into triangles.
- Remember that a triangle's 3 interior angles will add up to 180 degrees, no matter what the shape of the triangle. This is the premise upon which the formula above can be built.
- Draw a polygon on a sheet of paper. For example, consider a square, which is a 4-sided polygon. You can split the square up into 2 triangles by drawing a line connecting 2 of its opposite corners. Note that the interior angles of each triangle can be added to equal the interior angles of the original polygon.
- Once you have split your polygon up into separate triangles, count the number of triangles in the polygon. Multiplying this number by 180 will yield the sum of the polygon's interior angles. The formula presented above makes use of this method.
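The formula is simple enough to verify in a few lines of code. Here is a minimal Python sketch (the function name is our own choice):

def interior_angle_sum(n):
    """Return the sum, in degrees, of the interior angles of an n-sided polygon."""
    if n < 3:
        raise ValueError("A polygon must have at least 3 sides")
    return 180 * (n - 2)

print(interior_angle_sum(3))  # 180 (triangle)
print(interior_angle_sum(6))  # 720 (hexagon, matching the example above)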
- To check your work on a piece of paper, you can use a protractor to sum the interior angles manually. When doing this, be careful to draw the polygon's sides very straight.
Things You'll Need
- Protractor (optional)
| http://www.wikihow.com/Calculate-the-Sum-of-Interior-Angles
4.0625 | From Earth Science On-Site
Park Hall Country Park, Stoke-on-Trent
KEY STAGE 3 SITE B
© GeoconservationUK ESO-S Project, 2016
It is anticipated that the ideas and materials presented here will be adapted by schools, and others, to be more appropriate for their own purposes and programmes of study.
In such circumstances please acknowledge the source as the Earth Science On-Site project.
HULME QUARRIES: “PLAY CANYON”
Locality B: Quarry face at the south east margin of the quarry (20 minutes)
Figure 1: The Quarry Face From Site B
|Take the group around the track to the SE corner of the quarry and take the track below the eastern face to Site B. Stand on the track below the face, looking East and North at the quarry face. (See Figure 1 left)
(words in brackets indicate need or opportunity for further teaching)
|Q1 The rocks in this quarry face show layers. How many layers can you see here?
||At least 4 layers
|Q2 Describe the layering in the rocks in the quarry face.
||Horizontal, varied thicknesses/some thin, some thick beds
(The layers are called beds; layers are separated by bedding planes)
|Q3 Which of the main groups of rocks occurs in this quarry? (Igneous, sedimentary or metamorphic).
|Q4 How can you tell? (Evidence - may need reminder)
Was your prediction correct?
|Accumulation of grains/pebbles|
|Q5 Observe the rocks in the different beds.
What differences do you see?
|Weather/break up in different ways|
Some beds contain more pebbles
Rocks show different colours
Two different rock types
(The two sedimentary rocks are sandstone and conglomerate)
|Q6 Would these beds have been formed as horizontal beds?
(Principle of Original Horizontality)
(The beds appear to be horizontal but they are tilted to the east by about 5 degrees. Demonstrate by holding up a tilted book and ask pupils to look at the appearance of the book from different directions)
|Q7 Now relate what we saw in the class demonstration to the rocks in the quarry face. Which beds were formed first (or laid down first) and are therefore the oldest in this quarry?
||Lower beds formed first
(Principle of Superposition)
|Q8 Again relate to what we saw in the class demonstration to the layers in the quarry.
How could these layers have been deposited?
|Deposited by water.
|Q9 Can you see any cracks across the layers?
(The cracks are called joints or fractures)
|Q10 Can you suggest how the fractures might have formed?
||Most fractures probably formed during earth movements (uplift)
(Take other suggestions e.g. quarry blasting)
|T1 Estimate the height of the quarry here (Clue: the average height of a teacher is 1.7m)
||Estimated height = 3.5 to 4 metres
|T2 Students asked to complete and label the field sketch
||A completed worksheet is shown at the end of this document.
n.b. Sketch of quarry to show bed, bedding plane, joint, sandstone, conglomerate, oldest bed | http://www.ukrigs.org.uk/esos/wiki/index.php5?title=PH/KS3/Ex2 |
4.40625 | Keep motivation intrinsic
Young children are generally motivated to learn about everything. Unless they have been made fun of regularly, when investigating or presenting their knowledge, they usually have a strong desire to find out and share information.
One of the best teaching methods is to motivate children by modeling enthusiasm and curiosity. Motivation comes from within (intrinsic) and from outside (extrinsic). Making too much fuss of any one child can result in a competitive attitude in the class. Model curiosity and asking questions about the topics studied.
Reinforce thinking processes rather than praising the child. Try, “That is an interesting way that you sorted your blocks. Tell me what you were thinking.” Then, “Sarah sorted her blocks in a different way. Both ways of sorting are interesting.”
Have children describe or share their new knowledge regularly
When children have an opportunity to communicate their new knowledge to patient adults it helps solidify concepts. It often takes children time to find the correct words to explain their thinking.
- Supply the students with descriptive words as they are playing or working, e.g. “Notice how dull those rocks are, the other ones are shiny”. This extends their vocabulary and increases their ability to share new discoveries.
Remember that children need to be active
If kindergarten students have been sitting still too long, they will quickly let you know when it’s time to move.
- Well-planned, interesting learning plans fail if the children need a break.
- Go for walks around the school, jump up and down, act out a story, do anything that gets the blood pumping around. It results in good circulation and more alert students. Scheduling lots of movement breaks throughout the day is an invaluable best teaching practice.
Be Sensitive to Children’s Needs
One thing I learned early in my teaching career is that learning doesn't happen if a child is overtired, hungry, upset, scared or worried. Learning to be flexible and understanding with young children is a skill that will serve you well in your educational career. At times, children need to get away from everyone and be left alone.
A small space, such as under your desk, works well for some students who are too overwhelmed by home or other circumstances to cope with their peers or their teacher.
If a student is hungry, it’s easier to let her eat part of her lunch early or to provide a snack, than to try to force the child to concentrate on a task until the scheduled eating time.
Inexperienced teachers sometimes misinterpret a child’s unwillingness to participate as stubbornness or bad behavior. It’s good to remember…
- That children often do not have the vocabulary to express themselves.
- To use reflective listening to help children understand what is upsetting them.
- That sometimes children work well in groups and this helps them learn to share and develop ideas and at other times they need to be alone with ample time to figure things out.
- To relax and have fun with your students!
Maintain a classroom atmosphere of warmth and acceptance. For some kindergarten children, your classroom will be one of the few places where their opinions and ideas have been heard and valued. | http://www.kindergarten-lessons.com/best-teaching-methods-kindergarten/
4.28125 | Finding an object's center of mass.
An overview of gravity.
How to simulate gravity using centripetal force.
How to define a chord; how to describe the effect of a perpendicular bisector of a chord and the distance from the center of the circle.
How to define the apothem and center of a polygon; how to divide a regular polygon into congruent triangles.
How gravity creates tides.
Overview of how to calculate and when to use mean, median, and mode
How to identify the centroid and the way it divides each of the medians.
How projectile motion works.
How linear motion works.
How parabolic motion works.
How black holes work.
How to calculate the mean, or average, of a list of numbers.
How to calculate the median of a data set.
How to find the mode of a data set.
How to transform the graph of a hyperbola.
Understanding the behavior of an object in free fall.
Applying Newton's Law of Universal Gravitation.
How to derive the equation for a circle using the distance formula. | https://www.brightstorm.com/tag/center-of-gravity/ |
4.125 | Self-esteem is the core belief people have about themselves. Healthy self-esteem helps a person to act responsibly, cooperate well with others, deal with difficulties, and have the confidence to try new things.
The foundation of self-esteem is established in childhood, although it is a lifelong process of development.
Parents are the most significant influences on a child's self-esteem. Parents promote a child's healthy self-esteem by initiating a cycle of belonging, learning, and contributing. A sense of belonging helps a child to participate in learning new things. Learning makes a child feel confident in making contributions. And making contributions helps secure a feeling of belonging.
An unhealthy self-esteem causes problems throughout life. Mental health problems, problems with other people, and lack of confidence are some of the possible consequences of low self-esteem.
| http://www.emedicinehealth.com/script/main/art.asp?articlekey=134486&ref=135027
4 | Pavlof Volcano Introduction
Pavlof is one of the most active volcanoes in North America. In the past 100 years, Pavlof has erupted at least 24 times and
may have erupted on several other occasions. The remote location and weather with limited visibility, combined with the fact that there are few local inhabitants, may have allowed some
eruptions to go unconfirmed. Today, daily satellite monitoring and real-time data from instruments around the volcano bring a
continuous stream of information to scientists.
Although there is very little human activity on the land immediately surrounding Pavlof, the sky above is heavily travelled. Each
day at least 20,000 international airline passengers and dozens of flights loaded with freight fly above the volcano. An eruption at Pavlof that puts large amounts of volcanic ash high into the atmosphere produces air traffic safety concerns and significant financial
losses as flights must be rerouted. This is why the volcano receives so much attention from scientists.
Related: Pavlof Eruption Photos from the Space Station
Pavlof Volcano: Plate Tectonic Setting
Pavlof is located near the western end of the Alaska Peninsula. The convergent boundary between the North American Plate and the
Pacific Plate is located to the south and east of Pavlof as shown in the map below. The North American Plate is moving in a southerly
direction and the Pacific Plate is moving towards the northwest.
|Map: Where is Pavlof Volcano?|
|Map showing the location of Pavlof Volcano near the end of the Alaska Peninsula. The boundary between the North American Plate and the Pacific Plate is shown by the gray toothed line. The Pacific Plate is to the south of the boundary, and the North American plate is to the north of this boundary. The A-B line shows the location of the cross section below.
At this location both plates consist of oceanic lithosphere. At the plate boundary, the Pacific Plate is forced under the North
American Plate to form the Aleutian Trench and a subduction zone. A diagram of this plate boundary situation is shown in the
simplified cross section below.
|Simplified plate tectonics cross section (A to B)|
|Simplified plate tectonics cross-section showing how Pavlof Volcano is located on the Alaska Peninsula. A subduction zone formed where the Pacific Plate descends beneath the North American Plate is directly below the volcano. Magma produced from the melting mantle and Pacific Plate rises to the surface and causes eruptions.
Pavlof Volcano: Eruptive History
The diagram below summarizes the eruptive frequency of Pavlof for which there is a written record.
The small number of eruptions in the early portion of this record reflects the remote location of the volcano, the lack of
local population and the poor weather conditions that limited observation. Eruption frequencies in
the 1700s, 1800s and early 1900s are underrepresented.
Some of the eruptions are marked as "questionable." At times it was impossible to attribute
an eruption to a specific volcano because vents are so numerous and close together in the Emmons Lake Volcanic Center.
Most of Pavlof's eruptions have involved low energy ash releases, minor andesite lava flows or minor lava fountaining. These
sometimes produce lahars when ash and lava melt portions of Pavlof's snow cap. Some of these lahars
have been large enough to reach the Pacific Ocean to the south or the Bering Sea to the north.
Occasionally, Pavlof produces a strong explosive eruption or a number of
smaller explosive events in a single eruptive episode.
The 1983, 1981, 1974/1975, 1936/1948, and 1906/1911 eruptions produced enough ejecta to be rated at level 3 on the Volcanic Explosivity Index. The 1762/1786 eruption has been rated at VEI 4.
|Chart of the eruptive history of Pavlof Volcano by century. The greater
frequency of eruptions over the last two centuries can mainly be attributed to improved observation abilities and greater
interest in the volcano. Data in this chart is from the Alaska Volcano Observatory,
where more specific details for most of these eruptions are available for public view. Some of the eruptions extended in
time across two or more calendar years.
Pavlof: Geology and Hazards
Although eruptions at Pavlof have been numerous, they have fortunately been small to moderate in size. They are often Strombolian
eruptions that produce local falls of tephra. Pavlof also produces ash plumes that can be carried hundreds of miles by the wind.
Pavlof has not been a deadly threat to people on the ground because very few people venture near the volcano. The nearest community is
Cold Bay, about 35 miles to the southwest. Other nearby communities include King Cove, Nelson Lagoon and Sand Point. All of these are
beyond the reach of lahars and pyroclastic flows; however, each of these communities has experienced ash falls from eruptions.
Ash plumes are the most significant hazard associated with eruptions at Pavlof. They are a major hazard to local aircraft
and a threat to international air traffic when they reach significant height. This is why the volcano is monitored with instruments
and why satellite images of the volcano are examined daily.
Pavlof is usually covered by snow and ice. Eruptions can quickly melt significant amounts of snow and ice to produce volcanic mudflows
known as lahars. These lahars are fast-moving slurries. They can fill stream valleys with hot water, sand, gravel, boulders and volcanic
debris. They destroy stream habitat, which can be lost for many years after an eruption. They travel at very high speeds, and anyone in
stream valleys below the volcano when an eruption occurs must quickly move to high ground to escape the deadly flow.
Pavlof eruptions often produce pyroclastic flows. These are hot clouds of rock, gas and ash that sweep down the flanks of the volcano at
speeds of up to 100 miles per hour. They are dense enough to knock down trees and hot enough to incinerate everything in their path.
Lava flows are produced by many Pavlof eruptions. They are generally not a hazard to humans because they move slowly, their flow path is predictable,
and they generally do not travel far from the volcano.
Pavlof Volcano gets a lot of attention because it produces a small eruption every few years, making it one of the most active volcanoes
in North America. It has the ability to cause temporary air traffic disruptions, but it ranks far below a major threat to local populations
and the planet in general.
The eruptive history of the Emmons Lake Volcanic Center includes several large caldera-forming
eruptions. Between three and six major caldera-forming eruptions have occurred there in the past 400,000 years. Estimated dates of these major eruptions are around 294,000, 234,000, 123,000, 100,000, 30-50,000, and 26,000 years ago.
Some of these eruptions have been powerful enough to cover up to 1000 square miles with pyroclastic flows of dacite and rhyolite. In some eruptions they were hot enough to produce welded deposits at distances of up to 20 miles from the vent! Fortunately, these caldera-forming eruptions are extremely rare, and there is no indication that one will occur in the foreseeable future.
Contributor: Hobart King
|Ash Plume from Pavlof's 2007 Eruption
|Pavlof volcano and an eruption plume photographed from a commercial flight on August 30, 2007. The plume is about 17,000 feet tall. Little Pavlof is the smaller peak on Pavlof's right shoulder. Eruptions like this are a severe hazard to local and international air traffic. Photograph by Chris Waythomas, Alaska Volcano Observatory / U.S. Geological Survey.
| Photograph of the three Pavlofs. From left: Pavlof Sister, Pavlof, and Little Pavlof (small peak on the right shoulder of Pavlof) as observed from Trader Mountain in August 2005 by Chris Waythomas. Pavlof Sister and Little Pavlof have not erupted during recorded history but have probably erupted within the past 10,000 years. Alaska Volcano Observatory image.
|Pavlof Volcano - 1996 Eruption
| A photo of Pavlof Volcano taken on November 13, 1996. This image shows Pavlof's steep stratovolcano geometry. This eruption began on September 15, 1996 and ended on January 3, 1997. It produced numerous steam and ash eruptions, strombolian eruptions, lava fountains and lava flows. USGS image by Elgin Cook.|
|Facts About Pavlof Volcano|
|Location: Near the end of the Alaska Peninsula|
|Coordinates: 55° 25′ 0" N 161° 53′ 15" W|
|Elevation: 2,519 meters (8,264 feet)|
|Pavlof Volcano - 2007 Eruption
| Photograph of Pavlof Volcano (erupting), Pavlof Sister (left) and Little Pavlof (small peak on the right shoulder of Pavlof) taken on August 29, 2007 by Guy Tygat. Alaska Volcano Observatory image.
|Lahar runout deposit produced during the 2007 eruption at Pavlof. It is a sandy, matrix-supported deposit with a mix of volcanic ejecta and stream pebbles. Image by Chris Waythomas. USGS image.
|Map showing the geographic extent and locations of pyroclastic flow, surge and blast hazards around Pavlof and neighboring volcanoes. USGS image. Additional maps of lahar, debris-avalanche, ash fallout and other hazards are part of the Preliminary Volcano-Hazard Assessment for the Emmons Lake Volcanic Center report and map set.
|Video of a lahar produced during the 2007 eruption of Pavlof. In the video you can observe the front of the lahar sweeping down the channel. Other larger lahars exceeded the capacity of the channel and produced the sediment-covered landscape around the channel. Filmed by pilot Jeff Linscott of JL Aviation. Alaska Volcano Observatory video.|
|USGS topographic map of Pavlof and surrounding volcanic features.|
|More Information About Pavlof | http://geology.com/volcanoes/pavlof/ |
4.25 | The Anglo-Saxons had strict codes of conduct related to their fellow "soldiers" and battle-mates. Being "heroic" meant living by certain codes of conduct.
Honor was one of the most important codes that Anglo-Saxon warriors lived by. They chose to live and fight with great honor, sacrificing for their fellow "brothers-in-arms." They fought to the death for their brethren, believing that death in battle was the ultimate sacrifice for one's beliefs. The bond that these warriors had with their fellow "soldiers" was often stronger than the ones they had with their families as well.
Being heroic also meant loyalty to one's fellow countrymen and "soldiers." Disloyalty was punishable by death or banishment. eNotes states:
Loyalty is one of the greatest virtues in the world depicted in Beowulf. It is the glue holding Anglo-Saxon Society together, but it brought with it the darker duties of vengeance and feud. ("Beowulf Themes")
Loyalty also included, as the quote references, seeking vengeance for wrongs done against them...murders of loved ones, etc.
Another important code of conduct involved bravery. These warriors didn't back down from dangerous battles and "missions." They were expected to never run from a fight and to fight until the death if need be.
Beowulf seeks personal glory, as well, for his acts of valor and bravery, etc., and he has no problem bragging about his own abilities!
All of these encompass what the Anglo-Saxons would have considered a true "hero."
| http://www.enotes.com/homework-help/what-traits-do-anglo-saxons-consider-heroic-316940
4.03125 | Volumes of Solids with Known Cross-Sections
We already mentioned that slicing a solid with known cross-section is like slicing a loaf of cinnamon raisin bread. We could also think of it like we're slicing carrots for pot roast or potato slices for potatoes au gratin. Hungry for more?
When asked to find the volume of one of these solids, we're given a few things to start. We'll be told what the base of the solid is, which way it's being sliced, and what the slices look like.
We can call this one the stick of butter example. Let R be the region enclosed by the x-axis, the graph y = x², and the line x = 4. Write an integral expression for the volume of the solid whose base is R and whose slices perpendicular to the x-axis are squares.
Let's use our attack plan.
1) First we have to understand what the solid is.
The region R looks like this:
Let's turn R on its side to make it easier to think of as the base of the solid.
We're told that if the solid is sliced perpendicularly to the x-axis the slices are squares. Since the base of the solid stretches from the x-axis up to the graph y = x², the side-length of the slice at position x is y = x². This means if we take slices near x = 0, they'll be tiny. If we take slices near x = 4, they'll be much bigger.
2) Now that we understand what the solid looks like, we need to slice it and find the approximate volume of a slice. We've already sliced it, and we know that each slice is a square with side-length y = x². Each square has a tiny little bit of thickness Δx.
To find the volume of the slice we multiply the area of the square by the thickness of the slice to get
(x²)² Δx.
3) The variable x goes from 0 to 4 within this solid. When we add the volumes of all the slices and take the limit as the number of slices approaches ∞, we find the volume of the solid is V = ∫₀⁴ (x²)² dx = ∫₀⁴ x⁴ dx = 4⁵/5 = 1024/5 = 204.8.
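If you want to sanity-check that value numerically, here's a tiny Riemann-sum script in Python (our addition, not part of the original page):
# Approximate the volume: each slice is a square of side x^2 with thickness dx.
n = 1_000_000
dx = 4 / n
# Midpoint rule: evaluate the side length at the center of each slice.
volume = sum(((i + 0.5) * dx) ** 4 * dx for i in range(n))
print(volume)  # ~204.8, i.e. 1024/5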
We recommend drawing pictures. Lots of them. Don't be stingy. You'll use less paper and time drawing a couple extra images than you would by getting the wrong answer and having to start all over. At a minimum, three of them:
1) the region that forms the base of the solid
2) the region with at least one slice sitting on it
3) a slice all by itself
The best way to get better at these is to practice. Feel free to have your favorite 3-D sweet treat while you go through these exercises. | http://www.shmoop.com/area-volume-arc-length/solid-volume.html |
4.34375 |
Ozone is an unstable compound. Pure ozone decomposes explosively, while ozonised oxygen decomposes slowly at room temperature. Decomposition is instantaneous at about 573 K.
The decomposition is accelerated by the presence of manganese dioxide, platinum black and copper oxide etc.
Ozone acts as a powerful oxidizing agent due to the reaction O3 → O2 + O, in which it gives up nascent oxygen.
The nascent oxygen formed due to its decomposition is responsible for the oxidation of a number of substances. Typical oxidation reactions are given below; two of them are written out as balanced equations after the list:
It oxidises lead sulphide to lead sulphate
It liberates iodine from a solution of potassium iodide
Halogen acids are oxidized to corresponding halogens (e.g. hydrochloric acid is oxidized to chlorine.)
It oxidizes sulphur dioxide to sulphur trioxide
It oxidizes moist iodine to iodic acid
It oxidizes potassium ferrocyanide solution to potassium ferricyanide
It oxidizes acidified stannous chloride to stannic chloride
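Two of these reactions, written out as balanced equations (standard textbook forms, included here for reference):
2KI + H2O + O3 → 2KOH + I2 + O2
SO2 + O3 → SO3 + O2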
Silver metal when warmed with ozone gets blackened due to reduction of the oxide formed in the initial stages of the reaction.
When ozone is passed through mercury, it loses its meniscus and sticks to the glass due to the formation of mercurous oxide. This is called tailing of mercury. The meniscus can be restored by shaking it with water.
Ozone acts as a good bleaching agent for vegetable coloring matter (due to its oxidizing nature)
Ozone reduces peroxides to oxides and in turn gets reduced to oxygen. For example, with H2O2 and BaO2, it gives H2O and BaO respectively.
Ozonides are addition products, which are formed when unsaturated organic compounds containing double bond react with ozone.
These ozonides are decomposed by water or dilute acids giving aldehydes and hydrogen peroxide in most of the cases.
The position of the double bond can be located in the original unsaturated molecule by this reaction. This reaction is termed 'ozonolysis'.
Ozone is used in the purification of drinking water, in bleaching delicate fabrics, oils, and waxes, and in sterilizing air.
Any of these three tests helps in identifying ozone. | http://chemwiki.ucdavis.edu/Inorganic_Chemistry/Descriptive_Chemistry/Elements_Organized_by_Block/2_p-Block_Elements/Group_16%3A_The_Oxygen_Family/Chemistry_of_Oxygen/Ozone/Important_properties_of_ozone |
4.09375 | The star GD61 is a white dwarf. As such, it’s insanely dense—similar in diameter to Earth, but with a mass roughly that of the Sun, so that a teaspoon of it is estimated to weigh about 5.5 tons. All things considered, it’s not a particularly promising stellar locale to find evidence of life.
But a new analysis of the debris surrounding the star suggests that, long ago, GD61 may have provided a much more hospitable environment. As part of a study published today in Science, scientists found that the crushed rock and dust near the star were once part of a small planet or asteroid made up of 26 percent water by volume. The discovery is the first time we’ve found water in a rocky, Earth-like planetary body (as opposed to a gas giant) in another star system.
“Those two ingredients—a rocky surface and water—are key in the hunt for habitable planets,” Boris Gänsicke of the University of Warwick in the UK, one of the study’s authors, said in a press statement. “So it’s very exciting to find them together for the first time outside our solar system.”
Why was water found in such a seemingly inhospitable place? Because once upon a time, GD61 wasn’t so different from our Sun, scientists speculate. But roughly 200 million years ago, when it exhausted its supply of fuel and could no longer sustain fusion reactions, its outer layers were blown out as part of a nebula, and its inner core collapsed inward, forming a white dwarf. (Incidentally, this fate will befall an estimated 97 percent of the stars in the Milky Way, including the Sun.)
When that happened, the tiny planet or asteroid in question—along with all the other bodies orbiting GD61—were violently knocked out of orbit, sucked inward, and ripped apart by the force of the star’s gravity. The clouds of dust, broken rock and water that the scientists recently discovered near the star are the remnants of these planets.
Continue reading about this amazing discovery at Smithsonian.com. | https://www.tumblr.com/search/gd61 |
4 | Santa Fe Trail, in U.S. history, famed wagon trail from Independence, Mo., to Santa Fe, N.M., an important commercial route (1821–80). Opened by William Becknell, a trader, the trail was used by merchant wagon caravans travelling in parallel columns, which, when Indians attacked, as they did frequently between 1864 and 1869, could quickly form a circular line of defense. From the Missouri River the trail followed the divide between the tributaries of the Arkansas and Kansas rivers to the site of present Great Bend, Kan., then proceeded along the Arkansas River. At the western end, several routes trended southwest to Santa Fe, the shortest being the “Cimarron Cutoff ” through the valley of the Cimarron River.
The importance of the eastward silver and fur trade and westward transport of manufactured goods over the trail was a contributing cause of U.S. seizure of New Mexico in the Mexican War. Use of the trail increased under U.S. rule, especially after the introduction of mail delivery service via stagecoach (1849), but ceased with the completion of the Santa Fe railroad in 1880. | http://www.britannica.com/topic/Santa-Fe-Trail |
4.03125 | Area of a rhombus
Three different ways to calculate the area of a rhombus are given below, with a formula for each.
A rhombus is actually just a special type of parallelogram, so many of the parallelogram area calculations can be applied to it as well. Choose a formula based on the values you know to begin with.
1. The "base times height" method
First pick one side to be the base. Any one will do, they are all the same length.
Then determine the altitude - the perpendicular distance from the chosen base to the opposite side.
The area is the product of these two, or, as a formula:
Area = b × a
where
b is the length of the base
a is the altitude (height).
2. The "diagonals" method
There is another simple formula for the area of a rhombus when you know the lengths of the diagonals.
The area is half the product of the diagonals. As a formula:
Area = (d1 × d2) / 2
where
d1 is the length of one diagonal
d2 is the length of the other diagonal
3. Using trigonometry
If you are familiar with trigonometry, there is a handy formula when you know the length of a side and any angle:
Area = s² × sin(a)
where
s is the length of any side
a is any interior angle
sin is the sine function
(see Trigonometry Overview)
It may seem odd at first that you can use any angle since they are not all equal. But any two interior angles of a rhombus are either equal or supplementary, and supplementary angles have the same sine.
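To pull the three formulas together, here is a short Python sketch (our illustration; the original page demonstrates them with an interactive figure instead):
import math

def area_base_height(b, a):
    # Method 1: base times altitude.
    return b * a

def area_diagonals(d1, d2):
    # Method 2: half the product of the diagonals.
    return d1 * d2 / 2

def area_trig(s, angle_deg):
    # Method 3: side squared times the sine of any interior angle.
    return s * s * math.sin(math.radians(angle_deg))

print(area_trig(10, 60))   # 86.60...
print(area_trig(10, 120))  # same area, since supplementary angles share a sine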
Other polygon topics
Types of polygon
Area of various polygon types
Perimeter of various polygon types
Angles associated with polygons
| http://www.mathopenref.com/rhombusarea.html |
4.09375 | Alphabet Letter V Vulture
Preschool Lesson Plan Printable Activities, Worksheets and Crafts
Alphabet > Letter V > V is for Vulture
Animals > Birds | Desert > Endangered > Vulture
Crafts > two printable crafts
Holidays and Events >
*Jan. 5th > National Bird Day
*May 4th > Bird day
Online Jigsaw Puzzles > Alphabet
Here are printable materials and some suggestions to present letter V. The presentation ideally should be part of birds and vulture theme theme activities and crafts.
Science > Animals > Birds > The Vulture
A vulture is a large bird that usually has dark feathers and a bald head and neck. There are many kinds of vultures. These birds are related to hawks and feed mostly on dead animals. Some species of vultures are endangered.
* Vulture information and images at Wikipedia.org
Alphabet Activity: Alphabet Letter V is for Vulture
Present the letter V Vulture six-piece online jigsaw puzzle to practice problem solving and view letter V in upper and lower case. Adjust the number of pieces using the Change Cut button on the left.
Present and display your option of alphabet printable materials listed in the materials column.
* Finger and Pencil Tracing:
Trace letter V's in upper and lower case with your finger as you also sound out the letter. Invite the children to do the same on their coloring page.
Encourage the children to trace the dotted letter with your choice of sharpened crayon, fine tip marker, coloring or regular pencil and demonstrate the direction of the arrows and numbers that help them trace the letter correctly. During the demonstration, you may want to count out loud as you trace so children become aware of how the number order aids them in the writing process.
*Find the letter V's: Have the children find all the letter V's in upper and lower case on the page and encourage them to circle or trace/shade them first. Visit each child to make sure they have identified the letter V's and then discuss the locations with the poster.
*Coloring Activity: Encourage the children to color the image in the coloring page or worksheet. Idea: after coloring, paste a few craft feathers on the coloring page.
Letter V words: Letter V Activity Page and Mini Book This page and matching mini book can be used as part of Letter V program of activities to reinforce letter practice and to identify related V words. Read suggested instructions for using the worksheet and mini-book.
Discuss other letter V words and images: First 'brainstorm' and ask the children about other words that have that beginning sound and write them on a board (dry erase board) as the children come up with example. You can print letter V in a different bright color to make it stand out. If you have illustrated alphabet books you can also use images in them. You can also display other V posters and coloring pages or even make a letter V classroom book using coloring images or color posters. Visit Letter V Printable Materials to make your choice.
Letter V Word Search & Handwriting Practice
The four word search game features a vulture and letter V words with pictures and handwriting practice.
Advanced independent handwriting practice:
1. Print your choice of printable lined-paper and encourage children to draw a vulture behind the page or print a vulture coloring page > #1, #2 or #3.
2. Drawing and writing paper Encourage children to draw, color and decorate a vulture and write letter V.
Craft activity: Select a printable vulture craft from the materials column. External links at dltk-kids.com
| http://first-school.ws/activities/alpha/v/vulture.htm |
4.0625 | Arithmetic Operators in Visual Basic
Arithmetic operators are used to perform many of the familiar arithmetic operations that involve the calculation of numeric values represented by literals, variables, other expressions, function and property calls, and constants. Also classified with arithmetic operators are the bit-shift operators, which act at the level of the individual bits of the operands and shift their bit patterns to the left or right.
Negation also uses the - Operator (Visual Basic), but with only one operand, as the following example demonstrates.
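A minimal example (our sketch, in the style of the Mod sample further down, not the original listing):
Dim x As Integer = 5
Dim y As Integer
y = -x
' The preceding statement sets y to -5.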
Exponentiation uses the ^ Operator (Visual Basic), as the following example demonstrates.
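Again a small sketch of ours rather than the original listing:
Dim z As Double
z = 2 ^ 3
' The preceding statement sets z to 8. The ^ operator always returns a Double value.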
Integer division is carried out using the \ Operator (Visual Basic). Integer division returns the quotient, that is, the integer that represents the number of times the divisor can divide into the dividend without consideration of any remainder. Both the divisor and the dividend must be integral types (SByte, Byte, Short, UShort, Integer, UInteger, Long, and ULong) for this operator. All other types must be converted to an integral type first. The following example demonstrates integer division.
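An illustrative sketch (assumed values, not the original sample):
Dim k As Integer
k = 23 \ 5
' The preceding statement sets k to 4; the remainder of 3 is discarded.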
Modulus arithmetic is performed using the Mod Operator (Visual Basic). This operator returns the remainder after dividing the divisor into the dividend an integral number of times. If both divisor and dividend are integral types, the returned value is integral. If divisor and dividend are floating-point types, the returned value is also floating-point. The following example demonstrates this behavior.
Dim x As Integer = 100
Dim y As Integer = 6
Dim z As Integer
z = x Mod y
' The preceding statement sets z to 4.
Division by zero has different results depending on the data types involved. In integral divisions (SByte, Byte, Short, UShort, Integer, UInteger, Long, ULong), the .NET Framework throws a DivideByZeroException exception. In division operations on the Decimal data type, the .NET Framework also throws a DivideByZeroException exception. (Single, like Double, follows IEEE floating-point rules and produces infinities or NaN instead.)
In floating-point divisions involving the Double data type, no exception is thrown, and the result is the class member representing NaN, PositiveInfinity, or NegativeInfinity, depending on the dividend. The following table summarizes the various results of attempting to divide a Double value by zero.
Dividend data type          Divisor data type    Result
positive Double             Double zero          PositiveInfinity
negative Double             Double zero          NegativeInfinity
Double zero                 Double zero          NaN (not a mathematically defined number)
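A quick way to observe these results in code (our sketch):
Dim x As Double = 1.0
Dim y As Double = 0.0
Console.WriteLine(Double.IsPositiveInfinity(x / y))   ' True
Console.WriteLine(Double.IsNegativeInfinity(-x / y))  ' True
Console.WriteLine(Double.IsNaN(y / y))                ' True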
When you catch a DivideByZeroException exception, you can use its members to help you handle it. For example, the Message property holds the message text for the exception. For more information, see Try...Catch...Finally Statement (Visual Basic).
A bit-shift operation performs an arithmetic shift on a bit pattern. The pattern is contained in the operand on the left, while the operand on the right specifies the number of positions to shift the pattern. You can shift the pattern to the right with the >> Operator (Visual Basic) or to the left with the << Operator (Visual Basic).
The data type of the pattern operand must be SByte, Byte, Short, UShort, Integer, UInteger, Long, or ULong. The data type of the shift amount operand must be Integer or must widen to Integer.
Arithmetic shifts are not circular, which means the bits shifted off one end of the result are not reintroduced at the other end. The bit positions vacated by a shift are set as follows:
0 for an arithmetic left shift
0 for an arithmetic right shift of a positive number
0 for an arithmetic right shift of an unsigned data type (Byte, UShort, UInteger, ULong)
1 for an arithmetic right shift of a negative number (SByte, Short, Integer, or Long)
The following example shifts an Integer value both left and right.
Dim lResult, rResult As Integer
Dim pattern As Integer = 12
' The low-order bits of pattern are 0000 1100.
lResult = pattern << 3
' A left shift of 3 bits produces a value of 96.
rResult = pattern >> 2
' A right shift of 2 bits produces a value of 3.
Arithmetic shifts never generate overflow exceptions.
In addition to being logical operators, Not, Or, And, and Xor also perform bitwise arithmetic when used on numeric values. For more information, see "Bitwise Operations" in Logical and Bitwise Operators in Visual Basic.
Operands should normally be of the same type. For example, if you are doing addition with an Integer variable, you should add it to another Integer variable, and you should assign the result to a variable of type Integer as well.
One way to ensure good type-safe coding practice is to use the Option Strict Statement. If you set Option Strict On, Visual Basic automatically performs type-safe conversions. For example, if you try to add an Integer variable to a Double variable and assign the value to a Double variable, the operation proceeds normally, because an Integer value can be converted to Double without loss of data. Type-unsafe conversions, on the other hand, cause a compiler error with Option Strict On. For example, if you try to add an Integer variable to a Double variable and assign the value to an Integer variable, a compiler error results, because a Double variable cannot be implicitly converted to type Integer.
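A short sketch of the difference (a hypothetical snippet, assuming Option Strict On is set at the top of the file):
Dim i As Integer = 5
Dim d As Double = i      ' Allowed: Integer widens to Double without data loss
' Dim j As Integer = d   ' Compiler error under Option Strict On: narrowing requires an explicit CInt(d)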
If you set Option Strict Off, however, Visual Basic allows implicit narrowing conversions to take place, although they can result in the unexpected loss of data or precision. For this reason, we recommend that you use Option Strict On when writing production code. For more information, see Widening and Narrowing Conversions (Visual Basic). | https://msdn.microsoft.com/library/b6ex274z.aspx |
4.03125 | 03.09.12 6:09 PM ET
Japanese Debris Plume From Tsunami Migrating Across Pacific Ocean
One year after the nuclear meltdown at Japan’s Fukushima Daiichi nuclear power plant, scientists are beginning to understand exactly what happened. Concerns about nuclear fallout have dissipated, and safe parts of the area around the plant have slowly begun to be repopulated by people and wildlife.
Yet as one problem subsides, another arises. The pile of garbage carried back to sea after the tsunami, officially known as the storm’s debris field, has been kept in motion by ocean currents over the past 12 months. Now it’s a line of trash thousands of miles long, migrating quickly toward Guam, Hawaii, and the West Coast of the U.S. Junk is one concern, but health officials are wondering how much water that came in contact with the nuclear plant’s radioactive core has been washed out to sea, potentially making its way through the ocean’s food chain and remaining toxic for hundreds of years.
Debris is what you might expect from the 3 million tons of Pacific Ocean water that flooded Japan's east coast, then slowly receded: boats, car parts, lumber, scrap metal. The heavier objects are believed to have sunk not far from there—although a Russian fishing crew spotted a refrigerator last fall. Much of the remaining material won’t easily break down in the water column, nor from the complex friction created by ocean currents. It also won’t stay together. Modeling teams at the International Pacific Research Center in Hawaii have mapped the spread of the plume, noting that it has traveled 2,000 miles since the tsunami struck land in March 2011. An animation shows how it might soon engulf Midway Island, a U.S. military hub.
Follow the path as the plume spreads and the ultimate destination becomes clear. As ocean currents head eastward across the Pacific, the plume is expected eventually to hit the West Coast of the United States. The National Oceanic and Atmospheric Administration has led surveying missions over the western Pacific to chart the path of the debris, using advanced computer models to track ocean currents, wind variabilities, and other geographic metrics. “We’re preparing for the best- and worst-case scenarios—and everything in between,” Nancy Wallace, director of NOAA’s Marine Debris Program, said in a statement.
The models show that the sludge may touch the north shore of the Hawaiian archipelago as soon as next month, then continue its meander to the northwest U.S. in early 2013. Circular momentum caused by both ocean currents and the wind-based Coriolis force fueled by the Earth’s rotation will then lead the debris back toward the central Pacific in 2014.
There are certainly limits to computer models, and scientists have difficulty forecasting weather more than a few weeks in advance, let alone ocean movements years from now. While stray objects may wash up on beaches, there’s reason to believe the broader collection of floating junk will end up in an area of ocean about 1,000 miles north of Hawaii known as the Great Pacific Garbage Patch. Since it was discovered in 1997, the area, hundreds of miles across and at the very center of rotating currents, has long been considered a dead zone for marine wildlife, a virtual landfill of millions of tons the world’s trash, washed up over decades and kept in slow, perpetual rotation.
When Newsweek reported on the GPGP in 2009, ocean conservationists with the Ocean Voyages Institute in Sausalito, Calif., were trying to imagine ways to clean it up, ideally through recycling the material (much of it is floating plastic) into usable oil. Yet with the tsunami debris joining the gyre, the problem is compounding more quickly than anyone can measure. Charles Moore, a ship captain who discovered the gyre in 1997, gave up long ago, saying that the effort to clean it up would bankrupt any nation.
Once ocean advocates accept the degrading impact of aging garbage infesting the waters, questions remain about a longer-lasting threat: radiation. Much of the tsunami water refilled the Pacific days before the meltdown of nuclear reactors at the Fukushima plant. But in the days after the meltdown, thousands of gallons of radioactive water flowed into the ocean as well, according to several on-site incident reports. Plant officials at Fukushima even dumped highly radioactive water into the Pacific to make room for even more radioactive water needed to cool storage containers. According to the Union of Concerned Scientists, in April 2011, levels of radioactive iodine-131 and cesium-137 in seawater off the Japan coast were measured at 5 million and 1 million times what most governments consider an acceptable level of exposure. Different compounds have different radioactive half-lives, but the most potent will stay toxic for several centuries.
Nuclear scientists have been deluged with government and industry reports marking the one-year anniversary of the Fukushima meltdown. But among the most respected is one from the American Nuclear Society, the leading group of nuclear professionals. ANS researchers found that all off-site health consequences of the Fukushima Daiichi accident may ultimately be negligible. “From what we know now, there will be no major measurable health impacts,” says Dale Klein, former chairman of the Nuclear Regulatory Commission, who helped write the ANS report. NOAA and nuclear scientists have also dismissed concerns of health impacts on contaminated water that came in contact with the reactor.
Wildlife biologists and environmentalists, however, aren’t as quick to dismiss the impending risks. “It’s one thing to have radioactivity to humans, it’s another thing to have little teeny amounts that bioaccumulate in the food chain,” says Miyoko Sakashita, oceans director with the Center for Biological Diversity. (That process occurs when small amounts of radiation make their way into bigger and bigger organisms, eventually ingested by humans.) The environmental group Greenpeace has questioned government surveying of radiation, at times conducting toxicity tests of its own. “It’s a concern that you’ve done some serious damage to the marine field around Japan,” says Jim Riccio, a nuclear policy analyst with the group.
Japan, largely powerless to stop the spread of its debris, has with limited success monitored the threat of radiation it may export. Demand for seafood from Japan dropped by 47 percent in South Korea, Japan’s closest trading partner. Inspectors with the U.S. Food and Drug Administration, meanwhile, have kept watch on seafood caught in the west Pacific that is sold within America's borders. As of December, the regulatory agency said it had detected no radionuclides in any fish imported from Japan. | http://www.thedailybeast.com/articles/2012/03/09/japanese-debris-plume-from-tsunami-migrating-across-pacific-ocean.html |
Romanization or Latinization is the process by which words and languages that normally use alphabets other than the Latin alphabet are converted into Latin letters so that people who do not know the original alphabet can still read the sounds of the language. It is one way to show pronunciation of words from a non-Latin writing system.
Methods of Romanization
Transcription
Transcription aims to make the result sound the same as the original when read aloud, whether or not each letter in one text matches a corresponding letter in the other.
Transliteration
Transliteration aims to match the letters one to one between the two scripts, whether or not the result sounds the same.
Other websites
- UNGEGN Working Group on Romanization Systems
- U.S. Library of Congress Romanization Tables in PDF format
- Java romanization app
- One of the few books with lists of romanizations is ALA-LC Romanization Tables, Randall Barry (ed.), U.S. Library of Congress, 1997, ISBN 0-8444-0940-5.
- Microsoft Transliteration Utility – A free tool for making and using transliteration systems from any alphabet to any other alphabet.
| https://simple.wikipedia.org/wiki/Romanization |
4.0625 | Earth's atmosphere is a layered mixture of gases, mainly nitrogen (78%) and oxygen (21%). Argon, water vapour, carbon dioxide and methane are among the other gases present in small amounts. The atmosphere helps to protect our planet from asteroid impacts and solar radiation.
The innermost layer, the troposphere, contains most of the planet's weather and extends out to 10–15km above the surface.
The next layer out, the stratosphere, is drier and less dense and extends out to about 50km. The Sun's UV light breaks down oxygen in the stratosphere to form the Earth's protective ozone layer.
The mesosphere, thermosphere, and ionosphere make up the remaining outer layers that extend out to about 100km.
Tiny organisms in the oceans produce about half the planet's oxygen.
Dr Iain Stewart explains how phytoplankton produce about half of the Earth's oxygen.
Stromatolites pump oxygen into the early atmosphere.
Dr Iain Stewart explains how stromatolites, one of the earliest forms of life, first released oxygen over three billion years ago when they turned sunlight into energy. Oxygen was initially soaked up by iron in the seas but eventually entered the atmosphere.
Wind transports large amounts of nutrient-rich dust around the planet.
Dr Iain Stewart explains how wind transports large amounts of nutrient-rich dust around the globe. This dust fertilizes the oceans and plants on land.
Iain Stewart flies through some of the atmosphere's layers.
Dr Iain Stewart takes a ride through some of the atmosphere's layers in an English Electric Lightning jet. He flies through the troposphere and stratosphere to an altitude of 15km.
Iain Stewart explains how the troposphere behaves like a fluid.
Dr Iain Stewart explains how the troposphere, the innermost layer of the Earth's atmosphere, behaves like a fluid.
The atmosphere of Earth is the layer of gases surrounding the planet Earth that is retained by Earth's gravity. The atmosphere protects life on Earth by absorbing ultraviolet solar radiation, warming the surface through heat retention (greenhouse effect), and reducing temperature extremes between day and night (the diurnal temperature variation).
The common name air (English pronunciation: /ɛər/) is given to the atmospheric gases used in breathing and photosynthesis. By volume, dry air contains 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.039% carbon dioxide, and small amounts of other gases. Air also contains a variable amount of water vapor, on average around 1% at sea level, and 0.4% over the entire atmosphere. Air content and atmospheric pressure vary at different layers, and air suitable for the survival of terrestrial plants and terrestrial animals is found only in Earth's troposphere and artificial atmospheres.
The atmosphere has a mass of about 5.15×10¹⁸ kg, three quarters of which is within about 11 km (6.8 mi; 36,000 ft) of the surface. The atmosphere becomes thinner and thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. The Kármán line, at 100 km (62 mi), or 1.57% of Earth's radius, is often used as the border between the atmosphere and outer space. Atmospheric effects become noticeable during atmospheric reentry of spacecraft at an altitude of around 120 km (75 mi). Several layers can be distinguished in the atmosphere, based on characteristics such as temperature and composition.
The study of Earth's atmosphere and its processes is called atmospheric science (aerology). Early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann. | http://www.bbc.co.uk/science/earth/atmosphere_and_climate/atmosphere |
Use the Doppler Effect expression to calculate the speed of the observer.
The Doppler Effect is the change in frequency detected by an observer because the sound source and the observer have different velocities with respect to the medium of sound propagation.
The Doppler Effect expression which relates observer frequency and source frequency is
fo = fs (v ± vo) / (v ∓ vs)
Here, vo is the observer's speed, vs is the source's speed, v is the speed of sound, fs is the frequency of the source, and fo is the observed frequency.
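As a quick numeric illustration in Python (made-up values, not the textbook problem; the signs follow the convention described below):
v = 343.0    # speed of sound in air, m/s
f_s = 440.0  # source frequency, Hz
v_o = 20.0   # observer moving toward a stationary source, m/s
v_s = 0.0    # source speed, m/s

# Plus in the numerator: observer toward source. Minus in the denominator: source toward observer.
f_o = f_s * (v + v_o) / (v - v_s)
print(round(f_o, 1))  # ~465.7 Hz, higher than the source frequency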
In the numerator the plus sign is used when the observer moves toward the source, and minus sign is used when observer moves away from the source. In the denominator, the minus sign is used when the source moves toward the observer, and plus sign is used when the source moves away from the observer. | http://www.chegg.com/homework-help/physics-6th-edition-chapter-16-problem-71p-solution-9780471151838 |
4.15625 | Bibliography Teacher Resources
Find Bibliography educational ideas and activities
Showing 101 - 120 of 3,656 resources
How to Draw Caricatures
Caricature drawing is fun, and it can help learners explore the principles of design and content-specific vocabulary. They view a video and books that use character drawings, discuss vocabulary such as exaggeration, proportion, and symmetry,...
6th - 12th Visual & Performing Arts
The Strange Case of Dr. Jekyll and Mr. Hyde: Getting Started with Literature Circles
Students complete novel analysis activities for The Strange Case of Dr. Jekyll and Mr. Hyde. In this novel analysis lesson, students work in literature circle groups to complete close reading activities for the novel. Students keep...
8th English Language Arts
Reading Between the Lines
Learning to read between the lines, to recognize the on-the-surface meaning as well as the implied or inferred meaning of text, is an important skill for all readers. The materials and activities in this 73-page packet are designed to...
7th - 9th English Language Arts CCSS: Adaptable
African-Americans and the Military
Students study the key figures in African-American military history. They discover how African-American military history reflect both discrimination and the often heroic struggle to overcome discrimination. They examine the key periods...
9th - 12th Social Studies & History
Iroquois Indians: A League of Their Own
Sixth graders explore Native American politics. In this early American culture lesson, 6th graders research the political organization of the Iroquois tribe. Students will then create an ABC fact book, where each letter represents a...
6th Social Studies & History
A Picture Book of George Washington
Students discuss the character traits of George Washington. In this George Washington lesson, students read A Picture Book of George Washington, discuss the book, and complete worksheet activities about Washington's self-discipline and...
1st - 3rd English Language Arts
With Detective Fiction in the Urban Classroom
This abstract for an instructional unit using three-minute mysteries, stories by Sir Arthur Conan Doyle, and Edgar Allan Poe includes a short history of detective fiction, sample plans, and suggestions for exercises and activities...
6th - 8th English Language Arts CCSS: Adaptable
“You Will Always Be Mango Street”: Sense of Place and The House on Mango Street
Vignettes from Sandra Cisneros's The House on Mango Street provide readers with an opportunity to observe how writers use dialogue and details to create a sense of time and place.
7th - 9th English Language Arts CCSS: Designed
Using Picture Books to Teach the Holocaust
Students compare a photo of a child's room during the Holocaust to their room. In this WWII lesson, students read picture books and evaluate the roles of characters in the book. Students create either a poster about the roles, a movie...
5th - 12th English Language Arts
Critically Examining, Analyzing and Evaluating Picture Books on Aboriginal Canada
Students combat pervasive stereotypes. In this Critical Analysis lesson, students examine and evaluate the stereotypes of Aboriginal groups, as depicted in a picture book. Students will use primary and secondary sources to compose...
6th - 10th Social Studies & History
Australian Geography Unit
At the heart of this resource is a beautifully detailed PowerPoint presentation (provided in PDF form) on the overall physical geography of Australia, basic facts about the country, Aboriginal history, and Australia culture and lifestyle.
7th - 9th Social Studies & History CCSS: Adaptable
The Crucible Project Sheet
Provide your pupils with 12 project options for The Crucible by Arthur Miller. The first six are creative writing assignments ranging from a letter to the editor to a memoir. Each option comes with a brief description and there is a...
8th - 11th English Language Arts CCSS: Adaptable | http://www.lessonplanet.com/lesson-plans/bibliography/6 |
4.15625 | The recipe for making diamonds is no secret: Take carbon and squeeze it under the extremely high temperatures and pressures found deep inside the Earth.
The mystery lies in how the prized gemstones then get delivered from the depths to parts of Earth's crust that are accessible to miners.
According to a new study, diamonds can be carried up through the lithosphere—the crust and uppermost layer of the mantle—by dense magmas rich in carbonate.
"These melts are really quite special, because they can hold a huge amount of dissolved carbon dioxide, up to 40 to 45 percent by weight," said study leader James "Kelly" Russell, a petrologist from the University of British Columbia in Vancouver.
Previous models had suggested that gases in the magma would increase its buoyancy, helping to push the diamond-laden melt closer to the surface without destroying the precious gems.
The new lab experiments now show how molten carbonate reacts with other chemicals in Earth's lithosphere to release the gas, offering a likely mechanism for speeding up the dense magma.
"Let There Be a Gas Phase"
Natural diamond production begins deep beneath the planet's oldest continents, where Earth's lithosphere can extend to depths of 75 miles (120 kilometers), Russell said.
There, a type of material called kimberlite magma forces its way up from deeper in Earth's mantle, cracking the solid rock.
As it rises, the magma collects fragments of rocks, like floodwaters picking up silt and gravel. Some of these fragments contain diamonds.
(Related: "World's Oldest Diamonds Discovered in Australia.")
But the diamond-containing rocks are heavy, and the magma picks up enough of them that its progress should be substantially slowed, Russell said.
Diamonds, however, have to rise quickly, or they will be destroyed as they pass through zones of intermediate pressure, where the gems can be rapidly consumed by high-temperature oxidation.
The best estimates are that, in order for the diamonds to make it, the magma must travel all the way to the surface in about 10 to 45 hours—moving at about 3 to 13 feet (1 to 4 meters) a second.
The only way for magma to rise so quickly, Russell and others have long believed, is if the melt is supercharged with gas—but nobody knew where such gas might come from.
"Prior models have been [rather] deus ex machina—let there be a gas phase," he said.
Diamonds Caught in Volcano Plumbing
In the new paper, Russell and colleagues found that as carbonate-rich magma passes through overlying rocks on its way toward the surface, it quickly dissolves those rocks' silica-rich minerals.
In high-temperature and high-pressure lab experiments, this process can start happening within tens of minutes.
The resulting mixture of molten silica and carbonate can't carry as much dissolved carbon dioxide as the original magma.
Large quantities of gas therefore bubble out, causing the magma to rise even quicker, until it reaches the surface in an explosive eruption.
More importantly for miners, long after the resulting volcano has been eroded into invisibility on the surface, its interior plumbing remains, leaving behind kimberlite "pipes" that may be rich in diamonds.
A First Step in Better Diamond Hunting?
Whether the findings will help prospectors find new diamond deposits is unclear, Russell said.
"These people are pretty smart," he said of diamond miners, noting that years of experience have taught them many rules of thumb regarding the most likely places to look.
Still, he noted, the new study might point the way to future research into mineralogical signals that could help differentiate fast-rising kimberlite deposits, which might contain diamonds, from slower-rising ones that are unlikely to bear any gems.
The study might also help increase prospectors' confidence by explaining why their current strategies work, Russell added.
The new diamond-transport study appears this week in the journal Nature. | http://news.nationalgeographic.com/news/2012/01/120119-diamonds-gems-earth-magma-carbonate-rocks-science/ |
4.1875 | Power Teacher Resources
Find Power educational ideas and activities
Showing 1 - 20 of 17,807 resources
How Does Work...Work?
What makes a clock tick or a bulb light up? The concept of work is explained against a backdrop of clever animation. Physics fans learn that the amount of work equals the product of the force and distance, and that the rate equals the amount...
5 mins 7th - 12th Science CCSS: Adaptable
While this instructional activity includes several nice worksheets to identify and discuss the various limits on government (i.e. a constitution, the rule of law, separation of powers, consent of the governed, etc.), its main value lies...
6th - 12th Social Studies & History CCSS: Designed
Constitutional Principles: Separation of Powers
Why is separation of powers within a government important for protecting freedom? How does the United States Constitution organize the nation's governing bodies in order to ensure powers are limited and balanced? This video illustrates...
6 mins 9th - 12th Social Studies & History CCSS: Adaptable
Running the Stairs: Measuring Work, Energy, and Power
When the class runs for the hills in this activity, they return with a deeper understanding of the math behind the physics concept of work. Beginning with a physical experiment and ending with a thoughtful analysis of results, learners...
6th - 8th Math CCSS: Designed
Simple Machines Make Things Go
Trying to plan a physical science unit on forces and motion? Make your work a little easier with this comprehensive collection of simple machines resources. Offering a wide variety worksheets, activities, and projects, these materials...
2nd - 5th Science CCSS: Adaptable
Power Systems & Efficiency
Are you looking for a reading resource about the efficiency of power systems? Here is one that introduces the output/input ratio, measurement of energy by joules or calories, and efficiency ratings. For STEM classes that are learning...
6th - 9th Technology & Engineering CCSS: Adaptable
Itaipu Dam and Power Plant (Brazil and Paraguay)
Learners study South America's Itaipu Dam and Power Plant in order to gain an understanding that hydroelectric power is a major means of generating electricity throughout the world. They also look into the environmental impacts that...
6th - 8th Science
Lesson 3: Branches of Government
Young historians climb through the three branches of the US government in the third instructional activity of this five-part series. While reading the first three Articles of the Constitution in small groups, children write facts on...
3rd - 6th Social Studies & History CCSS: Adaptable
The Emperor’s New Clothes
The well-known fairy tale "The Emperor's New Clothes" is at the center of this exercise that asks learners to consider the nature of power and corruption. Class members read the tale and then respond to three questions related to the text.
5th - 7th English Language Arts CCSS: Adaptable
Submarines and Aircraft Carriers: The Science of Nuclear Power
As physics masters view this presentation, they learn how nuclear power is used in submarines. They use Google Maps to plot a course through the ocean and calculate the time required for surfacing and traveling. They learn about fission,...
9th - 12th Science CCSS: Designed | http://www.lessonplanet.com/lesson-plans/power |
4.0625 | CHURCH ARCHITECTURE. In transplanting their native religions, immigrants brought to Texas particular requirements for houses of worship, as well as building traditions. Whether on a large or small scale, their chapels and churches were designed functionally to accommodate their practices in worship and aesthetically to satisfy certain values, within the economic means of the builders.
Spanish colonists erected chapels for their missions, presidios, and secular parishes. Regardless of size or location, a chapel invariably had a central hall either oblong or cruciform in shape, with a focus upon the altar. The naves were often flanked by sacristies or other rooms serving religious uses. The first chapels for the missions and presidial establishments were temporary palisaded shelters. However, at the missions that prospered, these were replaced by durable and handsome edifices reflecting the stylistic traditions of Spain and Mexico. While none of the ephemeral palisaded works remains, several durably built structures survive, including the chapel at San José y San Miguel de Aguayo Mission, a work in Churrigueresque style erected in 1768–82, and San Antonio de Valero Mission (the Alamo), a work in Baroque (Salamonica) style begun in 1744, both in San Antonio. The presidial chapels, mandated by official regulations, ordinarily were prominently located within a complex of shelters or an enclosure and were commonly of temporary wood or adobe construction. Some, however, were more permanent; Nuestra Señora de Loreto Presidio near Goliad is a durable masonry work that has been restored. Secular parish churches were erected in San Antonio (Nuestra Señora de la Candelaria y Guadalupe Church) and Laredo (San Agustín Church) in 1728–49 and 1761–67, respectively. The former has been restored and is somewhat plain in appearance, but the latter is gone. (The present San Agustín Church was begun in 1871.)
Though the chapels surviving from Spanish Texas served the needs of residents of Mexican Texas, Anglo-American colonists brought Protestant religion (predominantly Baptist, Methodist, and Presbyterian) to Texas, along with customs for the construction of houses of worship. After meeting under trees, on porches, or in homes, they erected churches to serve basic needs. A church was often a simple box of logs, frame, or stone, with three or four openings per side and a gabled roof, above which rose a simple cross or belfry that identified the function of the building. Interiors were plain, often with only pews, benches, or chairs, a stove, a small piano or organ, and a pulpit. In time and with prosperity, numerous congregations and parishes began constructing churches with stylistic distinction. In imitation of early nineteenth-century fashions in the East, many edifices were in Greek Revival style. Although still on a simple rectangular plan similar to their predecessors, they were embellished with simple classical entablatures and porticoes. Among such churches built in antebellum Texas is the First Methodist Episcopal Church, South, in Marshall (1860–61), a brick-walled building that has been much remodeled over the years. The Gothic Revival style, characterized by pointed arches, projecting buttresses, and steeply pitched roofs, also appeared in many churches both before and after the Civil War. Included among these were a number of Carpenter Gothic works, with board-and-batten walls, many of which are now gone; these were often built from plan prototypes developed in the eastern United States. However, numerous masonry-walled churches were also built, including St. Mary's Cathedral (1847–48) and Trinity Episcopal Church (1855–57), both in Galveston.
After the Civil War, African Americans, who previously had worshipped in makeshift shelters, also erected buildings serving their religious needs. Located in segregated neighborhoods and central to their societies, numerous churches were, at first, executed in plain box-like forms with frame constructions. Baptist and Methodist churches were common, although Cumberland Presbyterian congregations also constructed buildings, sometimes with assistance from the national church. Eventually, numerous black houses of worship were built with substantial masonry walls, but designs remained straightforward, with simple stylistic details.
During the prosperous years of the late nineteenth century, large new Victorian Gothic churches, often designed by prominent architects, appeared, although small buildings continued to be constructed by small or rural congregations and parishes. Ornate buildings with polychromatic stone or brick walls, high towers, stained glass windows, and large naves or auditoriums were common. Catholic churches generally had traditional long, narrow naves; so did Lutheran and Episcopalian churches, though other Protestant churches commonly had wide auditoriums designed for good acoustics and sight lines. Christ Episcopal Church, Houston (1893), is a fine example of the traditional plan with beautiful interior woodwork, and the First Baptist Church of Dallas (1890) is a noteworthy example of the new Protestant form. During the nineteenth and early twentieth centuries various ethnic groups also introduced their customs into church building. Particularly noteworthy are the edifices with painted interiors. Walls finished with wood were painted with patterns and forms representing architectural ornamentation and religious symbols meaningful to worshipers. Among the churches with painted interiors is the Praha Catholic Church (1891) in rural Fayette County.
Around 1900 the Richardsonian Romanesque style characterized many churches, particularly those of the Baptist, Methodist, and Presbyterian communions. Round-arched openings, polychromatic stonework, and lofty towers were typical features. Among the fine examples is the First Baptist Church, Beaumont (ca. 1900), now the home of the Tyrrell Historical Library. During the early decades of the twentieth century, although Gothic and Romanesque churches continued to appear, numerous Classical edifices were erected. Conforming to national trends in design, buildings were commonly crowned with domes and usually were entered through monumental porticoes supported by Classical columns. After World War II modern concepts of space and form appeared. Rejecting traditional historical styles, architects used new forms, spaces, and decorative modes for numerous new churches. Nonetheless, traditional styles continued in popularity.
Rex E. Gerald, Spanish Presidios of the Late Eighteenth Century in Northern New Spain (Santa Fe: Museum of New Mexico, 1968). Marion A. Habig, The Alamo Chain of Missions (Chicago: Franciscan Herald Press, 1968; rev. ed. 1976). Historic American Buildings Survey, Texas Catalog, comp. Paul Goeldner (San Antonio: Trinity University Press, 1974?). Terry G. Jordan, "The Traditional Southern Rural Chapel in Texas, " Ecumene 7 (March 1976). Max L. Moorhead, The Presidio: Bastion of the Spanish Borderlands (Norman: University of Oklahoma Press, 1975). Willard B. Robinson, "Houses of Worship in Nineteenth-Century Texas," Southwestern Historical Quarterly 85 (January 1982).
| https://tshaonline.org/handbook/online/articles/cgc02 |
4.03125 | Climate change feedback
Climate change feedback is important in the understanding of global warming because feedback processes may amplify or diminish the effect of each climate forcing, and so play an important part in determining the climate sensitivity and future climate state. Feedback in general is the process in which changing one quantity changes a second quantity, and the change in the second quantity in turn changes the first. Positive feedback amplifies the change in the first quantity while negative feedback reduces it.
The term "forcing" means a change which may "push" the climate system in the direction of warming or cooling. An example of a climate forcing is increased atmospheric concentrations of greenhouse gases. By definition, forcings are external to the climate system while feedbacks are internal; in essence, feedbacks represent the internal processes of the system. Some feedbacks may act in relative isolation to the rest of the climate system; others may be tightly coupled; hence it may be difficult to tell just how much a particular process contributes. Forcings, feedbacks and the dynamics of the climate system determine how much and how fast the climate changes. The main positive feedback in global warming is the tendency of warming to increase the amount of water vapor in the atmosphere, which in turn leads to further warming. The main negative feedback comes from the Stefan–Boltzmann law: the amount of heat radiated from the Earth into space changes with the fourth power of the temperature of Earth's surface and atmosphere.
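To see why this damping is strong (an illustrative note added here, not part of the original article): the Stefan–Boltzmann law gives emitted flux j = σT⁴, so for small changes Δj/j ≈ 4 ΔT/T; a 1% rise in absolute temperature boosts outgoing radiation by roughly 4%, pushing the system back toward balance.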
Some observed and potential effects of global warming are positive feedbacks, which contribute directly to further global warming. The Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report states that "Anthropogenic warming could lead to some effects that are abrupt or irreversible, depending upon the rate and magnitude of the climate change."
Carbon cycle feedbacks
There have been predictions, and some evidence, that global warming might cause loss of carbon from terrestrial ecosystems, leading to an increase of atmospheric CO2 levels. Several climate models indicate that global warming through the 21st century could be accelerated by the response of the terrestrial carbon cycle to such warming. All 11 models in the C4MIP study found that a larger fraction of anthropogenic CO2 will stay airborne if climate change is accounted for. By the end of the twenty-first century, this additional CO2 varied between 20 and 200 ppm for the two extreme models, the majority of the models lying between 50 and 100 ppm. The higher CO2 levels led to an additional climate warming ranging between 0.1° and 1.5 °C. However, there was still a large uncertainty on the magnitude of these sensitivities. Eight models attributed most of the changes to the land, while three attributed it to the ocean. The strongest feedbacks in these cases are due to increased respiration of carbon from soils throughout the high latitude boreal forests of the Northern Hemisphere. One model in particular (HadCM3) indicates a secondary carbon cycle feedback due to the loss of much of the Amazon Rainforest in response to significantly reduced precipitation over tropical South America. While models disagree on the strength of any terrestrial carbon cycle feedback, they each suggest any such feedback would accelerate global warming.
Observations show that soils in the U.K have been losing carbon at the rate of four million tonnes a year for the past 25 years according to a paper in Nature by Bellamy et al. in September 2005, who note that these results are unlikely to be explained by land use changes. Results such as this rely on a dense sampling network and thus are not available on a global scale. Extrapolating to all of the United Kingdom, they estimate annual losses of 13 million tons per year. This is as much as the annual reductions in carbon dioxide emissions achieved by the UK under the Kyoto Treaty (12.7 million tons of carbon per year).
It has also been suggested (by Chris Freeman) that the release of dissolved organic carbon (DOC) from peat bogs into water courses (from which it would in turn enter the atmosphere) constitutes a positive feedback for global warming. The carbon currently stored in peatlands (390–455 gigatonnes, one-third of the total land-based carbon store) is over half the amount of carbon already in the atmosphere. DOC levels in water courses are observably rising; Freeman's hypothesis is that, not elevated temperatures, but elevated levels of atmospheric CO2 are responsible, through stimulation of primary productivity.
Tree deaths are believed to be increasing as a result of climate change, which is a positive feedback effect. This contradicts the previously widely held view that increased natural vegetation would lead to a negative-feedback effect.
Arctic methane release
Warming is also the triggering variable for the release of carbon (potentially as methane) in the arctic. Methane released from thawing permafrost such as the frozen peat bogs in Siberia, and from methane clathrate on the sea floor, creates a positive feedback.
Methane release from melting permafrost peat bogs
Western Siberia is the world's largest peat bog, a one-million-square-kilometer region of permafrost peat bog formed 11,000 years ago at the end of the last ice age. The melting of its permafrost is likely to lead to the release, over decades, of large quantities of methane: as much as 70,000 million tonnes of methane, an extremely effective greenhouse gas, might be released over the next few decades, creating an additional source of greenhouse gas emissions. Similar melting has been observed in eastern Siberia. Lawrence et al. (2008) suggest that a rapid melting of Arctic sea ice may start a feedback loop that rapidly melts Arctic permafrost, triggering further warming.
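To put that methane figure on a more familiar scale, a one-line conversion to CO2 equivalents helps; the 100-year global warming potential used here is an assumption, since published values for CH4 vary between assessment reports:

    METHANE_MT = 70_000   # the quantity quoted above, in million tonnes
    GWP_100YR = 25        # assumed 100-year global warming potential of CH4

    print(f"~{METHANE_MT * GWP_100YR / 1000:,.0f} Gt CO2-equivalent")  # ~1,750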
Methane release from hydrates
Methane clathrate, also called methane hydrate, is a form of water ice that contains a large amount of methane within its crystal structure. Extremely large deposits of methane clathrate have been found under sediments on the sea and ocean floors of Earth. The sudden release of large amounts of natural gas from methane clathrate deposits, in a runaway global warming event, has been hypothesized as a cause of past and possibly future climate changes. The release of this trapped methane is a potential major outcome of a rise in temperature; it is thought that this might increase the global temperature by an additional 5 °C in itself, as methane is a much more powerful greenhouse gas than carbon dioxide. The theory also predicts this will greatly affect the available oxygen content of the atmosphere. It has been proposed to explain the most severe mass extinction event on Earth, the Permian–Triassic extinction event, and also the Paleocene–Eocene Thermal Maximum climate change event. In 2008, a research expedition for the American Geophysical Union detected levels of methane up to 100 times above normal in the Siberian Arctic, likely released from methane clathrates through holes in a frozen 'lid' of seabed permafrost around the outfall of the Lena River and the area between the Laptev Sea and East Siberian Sea.
Abrupt increases in atmospheric methane
Literature assessments by the Intergovernmental Panel on Climate Change (IPCC) and the US Climate Change Science Program (CCSP) have considered the possibility of future projected climate change leading to a rapid increase in atmospheric methane. The IPCC Third Assessment Report, published in 2001, looked at possible rapid increases in methane due either to reductions in the atmospheric chemical sink or to the release of buried methane reservoirs; in both cases, it judged such a release "exceptionally unlikely" (less than a 1% chance, based on expert judgement). The CCSP assessment, published in 2008, concluded that an abrupt release of methane into the atmosphere appeared "very unlikely" (less than 10% probability, based on expert judgement). The CCSP assessment noted, however, that climate change would "very likely" (greater than 90% probability) accelerate the pace of persistent emissions from both hydrate sources and wetlands.
Peat decomposition
Peat, occurring naturally in peat bogs, is a store of carbon significant on a global scale. When peat dries it decomposes, and may additionally burn. Water table adjustment due to global warming may cause significant releases of carbon from peat bogs. This carbon may be released as methane, which exacerbates the feedback effect because of its high global warming potential.
Rainforest drying
Rainforests, most notably tropical rainforests, are particularly vulnerable to global warming. A number of effects may occur, but two are particularly concerning. First, drying vegetation may cause the total collapse of the rainforest ecosystem; for example, the Amazon rainforest would tend to be replaced by caatinga ecosystems. Second, even tropical rainforest ecosystems that do not collapse entirely may lose a significant proportion of their stored carbon as a result of drying, due to changes in vegetation.
Forest fires
The IPCC Fourth Assessment Report predicts that many mid-latitude regions, such as Mediterranean Europe, will experience decreased rainfall and an increased risk of drought, which in turn would allow forest fires to occur on a larger scale and more regularly. This releases more stored carbon into the atmosphere than the carbon cycle can naturally re-absorb, as well as reducing the overall forest area on the planet, creating a positive feedback loop. This is partly offset by more rapid growth of replacement forests and a northward migration of forests as northern latitudes become more suitable climates for sustaining them. There is a question of whether the burning of renewable fuels such as forests should be counted as contributing to global warming. Cook & Vizy also found that forest fires were likely in the Amazon Rainforest, eventually resulting in a transition to caatinga vegetation in the eastern Amazon region.
Desertification
Desertification is a consequence of global warming in some environments. Desert soils contain little humus and support little vegetation; as a result, transition to desert ecosystems is typically associated with a release of carbon to the atmosphere.
Modelling results
The global warming projections contained in the IPCC's Fourth Assessment Report (AR4) include carbon cycle feedbacks. The authors of AR4 noted, however, that scientific understanding of carbon cycle feedbacks was poor. Projections in AR4 were based on a range of greenhouse gas emissions scenarios and suggested warming between the late 20th and late 21st century of 1.1 to 6.4 °C. This is the "likely" range (greater than 66% probability), based on the expert judgement of the IPCC's authors. The authors noted that the lower end of the "likely" range appeared to be better constrained than the upper end, in part due to carbon cycle feedbacks. The American Meteorological Society has commented that more research is needed to model the effects of carbon cycle feedbacks in climate change projections.
Isaksen et al. (2011) considered how future methane release from the Arctic might contribute to global warming. Their study suggested that if global methane emissions were to increase by a factor of 2.5 to 5.2 above (then) current emissions, the indirect contribution to radiative forcing would be about 250% and 400%, respectively, of the forcing that can be directly attributed to methane. This amplification of methane warming is due to projected changes in atmospheric chemistry.
Schaefer et al. (2011) considered how carbon released from permafrost might contribute to global warming. Their study projected changes in permafrost based on a medium greenhouse gas emissions scenario (SRES A1B). According to the study, by 2200 the permafrost feedback might contribute 190 ± 64 gigatonnes of carbon cumulatively to the atmosphere. Schaefer et al. commented that this estimate may be low.
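For a sense of scale, the Schaefer et al. estimate can be converted into an atmospheric concentration under two crude assumptions: that all of the carbon arrives as CO2, and that none of it is reabsorbed. Roughly 2.13 GtC corresponds to 1 ppm of atmospheric CO2:

    GT_C_PER_PPM = 2.13   # gigatonnes of carbon per ppm of atmospheric CO2

    for gtc in (190 - 64, 190, 190 + 64):   # lower bound, central, upper bound
        print(f"{gtc:3d} GtC -> ~{gtc / GT_C_PER_PPM:.0f} ppm CO2 if all airborne")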
Implications for climate policy
Uncertainty over climate change feedbacks has implications for climate policy. For instance, uncertainty over carbon cycle feedbacks may affect targets for reducing greenhouse gas emissions. Emissions targets are often based on a target stabilization level of atmospheric greenhouse gas concentrations, or on a target for limiting global warming to a particular magnitude. Both of these targets (concentrations or temperatures) require an understanding of future changes in the carbon cycle. If models incorrectly project future changes in the carbon cycle, then concentration or temperature targets could be missed. For example, if models underestimate the amount of carbon released into the atmosphere due to positive feedbacks (e.g., due to melting permafrost), then they may also underestimate the extent of emissions reductions necessary to meet a concentration or temperature target.
Cloud feedback
Warming is expected to change the distribution and type of clouds. Seen from below, clouds emit infrared radiation back to the surface, exerting a warming effect; seen from above, clouds reflect sunlight and emit infrared radiation to space, exerting a cooling effect. Whether the net effect is warming or cooling depends on details such as the type and altitude of the cloud: high clouds tend to trap more heat and therefore produce a positive feedback, while low clouds normally reflect more sunlight and so produce a negative feedback. These details were poorly observed before the advent of satellite data and are difficult to represent in climate models.
Gas release
Release of gases of biological origin may be affected by global warming, but research into such effects is at an early stage. Some of these gases, such as nitrous oxide released from peat, directly affect climate. Others, such as dimethyl sulfide released from oceans, have indirect effects.
Ice-albedo feedback
When ice melts, land or open water takes its place. Both land and open water are on average less reflective than ice and thus absorb more solar radiation. This causes more warming, which in turn causes more melting, and the cycle continues. During times of global cooling, additional ice increases the reflectivity, which reduces the absorption of solar radiation, resulting in more cooling in a continuing cycle. This is considered a faster feedback mechanism.
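The loop described above behaves like a geometric series: each increment of warming melts more ice, lowering albedo and adding some fraction of that increment again. A toy model with an assumed, illustrative gain shows how the series converges rather than running away as long as the gain is below 1:

    def albedo_feedback_total(initial_warming, gain, steps=50):
        # Each round of melting adds `gain` times the previous increment;
        # for gain < 1 the sum converges to initial_warming / (1 - gain).
        total, increment = 0.0, initial_warming
        for _ in range(steps):
            total += increment
            increment *= gain
        return total

    print(albedo_feedback_total(1.0, 0.3))   # ~1.43, i.e. 1 / (1 - 0.3)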
Albedo change is also the main reason why the IPCC predicts polar temperatures in the Northern Hemisphere will rise up to twice as much as those of the rest of the world, in a process known as polar amplification. In September 2007, the Arctic sea ice area reached about half the size of the average summer minimum area between 1979 and 2000. Also in September 2007, Arctic sea ice retreated far enough for the Northwest Passage to become navigable to shipping for the first time in recorded history. The record losses of 2007 and 2008 may, however, be temporary. Mark Serreze of the US National Snow and Ice Data Center views 2030 as a "reasonable estimate" for when the summertime Arctic ice cap might be ice-free. The polar amplification of global warming is not predicted to occur in the Southern Hemisphere. Antarctic sea ice reached its greatest extent on record since the beginning of observation in 1979, but the gain in ice in the south is exceeded by the loss in the north; the trend for global sea ice, both hemispheres combined, is clearly a decline.
Ice loss may have internal feedback processes, as melting of ice over land can cause eustatic sea level rise, potentially destabilizing ice shelves and inundating coastal ice masses such as glacier tongues. A further potential feedback cycle exists through earthquakes caused by isostatic rebound, which can destabilize ice shelves, glaciers, and ice caps.
The ice-albedo feedback in some sub-Arctic forests is also changing, as stands of larch (which shed their needles in winter, allowing sunlight to reflect off the snow in spring and fall) are being replaced by spruce trees (which retain their dark needles all year).
Water vapor feedback
If the atmosphere warms, the saturation vapor pressure increases, and the amount of water vapor in the atmosphere will tend to increase. Since water vapor is a greenhouse gas, the increased water vapor content makes the atmosphere warm further; this warming causes the atmosphere to hold still more water vapor (a positive feedback), and so on until other processes stop the feedback loop. The result is a much larger greenhouse effect than that due to CO2 alone. Although this feedback process causes an increase in the absolute moisture content of the air, the relative humidity stays nearly constant or even decreases slightly because the air is warmer. Climate models incorporate this feedback. Water vapor feedback is strongly positive, with most evidence supporting a magnitude of 1.5 to 2.0 W/m2/K, sufficient to roughly double the warming that would otherwise occur. This is considered a faster feedback mechanism.
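The "roughly double" figure follows from simple feedback algebra: dividing the water vapor feedback by the no-feedback (Planck) restoring response, commonly quoted as about 3.2 W/m2/K and assumed here, gives a gain g, and equilibrium warming is amplified by 1/(1 - g):

    PLANCK_RESPONSE = 3.2   # W/m^2 per K; assumed no-feedback restoring rate

    for feedback in (1.5, 2.0):   # W/m^2/K, the range quoted above
        gain = feedback / PLANCK_RESPONSE
        print(f"{feedback} W/m2/K -> warming multiplied by ~{1 / (1 - gain):.1f}")

With these assumed values the amplification comes out between about 1.9x and 2.7x, consistent with "roughly double".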
Negative
Le Chatelier's principle
Following Le Chatelier's principle, the chemical equilibrium of the Earth's carbon cycle will shift in response to anthropogenic CO2 emissions. The primary driver of this is the ocean, which absorbs anthropogenic CO2 via the so-called solubility pump. At present this accounts for only about one-third of current emissions, but ultimately most (~75%) of the CO2 emitted by human activities will dissolve in the ocean over a period of centuries: "A better approximation of the lifetime of fossil fuel CO2 for public discussion might be 300 years, plus 25% that lasts forever". However, the rate at which the ocean will take up CO2 in the future is less certain, and will be affected by stratification induced by warming and, potentially, by changes in the ocean's thermohaline circulation.
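Archer's "300 years, plus 25% that lasts forever" can be read as a two-term impulse response. The sketch below is purely illustrative, treating 75% of a CO2 pulse as decaying with a 300-year e-folding time and 25% as effectively permanent:

    import math

    def airborne_fraction(years):
        # Toy impulse response: 75% of the pulse decays on a ~300-year
        # timescale, 25% persists. Not a fitted carbon-cycle model.
        return 0.75 * math.exp(-years / 300.0) + 0.25

    for t in (0, 100, 300, 1000):
        print(f"year {t:4d}: ~{airborne_fraction(t):.0%} of the pulse remains")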
Chemical weathering
Chemical weathering over the geological long term acts to remove CO2 from the atmosphere. With current global warming, weathering is increasing, demonstrating significant feedbacks between climate and the Earth's surface. Biosequestration also captures and stores CO2 through biological processes. The formation of shells by organisms in the ocean, over a very long time, removes CO2 from the oceans; the complete conversion of CO2 to limestone takes thousands to hundreds of thousands of years.
Net Primary Productivity
Net primary productivity changes in response to increased CO2, as plant photosynthesis increases with rising CO2 concentrations. This effect is, however, swamped by other changes in the biosphere due to global warming.
Lapse rate
The atmosphere's temperature decreases with height in the troposphere. Since emission of infrared radiation varies with temperature, longwave radiation escaping to space from the relatively cold upper atmosphere is less than that emitted toward the ground from the lower atmosphere. Thus, the strength of the greenhouse effect depends on the atmosphere's rate of temperature decrease with height. Both theory and climate models indicate that global warming will reduce this rate of temperature decrease, producing a negative lapse rate feedback that weakens the greenhouse effect. Measurements of the rate of temperature change with height are very sensitive to small errors in observations, making it difficult to establish whether the models agree with observations.
Blackbody radiation
As the temperature of a black body increases, its emission of infrared radiation increases with the fourth power of its absolute temperature, according to the Stefan–Boltzmann law. This increases the amount of outgoing radiation as the Earth warms. The impact of this negative feedback effect is included in the global climate models summarized by the IPCC. This is also called the Planck feedback.
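Differentiating the Stefan–Boltzmann law gives the strength of this restoring effect directly: the extra outgoing radiation per kelvin of warming is 4σT³. Evaluated at Earth's effective radiating temperature, taken here to be about 255 K:

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

    def planck_feedback(temp_k):
        # d(sigma * T^4)/dT = 4 * sigma * T^3: the additional outgoing
        # longwave radiation per kelvin of warming.
        return 4.0 * SIGMA * temp_k ** 3

    print(f"{planck_feedback(255.0):.2f} W/m^2 per K")   # about 3.8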
- Larry D. Dyke, Wendy E. Sladen (2010). "Permafrost and Peatland Evolution in the Northern Hudson Bay Lowland, Manitoba". ARCTIC 63. doi:10.14430/arctic3332.
- Climate feedback IPCC Third Assessment Report, Appendix I - Glossary
- US NRC (2012), Climate Change: Evidence, Impacts, and Choices, US National Research Council (US NRC), p.9. Also available as PDF
- Understanding Climate Change Feedbacks, U.S. National Academy of Sciences
- IPCC. "Climate Change 2007: Synthesis Report. Contribution of Working Groups I, II and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Pg 53" (PDF).
- Cox, Peter M.; Richard A. Betts; Chris D. Jones; Steven A. Spall; Ian J. Totterdell (November 9, 2000). "Acceleration of global warming due to carbon-cycle feedbacks in a coupled climate model" (abstract). Nature 408 (6809): 184–7. doi:10.1038/35041539. PMID 11089968. Retrieved 2008-01-02.
- Friedlingstein, P.; P. Cox; R. Betts; L. Bopp; W. von Bloh; V. Brovkin; P. Cadule; S. Doney; M. Eby; I. Fung; G. Bala; J. John; C. Jones; F. Joos; T. Kato; M. Kawamiya; W. Knorr; K. Lindsay; H.D. Matthews; T. Raddatz; P. Rayner; C. Reick; E. Roeckner; K.G. Schnitzler; R. Schnur; K. Strassmann; A.J. Weaver; C. Yoshikawa; N. Zeng (2006). "Climate–Carbon Cycle Feedback Analysis: Results from the C4MIP Model Intercomparison". Journal of Climate 19 (14): 3337–53. Bibcode:2006JCli...19.3337F. doi:10.1175/JCLI3800.1. Retrieved 2008-01-02. (subscription required)
- "5.5C temperature rise in next century". The Guardian. 2003-05-29. Retrieved 2008-01-02.
- Tim Radford (2005-09-08). "Loss of soil carbon 'will speed global warming'". The Guardian. Retrieved 2008-01-02.
- Schulze, E. Detlef; Annette Freibauer (September 8, 2005). "Environmental science: Carbon unlocked from soils". Nature 437 (7056): 205–6. Bibcode:2005Natur.437..205S. doi:10.1038/437205a. PMID 16148922. Retrieved 2008-01-02.
- Freeman, Chris; Ostle, Nick; Kang, Hojeong (2001). "An enzymic 'latch' on a global carbon store". Nature 409 (6817): 149. doi:10.1038/35051650. PMID 11196627.
- Freeman, Chris; et al. (2004). "Export of dissolved organic carbon from peatlands under elevated carbon dioxide levels". Nature 430 (6996): 195–8. Bibcode:2004Natur.430..195F. doi:10.1038/nature02707. PMID 15241411.
- Connor, Steve (2004-07-08). "Peat bog gases 'accelerate global warming'". The Independent.
- Kvenvolden, K. A. (1988). "Methane Hydrates and Global Climate". Global Biogeochemical Cycles 2 (3): 221. Bibcode:1988GBioC...2..221K. doi:10.1029/GB002i003p00221.
- Zimov, A.; Schuur, A.; Chapin Fs, D. (Jun 2006). "Climate change. Permafrost and the global carbon budget". Science 312 (5780): 1612–1613. doi:10.1126/science.1128908. ISSN 0036-8075. PMID 16778046.
- Archer, D (2007). "Methane hydrate stability and anthropogenic climate change". Biogeosciences Discuss 4: 993–1057. doi:10.5194/bgd-4-993-2007.
- Fred Pearce (2005-08-11). "Climate warning as Siberia melts". New Scientist. Retrieved 2007-12-30.
- Ian Sample (2005-08-11). "Warming Hits 'Tipping Point'". Guardian. Retrieved 2007-12-30.
- "Permafrost Threatened by Rapid Retreat of Arctic Sea Ice, NCAR Study Finds" (Press release). UCAR. 10 June 2008. Retrieved 2009-05-25.
- Lawrence, D. M.; Slater, A. G.; Tomas, R. A.; Holland, M. M.; Deser, C. (2008). "Accelerated Arctic land warming and permafrost degradation during rapid sea ice loss" (PDF). Geophysical Research Letters 35 (11): L11506. Bibcode:2008GeoRL..3511506L. doi:10.1029/2008GL033985.
- Connor, Steve (September 23, 2008). "Exclusive: The methane time bomb". The Independent. Retrieved 2008-10-03.
- Connor, Steve (September 25, 2008). "Hundreds of methane 'plumes' discovered". The Independent. Retrieved 2008-10-03.
- N. Shakhova, I. Semiletov, A. Salyuk, D. Kosmach, and N. Bel’cheva (2007). "Methane release on the Arctic East Siberian shelf" (PDF). Geophysical Research Abstracts 9: 01071.
- IPCC (2001d). "4.14". In R.T. Watson and the Core Writing Team (eds.). Question 4. Climate Change 2001: Synthesis Report. A Contribution of Working Groups I, II, and III to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Print version: Cambridge University Press, Cambridge, U.K., and New York, N.Y., U.S.A.. This version: GRID-Arendal website. Retrieved 2011-05-18.
- IPCC (2001d). "Box 2-1: Confidence and likelihood statements". In R.T. Watson and the Core Writing Team (eds.). Question 2. Climate Change 2001: Synthesis Report. A Contribution of Working Groups I, II, and III to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Print version: Cambridge University Press, Cambridge, U.K., and New York, N.Y., U.S.A.. This version: GRID-Arendal website. Retrieved 2011-05-18.
- Clark, P.U.; et al. (2008). "Executive Summary" (PDF). Abrupt Climate Change. A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research (PDF). U.S. Geological Survey, Reston, VA. p. 2. Retrieved 2011-05-18.
- Clark, P.U.; et al. (2008). "Chapter 1: Introduction: Abrupt Changes in the Earth's Climate System" (PDF). Abrupt Climate Change. A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research (PDF). U.S. Geological Survey, Reston, VA. p. 12. Retrieved 2011-05-18.
- Heimann, Martin; Markus Reichstein (2008-01-17). "Terrestrial ecosystem carbon dynamics and climate feedbacks". Nature 451 (7176): 289–292. Bibcode:2008Natur.451..289H. doi:10.1038/nature06591. PMID 18202646. Retrieved 2010-03-15.
- Ise, T.; Dunn, A. L.; Wofsy, S. C.; Moorcroft, P. R. (2008). "High sensitivity of peat decomposition to climate change through water-table feedback". Nature Geoscience 1 (11): 763. Bibcode:2008NatGe...1..763I. doi:10.1038/ngeo331.
- Cook, K. H.; Vizy, E. K. (2008). "Effects of Twenty-First-Century Climate Change on the Amazon Rain Forest". Journal of Climate 21 (3): 542–821. Bibcode:2008JCli...21..542C. doi:10.1175/2007JCLI1838.1.
- Enquist, B. J.; Enquist, C. A. F. (2011). "Long-term change within a Neotropical forest: assessing differential functional and floristic responses to disturbance and drought". Global Change Biology 17 (3): 1408. doi:10.1111/j.1365-2486.2010.02326.x.
- "Climate Change and Fire". David Suzuki Foundation. Retrieved 2007-12-02.
- "Global warming : Impacts : Forests". United States Environmental Protection Agency. 2000-01-07. Archived from the original on 2007-02-19. Retrieved 2007-12-02.
- "Feedback Cycles: linking forests, climate and landuse activities". Woods Hole Research Center. Archived from the original on 2007-10-25. Retrieved 2007-12-02.
- Schlesinger, W. H.; Reynolds, J. F.; Cunningham, G. L.; Huenneke, L. F.; Jarrell, W. M.; Virginia, R. A.; Whitford, W. G. (1990). "Biological Feedbacks in Global Desertification". Science 247 (4946): 1043–1048. Bibcode:1990Sci...247.1043S. doi:10.1126/science.247.4946.1043. PMID 17800060.
- Meehl, G.A.; et al., "Ch 10: Global Climate Projections", Sec 10.5.4.6: Synthesis of Projected Global Temperature at Year 2100, in IPCC AR4 WG1 2007
- Solomon; et al., "Technical Summary", TS.6.4.3: Global Projections: Key uncertainties, in IPCC AR4 WG1 2007.
- AMS Council (20 August 2012), 2012 American Meteorological Society (AMS) Information Statement on Climate Change, Boston, MA, USA: AMS
- Isaksen, Ivar S. A.; Michael Gauss; Gunnar Myhre; Katey M. Walter Anthony; Carolyn Ruppel (20 April 2011). "Strong atmospheric chemistry feedback to climate warming from Arctic methane emissions" (PDF). Global Biogeochemical Cycles 25 (2). Bibcode:2011GBioC..25B2002I. doi:10.1029/2010GB003845.
- Schaefer, Kevin; Zhang, Tingjun; Bruhwiler, Lori; Barrett, Andrew P. (2011). "Amount and timing of permafrost carbon release in response to climate warming". Tellus Series B 63 (2): 165–180. Bibcode:2011TellB..63..165S. doi:10.1111/j.1600-0889.2011.00527.x.
- Meehl, G.A.; et al., "Ch 10: Global Climate Projections", Sec 10.4.1: Carbon Cycle/Vegetation Feedbacks, in IPCC AR4 WG1 2007
- Soden, B. J.; Held, I. M. (2006). "An Assessment of Climate Feedbacks in Coupled Ocean–Atmosphere Models". Journal of Climate 19 (14): 3354. Bibcode:2006JCli...19.3354S. doi:10.1175/JCLI3799.1.
  "Interestingly, the true feedback is consistently weaker than the constant relative humidity value, implying a small but robust reduction in relative humidity in all models on average … clouds appear to provide a positive feedback in all models."
- Repo, M. E.; Susiluoto, S.; Lind, S. E.; Jokinen, S.; Elsakov, V.; Biasi, C.; Virtanen, T.; Martikainen, P. J. (2009). "Large N2O emissions from cryoturbated peat soil in tundra". Nature Geoscience 2 (3): 189. Bibcode:2009NatGe...2..189R. doi:10.1038/ngeo434.
- Simó, R.; Dachs, J. (2002). "Global ocean emission of dimethylsulfide predicted from biogeophysical data". Global Biogeochemical Cycles 16 (4): 1018. Bibcode:2002GBioC..16d..26S. doi:10.1029/2001GB001829.
- Stocker, T.F., Clarke, G.K.C., Le Treut, H., Lindzen, R.S., Meleshko, V.P., Mugara, R.K., Palmer, T.N., Pierrehumbert, R.T., Sellers, P.J., Trenberth, K.E., Willebrand, J. (2001). "Chapter 7: Physical Climate Processes and Feedbacks" (PDF). In Manabe, S., Mason, P. Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (Full free text). Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press. pp. 445–448. ISBN 0-521-01495-6.
- Hansen, J., "2008: Tipping point: Perspective of a climatologist.", Wildlife Conservation Society/Island Press, 2008. Retrieved 2010.
- "The cryosphere today". University of Illinois at Urbana-Champagne Polar Research Group. Retrieved 2008-01-02.
- "Arctic Sea Ice News Fall 2007". National Snow and Ice Data Center. Retrieved 2008-01-02..
- "Arctic ice levels at record low opening Northwest Passage". Wikinews. September 16, 2007.
- "Avoiding dangerous climate change" (PDF). The Met Office. 2008. p. 9. Retrieved August 29, 2008.
- Adam, D. (2007-09-05). "Ice-free Arctic could be here in 23 years". The Guardian. Retrieved 2008-01-02.
- Eric Steig and Gavin Schmidt. "Antarctic cooling, global warming?". RealClimate. Retrieved 2008-01-20.
- "Southern hemisphere sea ice area". Cryosphere Today. Retrieved 2008-01-20.
- "Global sea ice area". Cryosphere Today. Retrieved 2008-01-20.
- Science Magazine February 19, 2009
- Archer, David (2005). "Fate of fossil fuel CO2 in geologic time" (PDF). Journal of Geophysical Research 110: C09S05. Bibcode:2005JGRC..11009S05A. doi:10.1029/2004JC002625.
- Sigurdur R. Gislason, Eric H. Oelkers, Eydis S. Eiriksdottir, Marin I. Kardjilov, Gudrun Gisladottir, Bergur Sigfusson, Arni Snorrason, Sverrir Elefsen, Jorunn Hardardottir, Peter Torssander, Niels Oskarsson (2009). "Direct evidence of the feedback between climate and weathering". Earth and Planetary Science Letters 277 (1-2): 213–222. Bibcode:2009E&PSL.277..213G. doi:10.1016/j.epsl.2008.10.018.
- The Carbon Cycle, What Goes Around Comes Around by John Arthur Harrison, Ph.D.
- Prologue: The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth's Climate by David Archer
- Cramer, W.; Bondeau, A.; Woodward, F. I.; Prentice, I. C.; Betts, R. A.; Brovkin, V.; Cox, P. M.; Fisher, V.; Foley, J. A.; Friend, A. D.; Kucharik, C.; Lomas, M. R.; Ramankutty, N.; Sitch, S.; Smith, B.; White, A.; Young-Molling, C. (2001). "Global response of terrestrial ecosystem structure and function to CO2 and climate change: results from six dynamic global vegetation models". Global Change Biology 7 (4): 357. doi:10.1046/j.1365-2486.2001.00383.x.
- National Research Council Panel on Climate Change Feedbacks (2003). Understanding climate change feedbacks (Limited preview). Washington D.C., United States: National Academies Press. ISBN 978-0-309-09072-8.
- A.E. Dessler & S.C. Sherwood (20 February 2009). "A matter of humidity" (PDF). Science 323 (5917): 1020–1021. doi:10.1126/science.1171264. PMID 19229026.
- Yang, Zong-Liang. "Chapter 2: The global energy balance" (PDF). University of Texas. Retrieved 2010-02-15.
- IPCC AR4 WG1 (2007), Solomon, S.; Qin, D.; Manning, M.; Chen, Z.; Marquis, M.; Averyt, K.B.; Tignor, M.; and Miller, H.L., ed., Climate Change 2007: The Physical Science Basis, Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88009-1 (pb: 978-0-521-70596-7)
- Amplification of Global Warming by Carbon-Cycle Feedback Significantly Less Than Thought ScienceDaily, Jan. 28, 2010
- Arctic permafrost leaking methane at record levels guardian.co.uk, Thursday 14 January 2010
- Chapter 7. Physical Climate Processes and Feedbacks IPCC Third Assessment Report
- CO2: The Thermostat that Controls Earth's Temperature by NASA, Goddard Institute for Space Studies, October, 2010
- Deniers delight — a negative climate feedback! from Climate Progress, July 28, 2008
- "Global warming 20 years later: tipping points near" (2008) PDF, address to National Press Club, and House Select Committee on Energy Independence & Global warming, Washington DC [44 pages]:
- Global Warming: Climate Feedback
- More Climate Feedback Loops Past Peak, November 27, 2007
- Tipping point: Perspective of a climatologist. In State of the Wild 2008-2009: A Global Portrait of Wildlife, Wildlands, and Oceans. W. Woods, Ed. Wildlife Conservation Society/Island Press, pp. 6–15.
- What are ‘climate feedbacks’? Big Picture TV video February 20, 2007, David Wasdell, Director of the Meridian Programme
- How does climate change happen? (Part 1) Big Picture TV video February 20, 2007, David Wasdell, Director of the Meridian Programme
- How does climate change happen? (Part 2) Big Picture TV video February 20, 2007, David Wasdell, Director of the Meridian Programme
- Understanding Climate Change Feedbacks by Board on Atmospheric Sciences and Climate 2003 online text book | https://en.wikipedia.org/wiki/Climate_change_feedback |
4.15625 |
Color blindness, or color vision deficiency, is the inability or decreased ability to see color, or perceive color differences, under normal lighting conditions. Color blindness affects a significant percentage of the population. There is no actual blindness, but there is a deficiency of color vision. The most usual cause is a fault in the development of one or more sets of retinal cones that perceive color in light and transmit that information to the optic nerve. This type of color blindness is usually a sex-linked condition: the genes that produce photopigments are carried on the X chromosome, and if some of these genes are missing or damaged, color blindness is expressed in males with a higher probability than in females, because males have only one X chromosome, whereas females have two, and a functional gene on only one of them is sufficient to yield the necessary photopigments.
Color blindness can also be produced by physical or chemical damage to the eye, the optic nerve, or parts of the brain. For example, people with achromatopsia suffer from a completely different disorder, but are nevertheless unable to see colors.
Color blindness is usually classified as a mild disability. There are occasional circumstances where it is an advantage: some studies conclude that color blind people are better at penetrating certain color camouflages. Such findings may give an evolutionary reason for the high prevalence of red–green color blindness. There is also a study suggesting that people with some types of color blindness can distinguish colors that people with normal color vision are not able to distinguish.
- 1 Background
- 2 Classification
- 3 Causes
- 4 Diagnosis
- 5 Management
- 6 Epidemiology
- 7 Society and culture
- 8 Problems and compensations
- 9 See also
- 10 References
- 11 Further reading
- 12 External links
The first scientific paper on the subject of color blindness, Extraordinary facts relating to the vision of colours, was published by the English chemist John Dalton in 1798 after the realization of his own color blindness. Because of Dalton's work, the general condition has been called daltonism, although in English this term is now used only for deuteranopia.
Color blindness affects a large number of individuals, with protanopia and deuteranopia being the most common types. In individuals with Northern European ancestry, as many as 8 percent of men and 0.4 percent of women experience congenital color deficiency. The typical human retina contains two kinds of light-sensitive cells: the rod cells (active in low light) and the cone cells (active in normal daylight). Normally, there are three kinds of cone cells, each containing a different pigment, which are activated when the pigments absorb light. The spectral sensitivities of the cones differ; one is most sensitive to short wavelengths, one to medium wavelengths, and the third to medium-to-long wavelengths within the visible spectrum, with their peak sensitivities in the blue, green, and yellow-green regions of the spectrum, respectively. The absorption spectra of the three systems overlap, and combine to cover the visible spectrum. These receptors are known as short (S), medium (M), and long (L) wavelength cones, but are also often referred to as blue, green, and red cones, although this terminology is inaccurate.
The receptors are each responsive to a wide range of wavelengths. For example, the long wavelength, "red", receptor has its peak sensitivity in the yellow-green, some way from the red end (longest wavelength) of the visible spectrum. The sensitivity of normal color vision actually depends on the overlap between the absorption ranges of the three systems: different colors are recognized when the different types of cone are stimulated to different degrees. Red light, for example, stimulates the long wavelength cones much more than either of the others, and reducing the wavelength causes the other two cone systems to be increasingly stimulated, causing a gradual change in hue.
Many of the genes involved in color vision are on the X chromosome, making color blindness much more common in males than in females, because males have only one X chromosome while females have two. Because this is an X-linked trait, an estimated 2–3% of women have a fourth color cone type and can be considered tetrachromats. One such woman has been reported to be a true or functional tetrachromat, able to discriminate colors that most other people cannot.
Color vision deficiencies can be classified as acquired or inherited.
- Acquired: Diseases, drugs (e.g., Plaquenil), and chemicals may cause color blindness.
- Inherited: There are three types of inherited or congenital color vision deficiencies: monochromacy, dichromacy, and anomalous trichromacy.
- Monochromacy, also known as "total color blindness", is the lack of ability to distinguish colors (and thus the person views everything as if it were on a black and white television); caused by cone defect or absence. Monochromacy occurs when two or all three of the cone pigments are missing and color and lightness vision is reduced to one dimension.
- Rod monochromacy (achromatopsia) is an exceedingly rare, nonprogressive inability to distinguish any colors as a result of absent or nonfunctioning retinal cones. It is associated with light sensitivity (photophobia), involuntary eye oscillations (nystagmus), and poor vision.
- Cone monochromacy is a rare total color blindness that is accompanied by relatively normal vision, electroretinogram, and electrooculogram. Cone monochromacy can also result from having more than one type of dichromatic color blindness: since it reflects the loss or impairment of more than one cone type in the retina, a person with, for instance, both protanopia and tritanopia is considered to have cone monochromacy.
- Dichromacy is a moderately severe color vision defect in which one of the three basic color mechanisms is absent or not functioning. It is hereditary and, in the case of protanopia or deuteranopia, sex-linked, affecting predominantly males. Dichromacy occurs when one of the cone pigments is missing and color is reduced to two dimensions. Dichromacy conditions are labeled based on whether the "first" (Greek: prot-, referring to the red photoreceptors), "second" (deuter-, the green), or "third" (trit-, the blue) photoreceptors are affected.
- Protanopia is a severe type of color vision deficiency caused by the complete absence of red retinal photoreceptors. Protans have difficulty distinguishing between blue and green colors and also between red and green colors. It is a form of dichromatism in which the subject can only perceive light wavelengths from 400 to 650 nm, instead of the usual 700 nm. Pure reds cannot be seen, instead appearing black; purple colors cannot be distinguished from blues; more orange-tinted reds may appear as very dim yellows, and all orange-yellow-green shades of too long a wavelength to stimulate the blue receptors appear as a similar yellow hue. It is hereditary, sex-linked, and present in 1% of males.
- Deuteranopia is a type of color vision deficiency where the green photoreceptors are absent. It affects hue discrimination in the same way as protanopia, but without the dimming effect. Like protanopia, it is hereditary, sex-linked, and found in about 1% of the male population.
- Tritanopia is a very rare color vision disturbance in which there are only two cone pigments present and a total absence of blue retinal receptors. Blues appear greenish, yellows and oranges appear pinkish, and purple colors appear deep red. It is related to chromosome 7. Unlike protanopia and deuteranopia, tritanopia and tritanomaly are not sex-linked traits and can be acquired rather than inherited and can be reversed in some cases.
- Anomalous trichromacy is a common type of inherited color vision deficiency, occurring when one of the three cone pigments is altered in its spectral sensitivity.
- Protanomaly is a mild color vision defect in which an altered spectral sensitivity of red retinal receptors (closer to the green receptor response) results in poor red–green hue discrimination. It is hereditary, sex-linked, and present in 1% of males. The difference from protanopia is that here the L-cone is present but malfunctioning, whereas in protanopia the L-cone is completely missing.
- Deuteranomaly, caused by a similar shift in the green retinal receptors, is by far the most common type of color vision deficiency, mildly affecting red–green hue discrimination in 5% of European males. It is hereditary and sex-linked. The difference from deuteranopia is that here the green-sensitive cones are not missing but malfunctioning.
- Tritanomaly is a rare, hereditary color vision deficiency affecting blue–green and yellow–red/pink hue discrimination. It is related to chromosome 7. The difference from tritanopia is that here the S-cone is malfunctioning but not missing.
By clinical appearance
Based on clinical appearance, color blindness may be described as total or partial. Total color blindness is much less common than partial color blindness. There are two major types of partial color blindness: those who have difficulty distinguishing between red and green, and those who have difficulty distinguishing between blue and yellow.
- Total color blindness
- Partial color blindness
  - Red–green color blindness
    - Dichromacy (protanopia and deuteranopia)
    - Anomalous trichromacy (protanomaly and deuteranomaly)
  - Blue–yellow color blindness
    - Dichromacy (tritanopia)
    - Anomalous trichromacy (tritanomaly)
Immunofluorescent imaging often relies on red–green color coding, which is difficult for individuals with red–green color blindness (protanopia or deuteranopia) to discriminate. Replacing red with magenta, or green with turquoise, improves visibility for such individuals.
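The red-to-magenta substitution is straightforward to apply to an RGB image: copying the red channel into the blue channel turns pure red into magenta while leaving green untouched. A minimal sketch with NumPy; the function name and array layout are illustrative, not taken from any cited tool:

    import numpy as np

    def red_to_magenta(img):
        # img: (H, W, 3) RGB array. Strongly red pixels gain an equal blue
        # component, so red regions render as magenta, which red-green
        # colorblind viewers can separate from green.
        out = img.copy()
        out[..., 2] = np.maximum(out[..., 2], out[..., 0])
        return out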
Color blindness can be inherited. It is most commonly inherited from mutations on the X chromosome, but the mapping of the human genome has shown there are many causative mutations: mutations capable of causing color blindness originate in at least 19 different chromosomes and 56 different genes (as shown online at the Online Mendelian Inheritance in Man (OMIM) database at Johns Hopkins University). Two of the most common inherited forms of color blindness are protanopia and deuteranopia. Among the most common color vision defects is red–green deficiency, present in about 8 percent of males and 0.5 percent of females of Northern European ancestry.
Some of the inherited diseases known to cause color blindness are:
- cone dystrophy
- cone-rod dystrophy
- achromatopsia (a.k.a. rod monochromatism, stationary cone dystrophy or cone dysfunction syndrome)
- blue cone monochromatism (a.k.a. blue cone monochromacy or X-linked achromatopsia)
- Leber's congenital amaurosis
- retinitis pigmentosa (initially affects the rods, but can later progress to the cones and therefore cause color blindness)
Inherited color blindness can be congenital (from birth), or it can commence in childhood or adulthood. Depending on the mutation, it can be stationary, that is, remain the same throughout a person's lifetime, or progressive. As progressive phenotypes involve deterioration of the retina and other parts of the eye, certain forms of color blindness can progress to legal blindness, i.e., an acuity of 6/60 (20/200) or worse, and often leave a person with complete blindness.
Color blindness always pertains to the cone photoreceptors in retinas, as the cones are capable of detecting the color frequencies of light.
About 8 percent of males, but only 0.5 percent of females, are color blind in some way, whether it involves one color, a color combination, or another mutation. The reason males are at greater risk of inheriting an X-linked mutation is that males have only one X chromosome (XY, with the Y chromosome carrying altogether different genes than the X chromosome), while females have two (XX); if a woman inherits a normal X chromosome in addition to the one carrying the mutation, she will not display the mutation, whereas men have no second X chromosome to override the one that carries it. If 5% of variants of a given gene are defective, the probability of a single copy being defective is 5%, but the probability that two copies are both defective is 0.05 × 0.05 = 0.0025, or just 0.25%.
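That last calculation generalizes to any X-linked recessive allele frequency, assuming Hardy–Weinberg proportions and ignoring the effects of X-inactivation; a small sketch:

    def xlinked_prevalence(q):
        # q: frequency of the defective allele. Males (one X) express the
        # trait with probability q; females (two X) need two copies: q**2.
        return q, q ** 2

    male, female = xlinked_prevalence(0.05)
    print(f"males: {male:.2%}, females: {female:.2%}")   # 5.00% vs 0.25%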
Other causes of color blindness include brain or retinal damage caused by shaken baby syndrome, accidents and other trauma which produce swelling of the brain in the occipital lobe, and damage to the retina caused by exposure to ultraviolet light (10–300 nm). Damage often presents itself later on in life.
Color blindness may also present itself in the spectrum of degenerative diseases of the eye, such as age-related macular degeneration, and as part of the retinal damage caused by diabetes. Vitamin A deficiency may also cause color blindness.
The different kinds of inherited color blindness result from partial or complete loss of function of one or more of the different cone systems. When one cone system is compromised, dichromacy results. The most frequent forms of human color blindness result from problems with either the middle or long wavelength sensitive cone systems, and involve difficulties in discriminating reds, yellows, and greens from one another. They are collectively referred to as "red–green color blindness", though the term is an over-simplification and is somewhat misleading. Other forms of color blindness are much more rare. They include problems in discriminating blues from greens and yellows from reds/pinks, and the rarest forms of all, complete color blindness or monochromacy, where one cannot distinguish any color from grey, as in a black-and-white movie or photograph.
Congenital color vision deficiencies are subdivided based on the number of primary hues needed to match a given sample in the visible spectrum.
Monochromacy is the condition of possessing only a single channel for conveying information about color. Monochromats possess a complete inability to distinguish any colors and perceive only variations in brightness. It occurs in two primary forms:
- Rod monochromacy, frequently called achromatopsia, where the retina contains no cone cells, so that in addition to the absence of color discrimination, vision at normal light intensities is difficult. While normally rare, achromatopsia is very common on the island of Pingelap, a part of Pohnpei state, Federated States of Micronesia, where it is called maskun: about 10% of the population there has it, and 30% are unaffected carriers. The island was devastated by a storm in the 18th century (an example of a genetic bottleneck), and one of the few male survivors carried a gene for achromatopsia. The population grew to several thousand before foreign troops introduced diseases to the island in the 1940s.
- Cone monochromacy is the condition of having both rods and cones, but only a single kind of cone. A cone monochromat can have good pattern vision at normal daylight levels, but will not be able to distinguish hues. Blue cone monochromacy (X chromosome) is caused by lack of functionality of L and M cones (red and green). It is encoded at the same place as red–green color blindness on the X chromosome. Peak spectral sensitivities are in the blue region of the visible spectrum (near 440 nm). People with this condition generally show nystagmus ("jiggling eyes"), photophobia (light sensitivity), reduced visual acuity, and myopia (nearsightedness). Visual acuity usually falls to the 20/50 to 20/400 range.
Protanopes, deuteranopes, and tritanopes are dichromats; that is, they can match any color they see with some mixture of just two primary colors (whereas humans are normally trichromats and require three primary colors). These individuals usually know they have a color vision problem, and it can affect their lives on a daily basis. Two percent of the male population exhibit severe difficulties distinguishing between red, orange, yellow, and green: certain pairs of colors that seem very different to a normal viewer appear to be the same color (or different shades of the same color) to such a dichromat. The terms protanopia, deuteranopia, and tritanopia come from Greek and literally mean "inability to see (anopia) with the first (prot-), second (deuter-), or third (trit-) [cone]", respectively.
- Protanopia (1% of males): Lacking the long-wavelength sensitive retinal cones, those with this condition are unable to distinguish between colors in the green–yellow–red section of the spectrum. They have a neutral point at a cyan-like wavelength around 492 nm (see spectral color for comparison); that is, they cannot discriminate light of this wavelength from white. For a protanope, the brightness of red, orange, and yellow is much reduced compared to normal. This dimming can be so pronounced that reds may be confused with black or dark gray, and red traffic lights may appear to be extinguished. Protanopes may learn to distinguish reds from yellows primarily on the basis of their apparent brightness or lightness, not on any perceptible hue difference. Violet, lavender, and purple are indistinguishable from various shades of blue because their reddish components are so dimmed as to be invisible: pink flowers, for example, reflecting both red light and blue light, may appear just blue to the protanope. A very few people have been found who have one normal eye and one protanopic eye. These unilateral dichromats report that with only their protanopic eye open, they see wavelengths shorter than the neutral point as blue and those longer than it as yellow; such unilateral dichromacy is extremely rare.
- Deuteranopia (1% of males): Lacking the medium-wavelength cones, those affected are again unable to distinguish between colors in the green–yellow–red section of the spectrum. Their neutral point is at a slightly longer wavelength, 498 nm, a more greenish hue of cyan. A deuteranope suffers the same hue discrimination problems as a protanope, but without the abnormal dimming. This form of color blindness is also known as daltonism, after John Dalton (whose diagnosis was confirmed as deuteranopia in 1995, some 150 years after his death, by DNA analysis of his preserved eyeball). Equivalent terms in Romance languages, such as daltonismo (Spanish, Portuguese and Italian), daltonisme (French), and daltonism (Romanian), are still used to describe color blindness in a broad sense or deuteranopia in a more restricted sense. Purple colors are not perceived as something opposite to spectral colors; all of these appear similar. Deuteranopic unilateral dichromats report that with only their deuteranopic eye open, they see wavelengths shorter than the neutral point as blue and longer than it as yellow.
- Tritanopia (less than 1% of males and females): Lacking the short-wavelength cones, those affected see short-wavelength colors (blue, indigo and a spectral violet) greenish and drastically dimmed, some of these colors even as black. Yellow is indistinguishable from pink, and purple colors are perceived as various shades of red. This form of color blindness is not sex-linked.
Anomalous trichromacy is the least serious type of color deficiency. People with protanomaly, deuteranomaly, or tritanomaly are trichromats, but the color matches they make differ from the normal. They are called anomalous trichromats. In order to match a given spectral yellow light, protanomalous observers need more red light in a red/green mixture than a normal observer, and deuteranomalous observers need more green. From a practical standpoint though, many protanomalous and deuteranomalous people have very little difficulty carrying out tasks that require normal color vision. Some may not even be aware that their color perception is in any way different from normal.
Protanomaly and deuteranomaly can be diagnosed using an instrument called an anomaloscope, which mixes spectral red and green lights in variable proportions, for comparison with a fixed spectral yellow. If this is done in front of a large audience of males, as the proportion of red is increased from a low value, first a small proportion of the audience will declare a match, while most will see the mixed light as greenish; these are the deuteranomalous observers. Next, as more red is added the majority will say that a match has been achieved. Finally, as yet more red is added, the remaining, protanomalous, observers will declare a match at a point where normal observers will see the mixed light as definitely reddish.
- Protanomaly (1% of males, 0.01% of females): Having a mutated form of the long-wavelength (red) pigment, whose peak sensitivity is at a shorter wavelength than in the normal retina, protanomalous individuals are less sensitive to red light than normal. This means that they are less able to discriminate colors, and they do not see mixed lights as having the same colors as normal observers. They also suffer from a darkening of the red end of the spectrum, which causes reds to reduce in intensity to the point where they can be mistaken for black. Both protanomaly and deuteranomaly are carried on the X chromosome.
- Deuteranomaly (the most common form: 6% of males, 0.4% of females): These individuals have a mutated form of the medium-wavelength (green) pigment, shifted towards the red end of the spectrum, resulting in reduced sensitivity to the green area of the spectrum. Unlike in protanomaly, the intensity of colors is unchanged. The deuteranomalous person is considered "green weak": for example, in the evening, dark green cars appear black to deuteranomalous people. Like protanomalous individuals, deuteranomalous individuals are poor at discriminating small differences in hues in the red, orange, yellow, and green region of the spectrum, and they make errors in naming hues in this region because the hues appear somewhat shifted towards green. One important difference between deuteranomalous and protanomalous individuals is that deuteranomalous individuals do not have the loss-of-brightness problem.
- Tritanomaly (equally rare for males and females, at about 0.01% of each): Having a mutated form of the short-wavelength (blue) pigment, shifted towards the green area of the spectrum. This is the rarest form of anomalous trichromacy. Unlike the other forms of anomalous trichromacy, the mutation for this color blindness is carried on chromosome 7 and is therefore equally prevalent in male and female populations. The OMIM gene code for this mutation is 304000 "Colorblindness, Partial Tritanomaly".
Total color blindness
Achromatopsia is strictly defined as the inability to see color. Although the term may refer to acquired disorders such as cerebral achromatopsia also known as color agnosia, it typically refers to congenital color vision disorders (i.e. more frequently rod monochromacy and less frequently cone monochromacy).
In cerebral achromatopsia, a person cannot perceive colors even though the eyes are capable of distinguishing them. Some sources do not consider these to be true color blindness, because the failure is of perception, not of vision. They are forms of visual agnosia.
Red–green color blindness
Protanopia, deuteranopia, protanomaly, and deuteranomaly are commonly inherited forms of red-green color blindness which affect a substantial portion of the human population. Those affected have difficulty with discriminating red and green hues due to the absence or mutation of the red or green retinal photoreceptors. It is sex-linked: genetic red–green color blindness affects males much more often than females, because the genes for the red and green color receptors are located on the X chromosome, of which males have only one and females have two. Females (46, XX) are red–green color blind only if both their X chromosomes are defective with a similar deficiency, whereas males (46, XY) are color blind if their single X chromosome is defective.
The gene for red–green color blindness is transmitted from a color blind male to all his daughters, who are heterozygous carriers and are usually unaffected. In turn, a carrier woman has a fifty percent chance of passing on a mutated X chromosome region to each of her male offspring. The sons of an affected male will not inherit the trait from him, since they receive his Y chromosome and not his (defective) X chromosome. Should an affected male have children with a carrier or colorblind woman, their daughters may be colorblind by inheriting an affected X chromosome from each parent.
Because one X chromosome is inactivated at random in each cell during a woman's development, it is possible for her to have four different cone types, as when a carrier of protanomaly has a child with a deuteranomalous man. Denoting the normal-vision alleles by P and D and the anomalous ones by p and d, the carrier is PD pD and the man is Pd. The daughter is either PD Pd or pD Pd. Suppose she is pD Pd. Each cell in her body expresses either her mother's chromosome pD or her father's Pd. Thus her red–green sensing will involve both the normal and the anomalous pigments for both colors. Such females are tetrachromats, since they require a mixture of four spectral lights to match an arbitrary light.
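The allele bookkeeping in this example can be enumerated mechanically. Using the same notation (P/D normal, p/d anomalous), this sketch lists the opsin genes each possible daughter carries; four distinct opsins is the potential-tetrachromat case described above:

    from itertools import product

    MOTHER_XS = ["PD", "pD"]   # the protanomaly carrier's two X chromosomes
    FATHER_XS = ["Pd"]         # the deuteranomalous man's single X

    for mx, fx in product(MOTHER_XS, FATHER_XS):
        opsins = sorted(set(mx + fx))
        kind = "potential tetrachromat" if len(opsins) == 4 else "trichromat"
        print(f"daughter {mx} {fx}: opsin genes {opsins} -> {kind}")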
Blue–yellow color blindness
Those with tritanopia and tritanomaly have difficulty discriminating between bluish and greenish hues, as well as yellowish and reddish hues.
Color blindness involving the inactivation of the short-wavelength sensitive cone system (whose absorption spectrum peaks in the bluish-violet) is called tritanopia or, loosely, blue–yellow color blindness. The tritanope's neutral point occurs near a yellowish 570 nm; green is perceived at shorter wavelengths and red at longer wavelengths. Mutation of the short-wavelength sensitive cones is called tritanomaly. Tritanopia is equally distributed among males and females. Jeremy H. Nathans (with the Howard Hughes Medical Institute) demonstrated that the gene coding for the blue receptor lies on chromosome 7, which is shared equally by males and females; therefore, it is not sex-linked. This gene does not have any neighbor whose DNA sequence is similar. Blue color blindness is caused by a simple mutation in this gene.
The Ishihara color test, which consists of a series of pictures of colored spots, is the test most often used to diagnose red–green color deficiencies. A figure (usually one or more Arabic digits) is embedded in the picture as a number of spots in a slightly different color, and can be seen with normal color vision, but not with a particular color defect. The full set of tests has a variety of figure/background color combinations, and enable diagnosis of which particular visual defect is present. The anomaloscope, described above, is also used in diagnosing anomalous trichromacy.
Position yourself about 75 cm from your monitor so that the color test image you are looking at is at eye level, read the description of the image, and see what you can see. It is not necessary in all cases to use the entire set of images: in a large-scale examination the test can be simplified to six plates: test 1; one of tests 2 or 3; one of tests 4, 5, 6 or 7; one of tests 8 or 9; one of tests 10, 11, 12 or 13; and one of tests 14 or 15.
Because the Ishihara color test contains only numerals, it may not be useful in diagnosing young children, who have not yet learned to use numerals. In the interest of identifying these problems early on in life, alternative color vision tests were developed using only symbols (square, circle, car).
Besides the Ishihara color test, the US Navy and US Army also allow testing with the Farnsworth Lantern Test. This test allows about 30% of color-deficient individuals, those whose deficiency is not too severe, to pass.
Another test used by clinicians to measure chromatic discrimination is the Farnsworth-Munsell 100 hue test. The patient is asked to arrange a set of colored caps or chips to form a gradual transition of color between two anchor caps.
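The idea behind scoring an arrangement test of this kind can be sketched in a few lines. This is a toy version for illustration only; the clinically used Farnsworth-Munsell error score is computed differently in detail:

```python
def arrangement_error(caps):
    """Toy error score: sum of absolute differences between the true
    hue indices of adjacent caps. A perfect arrangement of n caps
    scores n - 1; larger scores mean poorer hue discrimination."""
    return sum(abs(a - b) for a, b in zip(caps, caps[1:]))

perfect = [1, 2, 3, 4, 5, 6, 7, 8]
confused = [1, 3, 2, 4, 6, 5, 7, 8]   # neighboring hues swapped

print(arrangement_error(perfect))    # 7
print(arrangement_error(confused))   # 11
```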
Most clinical tests are designed to be fast, simple, and effective at identifying broad categories of color blindness. In academic studies of color blindness, on the other hand, there is more interest in developing flexible tests to collect thorough datasets, identify copunctal points, and measure just noticeable differences.
There is generally no treatment to cure color deficiencies. "The American Optometric Association reports a contact lens on one eye can increase the ability to differentiate between colors, though nothing can make you truly see the deficient color." Optometrists can supply colored spectacle lenses or a single red-tint contact lens to wear on the non-dominant eye; although this may improve discrimination of some colors, it can make other colors more difficult to distinguish. A 1981 review of various studies evaluating the X-chrom contact lens concluded that, while the lens may allow the wearer to achieve a better score on certain color vision tests, it did not correct color vision in the natural environment. A case history using the X-Chrom lens for a rod monochromat has been reported, and an X-Chrom manual is online.
The GNOME desktop environment provides colorblind accessibility through the gnome-mag and libcolorblind software. Using a GNOME applet, the user may switch a color filter on and off, choosing from a set of possible color transformations that displace the colors in order to disambiguate them. The software enables, for instance, a colorblind person to see the numbers in the Ishihara test.
Many applications for iPhone and iPad have been developed to help colorblind people distinguish colors. Some simulate colorblind vision so that people with normal vision can understand how colorblind people see the world; others correct the image captured by the camera using a special "daltonizer" algorithm.
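As a rough sketch of what such a "daltonizer" does, the following illustrative code simulates a deficiency with one matrix and redistributes the lost signal with another. Both matrices here are placeholder values invented for the example; a real application would use calibrated matrices in a linear color space:

```python
import numpy as np

# Placeholder deficiency-simulation matrix (illustrative values only;
# a calibrated protanopia matrix in linear RGB would differ).
SIM = np.array([[0.0, 1.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

# Placeholder matrix that shifts the "lost" information into
# channels the viewer can still distinguish.
SHIFT = np.array([[0.0, 0.0, 0.0],
                  [0.7, 1.0, 0.0],
                  [0.7, 0.0, 1.0]])

def daltonize(rgb):
    """Return a color adjusted so information invisible to the
    simulated dichromat is redistributed to visible channels."""
    rgb = np.asarray(rgb, dtype=float)
    simulated = SIM @ rgb          # what the dichromat would see
    error = rgb - simulated        # information lost to the deficiency
    return np.clip(rgb + SHIFT @ error, 0.0, 1.0)

print(daltonize([0.9, 0.2, 0.1]))  # a red a dichromat would confuse with green
```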
In September 2009, the journal Nature reported that researchers at the University of Washington and University of Florida were able to give trichromatic vision to squirrel monkeys, which normally have only dichromatic vision, using gene therapy.
In 2003, a cybernetic device called eyeborg was developed to allow the wearer to hear sounds representing different colors. Achromatopsic artist Neil Harbisson was the first to use such a device, in early 2004; the eyeborg allowed him to start painting in color by memorizing the sound corresponding to each color. In 2012, at a TED conference, Harbisson explained how he could now perceive colors outside the range of ordinary human vision. Portuguese designer Miguel Neiva developed a code system, named ColorADD, based on five basic shapes that, when combined, make it easier for colorblind people to identify colors. Its use is expanding in Portugal (hospitals, transportation, education) and in other countries.
Lenses that filter certain wavelengths of light can allow people with a cone anomaly, but not dichromacy, to see a fuller range of colors, especially those with classic "red/green" color blindness. They work by notching out wavelengths that strongly stimulate both red and green cones in a deuteranomalous or protanomalous person, improving the distinction between the two cones' signals. As of 2013, sunglasses that enhance colors for many colorblind people are available commercially.
Color blindness affects a significant number of people, although exact proportions vary among groups. In Australia, for example, it occurs in about 8 percent of males and only about 0.4 percent of females. Isolated communities with a restricted gene pool sometimes produce high proportions of color blindness, including the less usual types. Examples include rural Finland, Hungary, and some of the Scottish islands. In the United States, about 7 percent of the male population—or about 10.5 million men—and 0.4 percent of the female population either cannot distinguish red from green, or see red and green differently from how others do (Howard Hughes Medical Institute, 2006). More than 95 percent of all variations in human color vision involve the red and green receptors in male eyes. It is very rare for males or females to be "blind" to the blue end of the spectrum.
|Type||Males||Females|
|Protanopia (red deficient: L cone absent)||1.3%||0.02%|
|Deuteranopia (green deficient: M cone absent)||1.2%||0.01%|
|Tritanopia (blue deficient: S cone absent)||0.001%||0.03%|
|Protanomaly (red deficient: L cone defect)||1.3%||0.02%|
|Deuteranomaly (green deficient: M cone defect)||5.0%||0.35%|
|Tritanomaly (blue deficient: S cone defect)||0.0001%||0.0001%|
Frequency of red-green color blindness in males of various populations
|Population||Number studied||% color blind|
|India (Andhra Pradesh)||292||7.5|
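As a consistency check, the per-type rates in the first table can be summed (treating the conditions as mutually exclusive, which is an approximation made here) to recover the overall red–green figures quoted above:

```python
# Per-type frequencies from the table above, as fractions.
red_green = {
    "protanopia":    (0.013, 0.0002),
    "deuteranopia":  (0.012, 0.0001),
    "protanomaly":   (0.013, 0.0002),
    "deuteranomaly": (0.050, 0.0035),
}

males   = sum(m for m, f in red_green.values())
females = sum(f for m, f in red_green.values())
print(f"red-green color blindness: {males:.1%} of males, {females:.2%} of females")
# -> roughly 8.8% of males and 0.40% of females, in line with the
#    "about 8 percent" and "0.4 percent" figures cited in the text
```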
Society and culture
Color codes present particular problems for those with color deficiencies as they are often difficult or impossible for them to perceive.
Designers also need to take into account that the perception of color by color-blind people is highly sensitive to the material it appears on. For example, a red–green colorblind person who is incapable of distinguishing colors on a map printed on paper may have no such difficulty when viewing the map on a computer screen or television. In addition, some color blind people find it easier to distinguish problem colors on artificial materials, such as plastic or acrylic paints, than on natural materials, such as paper or wood. Finally, for some color blind people, color can be distinguished only if there is a sufficient "mass" of color: thin lines might appear black, while a thicker line of the same color can be perceived as having color.
Designers should also note that red–blue and yellow–blue color combinations are generally safe. Instead of the ever-popular "red means bad and green means good" system, using these combinations can make color coding far more usable. This will still cause problems for those with monochromatic color blindness, but it is still worth considering.
When visual information must be processed as rapidly as possible, for example in an emergency, the visual system may operate only in shades of gray, with the extra information carried by color being dropped. This is an important possibility to consider when designing, for example, emergency brake handles or emergency phones.
Color blindness may make it difficult or impossible for a person to engage in certain occupations. Persons with color blindness may be legally or practically barred from occupations in which color perception is an essential part of the job (e.g., mixing paint colors), or in which color perception is important for safety (e.g., operating vehicles in response to color-coded signals). This occupational safety principle originates from the Lagerlunda train crash of 1875 in Sweden. Following the crash, Professor Alarik Frithiof Holmgren, a physiologist, investigated and concluded that the color blindness of the engineer (who had died) had caused the crash. Professor Holmgren then created the first test using different-colored skeins to exclude people from jobs in the transportation industry on the basis of color blindness. However, later analysis has questioned whether color deficiency in fact caused the collision, or was at most one contributing factor.
Color vision is important for occupations using telephone or computer networking cabling, as the individual wires inside the cables are color-coded using green, orange, brown, blue and white colors. Electronic wiring, transformers, resistors, and capacitors are color-coded as well, using black, brown, red, orange, green, yellow, blue, violet, gray, white, silver, gold.
Driving motor vehicles
Some countries, such as Romania, have refused to grant driving licenses to individuals with color blindness. In Romania, there is an ongoing campaign to remove the legal restrictions that prohibit colorblind citizens from obtaining driver's licenses.
While many aspects of aviation depend on color coding, only a few of them are critical enough to be interfered with by some milder types of color blindness. Some examples include color-gun signaling of aircraft that have lost radio communication, color-coded glide-path indications on runways, and the like. Some jurisdictions restrict the issuance of pilot credentials to persons who suffer from color blindness for this reason. Restrictions may be partial, allowing color-blind persons to obtain certification but with restrictions, or total, in which case color-blind persons are not permitted to obtain piloting credentials at all.
In the United States, the Federal Aviation Administration requires that pilots be tested for normal color vision as part of the medical clearance needed to obtain the required medical certificate, a prerequisite to obtaining a pilot's certification. If testing reveals color blindness, the applicant may be issued a license with restrictions, such as no night flying and no flying by color signals. Such a restriction effectively prevents a pilot from holding certain flying occupations, such as that of an airline pilot, although commercial pilot certification is still possible, and a few flying occupations that do not require night flight remain available to those with restrictions due to color blindness (e.g., agricultural aviation).
The government allows several types of tests, including medical standard tests (e.g., the Ishihara, Dvorine, and others) and specialized tests oriented specifically to the needs of aviation. If an applicant fails the standard tests, they receive a restriction on their medical certificate that states: "Not valid for night flying or by color signal control". They may then apply to the FAA to take a specialized test, typically the color vision light gun test. For this test, an FAA inspector meets the pilot at an airport with an operating control tower; the color signal light gun is shone at the pilot from the tower, and they must identify the color. If they pass, they may be issued a waiver stating that the color vision test is no longer required during medical examinations, and they receive a new medical certificate with the restriction removed. This was once a Statement of Demonstrated Ability (SODA), but the SODA was dropped and converted to a simple waiver (letter) early in the 2000s.
Research published in 2009 carried out by the City University of London's Applied Vision Research Centre, sponsored by the UK's Civil Aviation Authority and the US Federal Aviation Administration, has established a more accurate assessment of color deficiencies in pilot applicants' red–green and yellow–blue color range which could lead to a 35% reduction in the number of prospective pilots who fail to meet the minimum medical threshold.
Inability to distinguish colors does not necessarily preclude fame as an artist. The 20th-century expressionist painter Clifton Pugh, three-time winner of Australia's Archibald Prize, has been identified as a protanope on biographical, genetic, and other grounds. The 19th-century French artist Charles Méryon became successful by concentrating on etching rather than painting after he was diagnosed with a red–green deficiency.
Rights of people with color blindness
At trial, it was decided that carriers of color blindness have a right of access to broader knowledge, that is, to the full enjoyment of their human condition.
Problems and compensations
Color blindness very rarely means complete monochromatism. In almost all cases, color blind people retain blue–yellow discrimination, and most color-blind individuals are anomalous trichromats rather than complete dichromats. In practice, this means that they often retain a limited discrimination along the red–green axis of color space, although their ability to separate colors in this dimension is severely reduced.
Dichromats often confuse red and green items. For example, they may find it difficult to distinguish a Braeburn apple from a Granny Smith and in some cases, the red and green of traffic lights without other clues—for example, shape or position. The vision of dichromats may also be compared to images produced by a color printer that has run out of the ink in one of its three color cartridges (for protanopes and deuteranopes, the magenta cartridge, and for tritanopes, the yellow cartridge). Dichromats tend to learn to use texture and shape clues and so are often able to penetrate camouflage that has been designed to deceive individuals with color-normal vision.
Colors of traffic lights can be confusing to some dichromats: there is insufficient apparent difference between the red and amber lights and sodium street lamps, and the green can be confused with a grubby white lamp. This is a risk factor on high-speed undulating roads where angular cues cannot be used. British Rail color lamp signals use more easily identifiable colors: the red is blood red, the amber is yellow, and the green is a bluish color. Most British road traffic lights are mounted vertically on a black rectangle with a white border (forming a "sighting board"), so dichromats can look for the position of the light within the rectangle: top, middle, or bottom.
In the Eastern provinces of Canada, horizontally mounted traffic lights are generally differentiated by shape to facilitate identification for those with color blindness. In the United States, this is done not by shape but by position: the red light is always on the left if the light is horizontal, or on top if the light is vertical. A single flashing light (red indicating that cars must stop, yellow for caution/yield) is indistinguishable, but these are rare. A famous traffic light on Tipperary Hill in Syracuse, New York, is upside-down due to the sentiments of its Irish American community, but has been criticized for the potential hazard it poses to color-blind persons.
- Wong, Bang (2011). "Color blindness". Nature Methods 8 (6): 441. doi:10.1038/nmeth.1618. PMID 21774112.
- Carlson, Neil R. (2007). Psychology: The Science of Behaviour. New Jersey, USA: Pearson Education. p. 145. ISBN 978-0-205-64524-4.
- Morgan, M. J.; Adam, A.; Mollon, J. D. (June 1992). "Dichromats detect colour-camouflaged objects that are not detected by trichromats". Proc. Biol. Sci. 248 (1323): 291–5. doi:10.1098/rspb.1992.0074. PMID 1354367.
- Bosten, J.M.; Robinson, J.D.; Jordan, G.; Mollon, J.D. (2005). "Multidimensional scaling reveals a color dimension unique to ‘color-deficient’ observers". Current Biology 15 (23): R950–2. doi:10.1016/j.cub.2005.11.031. PMID 16332521.
- Dalton, J (1798). "Extraordinary facts relating to the vision of colours: with observations". Memoirs of the Literary and Philosophical Society of Manchester 5: 28–45. OCLC 9879327.
- Chan, Xin; Goh, Shi; Tan, Ngiap (2014). "Subjects with colour vision deficiency in the community: what do primary care physicians need to know?". Asia Pacific Family Medicine 13 (1): 10. doi:10.1186/s12930-014-0010-3.
- "Colour vision deficiency - Causes". NHS Choices. 2012-12-14. Retrieved 2014-05-24.
- Roth, Mark (13 September 2006). "Some women may see 100,000,000 colors, thanks to their genes". Pittsburgh Post-Gazette.
- Didymus, JohnThomas (Jun 19, 2012), "Scientists find woman who sees 99 million more colors than others", Digital Journal
- Jordan, Gabriele; Deeb, Samir S.; Bosten, Jenny M.; Mollon, J. D. (July 2010). "The dimensionality of color vision in carriers of anomalous trichromacy". Journal of Vision 10 (12): 12. doi:10.1167/10.8.12. PMID 21047744.
- http://www.colourblindawareness.org/colour-blindness/acquired-colour-vision-defects/[full citation needed]
- MedlinePlus Encyclopedia Color blindness
- "Types of Color Deficiencies". Konan Medical. Retrieved 2014-04-26.
- http://www.color-blindness.com/protanopia-red-green-color-blindness/[full citation needed]
- http://www.color-blindness.com/deuteranopia-red-green-color-blindness/[full citation needed]
- Tovee, Martin J. (2008). An Introduction to the Visual System. Cambridge University Press. ISBN 0-521-70964-4.
- http://www.color-blindness.com/tritanopia-blue-yellow-color-blindness/[full citation needed]
- Spring, Kenneth R.; Parry-Hill, Matthew J.; Fellers, Thomas J.; Davidson, Michael W. "Human Vision and Color Perception". Florida State University. Retrieved 2007-04-05.
- Hoffman, Paul S. "Accommodating Color Blindness" (PDF). Archived from the original (PDF) on 15 May 2008. Retrieved 2009-07-01.
- Neitz, Maureen E. "Severity of Colorblindness Varies". Medical College of Wisconsin. Archived from the original on 5 February 2007. Retrieved 2007-04-05.
- Jones, Sara A; Shim, Sang-Hee; He, Jiang; Zhuang, Xiaowei (2011). "Fast, three-dimensional super-resolution imaging of live cells". Nature Methods 8 (6): 499–508. doi:10.1038/nmeth.1605. PMC 3137767. PMID 21552254.
- Albrecht, Mario (2010). "Color blindness". Nature Methods 7 (10): 775. doi:10.1038/nmeth1010-775a. ISSN 1548-7091.
- Sharpe, L.T.; Stockman, A.; Jägle, H.; Nathans, J. (1999). "Opsin genes, cone photopigments, color vision and color blindness". In Gegenfurtner, K. R.; Sharpe, L. T. Color Vision: From Genes to Perception. Cambridge University Press. ISBN 978-0-521-00439-8.
- American Medical Association (2003). Leikin, Jerrold B.; Lipsky, Martin S., eds. Complete Medical Encyclopedia (Encyclopedia) (First ed.). New York, NY: Random House Reference. p. 388. ISBN 0-8129-9100-1.
- Weiss, A. H.; Biersdorf, W. R. (1989). "Blue cone monochromatism". J Pediatr Ophthalmol Strabismus 26 (5): 218–23. PMID 2795409.
- David L. MacAdam (ed.) and Deane B. Judd (1979). Contributions to color science. NBS. p. 584.
- Simunovic, M P (2010). "Colour vision deficiency". Eye 24 (5): 747–55. doi:10.1038/eye.2009.251. PMID 19927164.
- Kalloniatis, Michael; Luu, Charles (July 9, 2007). "The Perception of Color". In Kolb, Helga; Fernandez, Eduardo; Nelson, Ralph. Webvision: The Organization of the Retina and Visual System. PMID 21413396.
- "Disease-causing Mutations and protein structure". UCL Biochemistry BSM Group. Retrieved 2007-04-02.
- "Types of Colour Blindness". Colour Blind Awareness.
- Blom, Jan Dirk (2009). A Dictionary of Hallucinations. Springer. p. 4. ISBN 978-1-4419-1222-0.
- Neitz, Jay; Neitz, Maureen (2011). "The genetics of normal and defective color vision". Vision Research 51 (7): 633–51. doi:10.1016/j.visres.2010.12.002. PMC 3075382. PMID 21167193.
- Fareed, Mohd; Anwar, Malik Azeem; Afzal, Mohammad (2015). "Prevalence and gene frequency of color vision impairments among children of six populations from North Indian region". Genes & Diseases 2 (2): 211–8. doi:10.1016/j.gendis.2015.02.006.
- "Myambutol (Ethambutol) Drug Information: Description, User Reviews, Drug Side Effects, Interactions - Prescribing Information at RxList". Rxlist.com. Retrieved 2014-05-24.
- Goldstein, E. Bruce (2007). Sensation and perception (7th ed.). Wadsworth: Thomson. p. 152. ISBN 978-0-534-55810-9.
- Gordon, N (1998). "Colour blindness". Public Health 112 (2): 81–4. doi:10.1038/sj.ph.1900446. PMID 9581449.
- Kinnear, PR; Sahraie, A (2002). "New Farnsworth-Munsell 100 hue test norms of normal observers for each year of age 5-22 and for age decades 30-70". The British Journal of Ophthalmology 86 (12): 1408–11. doi:10.1136/bjo.86.12.1408. PMC 1771429. PMID 12446376.
- Cole, Barry L; Lian, Ka-Yee; Lakkis, Carol (2006). "The new Richmond HRR pseudoisochromatic test for colour vision is better than the Ishihara test". Clinical and Experimental Optometry 89 (2): 73–80. doi:10.1111/j.1444-0938.2006.00015.x. PMID 16494609.
- Toufeeq, A (2004). "Specifying colours for colour vision testing using computer graphics". Eye 18 (10): 1001–5. doi:10.1038/sj.eye.6701378. PMID 15192692.
- http://www.aoa.org/patients-and-public/eye-and-vision-problems/glossary-of-eye-and-vision-conditions/color-deficiency?sso=y[full citation needed]
- Siegel, I. M. (1981). "The X-Chrom lens. On seeing red". Surv Ophthalmol 25 (5): 312–24. PMID 6971497.
- Zeltzer, HI (1979). "Use of modified X-Chrom for relief of light dazzlement and color blindness of a rod monochromat". Journal of the American Optometric Association 50 (7): 813–8. PMID 315420.
- An X-Chrom manual
- Dolgin, Elie (2009). "Colour blindness corrected by gene therapy". Nature. doi:10.1038/news.2009.921.
- Alfredo M. Ronchi: Eculture: Cultural Content in the Digital Age. Springer (New York, 2009). p. 319 ISBN 978-3-540-75273-8
- "I listen to color", Neil Harbisson at TED Global, 27 June 2012.
- A Scientist Accidentally Developed Sunglasses That Could Correct Color Blindness
- Introducing EnChroma
- Pogue, David (15 August 2013). "Glasses That Solve Colorblindness, for a Big Price Tag". The New York Times. Retrieved 22 July 2015.
- Ananya, Mandal. "Color Blindness Prevalence". Health. Retrieved 27 February 2014.
- "Causes and Incidence of Colorblindness". Causes of Color. Retrieved 27 February 2014.[unreliable source?]
- Harrison et al. (1977): Human Biology, Oxford University Press, Oxford, ISBN 0-19-857164-X; ISBN 0-19-857165-8.
- Hadžiselimović R., Berberović Lj., Sofradžija A. (1980): Populacijska genetika viđenja crvenog i zelenog dijela spektra u stanovništvu Bosne i Hercegovine / Population genetics of red and green spectrum vision of the population of Bosnia and Herzegovina. God. Biol. inst. Univ. u Sarajevu / Annual of Institute of Biology, University of Sarajevo, 33: 87-97.
- Crow, Kevin L. (2008). "Four Types of Disabilities: Their Impact on Online Learning". TechTrends 52 (1): 51–5. doi:10.1007/s11528-008-0112-6.
- Habibzadeh, Parham (2015-01-01). "Our red–green world". Australian Health Review. doi:10.1071/ah15161.
- Algis, J.; Vingrys, J.; Cole, Barry L. (1986). "Origins of colour vision standards within the transport industry". Ophthalmic & Physiological Optics 6 (4): 369–75. doi:10.1111/j.1475-1313.1986.tb01155.x. PMID 3306566.
- Mollon, JD; Cavonius, LR (2012). "The Lagerlunda Collision and the Introduction of Color Vision Testing". Survey of Ophthalmology 57 (2): 178–94. doi:10.1016/j.survophthal.2011.10.003. PMID 22301271.
- Meyers, Michael (2002). All in One A+ Certification Exam Guide (4th ed.). Berkeley, California: McGraw-Hill/Osborne. ISBN 0-07-222274-3.[page needed]
- Grob, Bernard (2001). Basic Electronics. Columbus, Ohio: Glencoe/McGraw-Hill. ISBN 0-02-802253-X.[page needed]
- "Petition to European Union on Colorblind's condition in Romania". Retrieved 2007-08-21.[self-published source?]
- "Aerospace Medical Dispositions — Color vision". Retrieved 2009-04-11.
- Warburton, Simon (29 May 2009). "Colour-blindness research could clear more pilots to fly: UK CAA". Air transport. Reed Business Information. Retrieved 29 October 2009.
- Cole, Barry L; Harris, Ross W (2009). "Colour blindness does not preclude fame as an artist: celebrated Australian artist Clifton Pugh was a protanope". Clinical and Experimental Optometry 92 (5): 421–8. doi:10.1111/j.1444-0938.2009.00384.x. PMID 19515095.
- Anon. "Charles Meryon". Art Encyclopedia. The Concise Grove Dictionary of Art. Oxford University Press. Retrieved 7 January 2010.
- "Full text of the decision of the court – in Portuguese language". Retrieved 2012-03-09.
- "Decree issued by president of a republic ratifying Legislative Decree No. 198, of june 13, which approved the Inter-American Convention AG/RES. 1608 – in Portuguese language". Retrieved 2012-03-09.
- "Inter-American Convention on the Elimination of All Forms of Discrimination against Person with Disabilities.". Retrieved 2012-03-09.
- Sarah Zhang. "The Story Behind Syracuse's Upside-Down Traffic Light". Gizmodo.
- Kaiser, Peter K.; Boynton, Robert M. (1996). Human color vision. Washington, DC: Optical Society of America. ISBN 1-55752-461-0. OCLC 472932250.
- McIntyre, Donald (2002). Colour blindness: causes and effects. Chester: Dalton Publishing. ISBN 0-9541886-0-8. OCLC 49204679.
- Rubin, Melvin L.; Cassin, Barbara; Solomon, Sheila (1984). Dictionary of eye terminology. Gainesville, Fla: Triad Pub. Co. ISBN 0-937404-07-1. OCLC 10375427.
- Shevell, Steven K. (2003). The science of color. Amsterdam: Elsevier. ISBN 0-444-51251-9. OCLC 52271315.
- Hilbert, David; Byrne, Alexander (1997). Readings on color. Cambridge, Mass: MIT Press. ISBN 0-262-52231-4. OCLC 35762680.
- Stiles, W. S.; Wyszecki, Günter (2000). Color science: concepts and methods, quantitative data and formulae. Chichester: John Wiley & Sons. ISBN 0-471-39918-3. OCLC 799532137.
- Kuchenbecker, J.; Broschmann, D. (2014). Plates for color vision testing. New York: Thieme. ISBN 978-3-13-175481-3.
| https://en.wikipedia.org/wiki/Deuteranomaly |
4.0625 | 7 Written questions
6 Multiple choice questions
- A figure of speech in which an object or animal is given human feelings, thoughts, or attitudes
- a figure of speech that uses exaggeration to express strong emotion, make a point, or evoke humor
- the problem or problems characters face in a story
- consists of lines of poetry that do not have a regular rhythm and do not rhyme
- the perspective from which the writer tells the story (1st, 2nd, 3rd person; omniscient, limited omniscient). NOTE: Out of the Dust is told from a first person narrative point of view.
- n. A conclusion not directly stated by the evidence, but one that can be inferred or guessed from the facts at hand or the details the author gives.
6 True/False questions
Setting → the time and place of a story
Suspense → n. A conclusion not directly stated by the evidence, but one that can be inferred or guessed from the facts at hand or the details the author gives.
Foreshadowing → the use of hints and clues to suggest what will happen later in a plot
Symbolism/Symbol → An object or action in a literary work that means more than itself, that stands for something beyond itself.
Theme → the appearance of the words on the page
Metaphor → the time and place of a story | https://quizlet.com/21339289/test |
4 | 3 Answers
Capacitors and resistors behave quite differently. While resistors allow a current to flow through them that is proportional to the voltage drop across the resistor, capacitors oppose a change in voltage across them by either drawing in or supplying current as they charge or discharge, respectively. The current through a capacitor is thus directly proportional to the rate of change of voltage across it.
This is given by the relation i = C * (de/dt), where de/dt is the instantaneous rate of change of the voltage.
As the voltage does not change in the case of DC, de/dt = 0 and the current that the capacitor allows to pass is 0. For AC, the voltage changes continuously, so de/dt is not 0 and a current is allowed to flow through the capacitor.
Don't panic: you don't have to learn a huge amount of theory. You can just remember the formula Xc = 1/(2 * pi * f * C), where Xc is the capacitive reactance, f the frequency, C the capacitance, and pi = 3.14.
So for a DC signal the frequency f is zero, and the reactance, i.e., the impedance, is infinite (according to the formula, put 0 in place of f and Xc tends to infinity). That's why a capacitor presents infinite impedance, or resistance, to a DC signal. Got it?
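The reactance formula from the answers above can be tabulated directly; the component value and frequencies below are arbitrary choices for illustration:

```python
import math

def capacitive_reactance(freq_hz, cap_farads):
    """Xc = 1 / (2 * pi * f * C); returns infinity at f = 0 (DC)."""
    if freq_hz == 0:
        return math.inf
    return 1.0 / (2.0 * math.pi * freq_hz * cap_farads)

C = 10e-6  # a 10 microfarad capacitor
for f in [0, 0.1, 1, 50, 1000, 100000]:
    # The reactance grows without bound as f approaches 0 (DC).
    print(f"f = {f:>8} Hz -> Xc = {capacitive_reactance(f, C):,.2f} ohm")
```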
Capacitors block DC current because there is an insulating layer (the dielectric) between one part of the circuit and the other.
We know that direct current cannot pass through an open circuit.
Am I right?
| http://www.enotes.com/homework-help/why-does-capacitor-block-dc-but-allow-ac-pass-212149 |
4.03125 | Sea ice provides a resting and birthing place for seals and walrus, a hunting and breeding ground for polar bears, and a foraging ground for arctic fox, whales, caribou, and other mammals. A lack of ice and poor ice conditions cause stress for marine mammals and ultimately affect their livelihoods and abilities to reproduce. According to the Arctic Climate Impact Assessment's 2004 report, Impacts of a Warming Arctic, expected reductions in sea ice will drastically shrink marine habitats for ice-dependent seals, polar bears, and some seabirds, likely pushing some species to extinction.
Poor ice conditions affect polar mammals in a variety of ways. If the pack ice retreats beyond the edge of the continental shelf where walrus typically feed, they must swim great distances to reach their feeding grounds. Similarly, Arctic fox have been stranded on shore with land predators, instead of being able to migrate onto the ice. Peary caribou have been observed falling through unusually thin ice during their migrations.
Narwhals are a species of whale that migrate closer to coasts in summer. When the winter freeze begins, they move away from the shores and live below densely packed ice, surviving by breathing through leads and small holes in the ice. As spring comes, these leads open up into channels, and the whales return to coastal bays. Decreasing sea ice compromises the livelihood of most marine mammals, but in the case of narwhals, increasing ice is the problem. One area with increasing sea ice is Baffin Bay, between Canada and Greenland, where ice cover increased an average of 0.04% per year between 1978 and 2001. This small increase is enough to close holes in the ice where narwhals feed, and they also risk being trapped in the ice.
Seals and polar bears are greatly affected by changes in sea ice, as the following sections explain.
| https://nsidc.org/cryosphere/seaice/environment/mammals.html |
4.125 | Leader Resource 3: Unitarian Universalist Principles in Children's Language
Use these versions of the Principles to guide a discussion about faith as children prepare to create their own faith symbols in Session 2, Activity 5:
1. We believe that each and every person is important.
2. We believe that all people should be treated fairly and kindly.
3. We believe that we should accept one another and keep on learning together.
4. We believe that each person must be free to search for what is true and right in life.
5. We believe that all persons should have a vote about the things that concern them.
6. We believe in working for a peaceful, fair, and free world.
7. We believe in caring for our planet Earth, the home we share with all living things. | http://www.uua.org/re/tapestry/children/home/session2/60019.shtml |
4.3125 | When NASA's next Mars rover, Curiosity, arrives at the Red Planet next month, it will help pave the way for the humans who might one day follow.
In addition to looking for signs of current and past habitability to extraterrestrial life, the rover, due to land Aug. 6, will learn more about whether Mars could be habitable for humans — particularly in terms of its weather. The continuous record of Martian weather and radiation Curiosity plans to collect will help future forecasters tell humans — should we choose to go — how best to protect themselves in the harsh environment, experts say.
That's why NASA's Human Exploration and Operations Mission Directorate paid to include a radiation detector onboard the car-size Curiosity, the centerpiece of the Mars Science Laboratory mission, which is run by NASA's Jet Propulsion Laboratory.
“When we were designing Curiosity, we were going to use it for our habitability investigations as well,” said Ashwin Vasavada, MSL's deputy project scientist. “But it really is paid for and intended to understand the environment humans will experience on Mars.”
The $2.5 billion rover launched Nov. 26, 2011. It is designed to work for at least two years on Mars.
Curiosity will sample the Martian environment every hour through two main instruments: a meteorology station and a radiation detector. The instruments will run even when the rover is sleeping, during the Martian night, to provide a continual stream of data.
The Radiation Assessment Detector (RAD), in fact, began running during Curiosity's eight-month journey to Mars. Radiation from the sun and galactic cosmic rays occur throughout the solar system, meaning that humans would be exposed to elevated radiation from the moment they leave Earth's cradling magnetic field. Understanding how much radiation would bombard the spacecraft is the first step to learning how we can shield humans against it.
When Curiosity begins work on the Red Planet, RAD's telescope detectors will run for 15 minutes every hour, measuring a broad range of high-energy radiation in the atmosphere and on the surface.
It's not fully known just how radiation behaves close to the surface. Although orbiting spacecraft such as the Mars Reconnaissance Orbiter can measure it from above, it's harder for those spacecraft on high to see radiation close to the ground. Of most concern to scientists are rays that can splinter off from radiation hitting the Martian atmosphere.
“The high-energy particles can generate secondary, lower-energy particles when they interact with molecules of gas in the atmosphere,” Vasavada said.
Most particles in cosmic rays are protons, which can generate secondary gamma rays or neutrons, he added. This process also happens on Earth, but higher in the atmosphere and far away from the surface.
According to Vasavada, these energetic particles can ionize molecules inside humans, breaking the molecules apart and damaging cells. Essential complex organic molecules such as DNA could be affected.
“How much damage a particle does is not simply related to how energetic it is,” he said. “Heavier, less energetic particles produced as secondaries may be rarer than protons to an astronaut, but can do just as much total damage.”
Weather forecasting will also be needed for astronauts roaming on Mars. In a first since the Viking vanguard missions of the 1970s, MSL will feature a full meteorology package called the Rover Environmental Monitoring Station. The Spanish-built REMS will run for at least five minutes every hour, night and day.
To capture the speed and direction of the wind, and the air's temperature and humidity, REMS will use electronic sensors on two booms stretching out horizontally from a camera mast mounted on the rover.
Ultraviolet radiation will be measured using a sensor stuck on the rover's deck. Some of the wavelengths it will watch for are the same ones sensed by the Mars Reconnaissance Orbiter flying above, providing a more complete record of what's happening on Mars.
Inside the rover, an air pressure sensor will taste the air outside through a tube with a small opening to the atmosphere. Radiation-sensitive electronics controlling REMS will also stay inside Curiosity to protect them from the elements.
Through coordinating MSL's weather and radiation sensing with what is seen from above, NASA expects a better picture of what Mars looks and feels like, making it easier for humans to get there. | http://www.space.com/16751-mars-rover-curiosity-weather-station.html |
4.21875 | The Computer Revolution/Programming/Procedural Programming
Procedural programming focuses on the step-by-step instructions that tell the computer what to do to solve a problem. It is one of the two most significant approaches to programming, alongside object-oriented programming (OOP). Procedural programming is based on procedure calls: specific tasks are placed in procedures that the main program code calls whenever a task needs to be performed. When a procedure finishes, program control returns to the main program.
Procedures are small sections of program code that the main program calls when their tasks need to be performed; after a procedure finishes, control returns to the caller.
This approach allows each procedure to be performed as many times as needed without requiring multiple copies of the same code, which keeps the program smaller and makes the main program easier to understand. It also allows faster development.
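A minimal sketch of the procedural style described above (the procedures and data here are invented for illustration):

```python
def greet(name):
    # A small procedure: one task, reusable any number of times.
    print(f"Hello, {name}!")

def total(prices):
    # Another procedure; control returns to the caller when it finishes.
    return sum(prices)

def main():
    # The main program calls procedures instead of repeating their code.
    for customer in ["Ada", "Grace"]:
        greet(customer)
    print("Order total:", total([3.50, 2.25, 4.00]))

main()
```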
Structured programming is a type of programming that involves breaking the program into smaller modules of code, each responsible for performing a single task. This eliminates the need for the GOTO statement, which was heavily used before procedural programming was created. GOTO is a statement that sends program control to a specific line of code, where execution continues until another GOTO statement is reached.
In structured programming, the most general tasks to be performed come first, and the tasks become more and more specific as the program develops (the top-down philosophy). The terms structured programming and procedural programming are sometimes used interchangeably. | https://en.m.wikibooks.org/wiki/The_Computer_Revolution/Programming/Procedural_Programming |
4.03125 | An engraving of a medieval minstrel.
A person who performed in a show for nobility during the Middle Ages by reciting lyrical poetry while playing a zither is an example of a minstrel.
- any of a medieval class of entertainers who traveled from place to place: known esp. for singing and reciting to musical accompaniment
- Old Poet. a poet, singer, or musician
- ⌂ a performer in a minstrel show
Origin of minstrel: Middle English menestrel; from Old French minstrel, servant, originally, official; from Late Latin ministerialis, imperial officer; from Classical Latin ministerium, ministry
- A medieval entertainer who traveled from place to place, especially to sing and recite poetry.
- a. A lyric poet.b. A musician.
- A performer in a minstrel show.
Origin of minstrel: Middle English minstral, from Old French menestrel, servant, entertainer, from Late Latin ministeriālis, official in the imperial household, from Latin ministerium, ministry; see ministry. | http://www.yourdictionary.com/minstrel |
4.09375 | U.S.G.S. engineer Bailey Willis (1857 – February 19, 1949) was known for his unorthodox approach to geological questions. Puzzled by the geological structures he discovered in mountain ranges, and working long before computer models were available, he constructed a machine to simulate the mountain-forming process.
In a box with a moveable piston he folded and crushed layers of beeswax and compared the resulting structures with the large tectonic folds and thrusts he had mapped in the Appalachian Mountains. He realized that folds and nappes could also form by horizontal movements and compressive forces, not only, as many geologists still argued, by vertical movements (which were easier to explain at the time).
Fig.1.Willis"Compression Machine for Experiments" from "The Mechanics of Appalachian Structure" (1891), all images in public domain.
Fig.2. Miniature mountains made by the "compression machine" - the strata first form regular folds, however as the shortening continues, shear zones develop and single "tectonic nappes" start to pile up, as seen in real mountains.
Fig.3. Folded strata in the central Appalachian Mountains. In later years Willis proposed an early version of plate tectonics to explain mountain formation: the Atlantic Ocean formed when a "bubble" of magma pushed apart the American and European continents, and along their borders the layers of rock were compressed and folded up, forming the Appalachian Mountains. Unfortunately, this mountain range is significantly older than the Atlantic.
Fig.4. Willis subdivided mountain ranges into a central zone, characterized by folds, and an outer zone, characterized by shear zones (geological map of Cleveland in Tennessee). Today we know that the structure can be much more complicated than that.
4.34375 | Both polar regions of the earth are cold, primarily because they receive far less solar radiation than the tropics and mid-latitudes do. At either pole the sun never rises more than 23.5 degrees above the horizon and both locations experience six months of continuous darkness. Moreover, most of the sunlight that does shine on the polar regions is reflected by the bright white surface.
What makes the South Pole so much colder than the North Pole is that it sits on top of a very thick ice sheet, which itself sits on a continent. The surface of the ice sheet at the South Pole is more than 9,000 feet in elevation, more than a mile and a half above sea level. Antarctica is by far the highest continent on the earth. In comparison, the North Pole rests in the middle of the Arctic Ocean, where the surface of the floating ice rides only a foot or so above the surrounding sea. The Arctic Ocean also acts as an effective heat reservoir, warming the cold atmosphere in the winter and drawing heat from the atmosphere in the summer.
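A rough back-of-the-envelope check of the elevation effect: assuming the standard tropospheric lapse rate of about 6.5 degrees Celsius per kilometer (an assumption supplied here, not stated in the original answer), the plateau's altitude alone accounts for a large share of the temperature gap between the poles:

```python
FEET_PER_METER = 3.281
LAPSE_RATE_C_PER_KM = 6.5   # assumed standard tropospheric lapse rate

elevation_km = 9000 / FEET_PER_METER / 1000   # South Pole plateau, ~2.7 km
cooling = elevation_km * LAPSE_RATE_C_PER_KM

print(f"~{elevation_km:.1f} km of elevation -> roughly {cooling:.0f} deg C colder")
# -> roughly 18 deg C of cooling attributable to altitude alone
```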
Answer originally published May 5, 2003. | http://www.scientificamerican.com/article/why-is-the-south-pole-col/ |
4.34375 | Crito: Biography: Plato
Plato, a legendary Athenian philosopher, lived from about 428 to 347 B.C. Since Socrates didn't write anything himself, his influence and philosophy are known mainly through his pupil, Plato, who eventually surpassed his teacher through influential ideas of his own.
Since Plato inherited a sizable fortune and reputation from his aristocratic family, he had plenty of time to speculate about philosophy. At first he considered becoming a politician himself, but the death of Socrates, which Plato and others believed was unjust, disillusioned him with politics. Yet he remained an ivory-tower critic, best known for his firm belief in the rule of philosopher kings. Plato believed that only philosophical intellectuals could have the objectivity to govern fairly. His science of episteme sought to teach young men virtue and goodness in order to preserve the beleaguered polis of Athens. In this vein, Plato set up the Academy, one of the first schools of philosophy. Through this institution, many Greek ideas were preserved and enhanced. The Academy survived until the Roman government under Justinian disbanded it in 529 A.D.
Plato's writing has left an undisputed mark on Western thought. His famous work, Crito, describes Socrates' refusal to escape from prison. Many of the ideas advocated in Crito carry great weight in the judicial systems of today.
| http://www.novelguide.com/crito/biography.html |
4 | A periplus is a manuscript document that lists the ports and coastal landmarks, in order and with approximate intervening distances, that the captain of a vessel could expect to find along a shore. It served the same purpose as the later Roman itinerarium of road stops; however, Greek navigators added various notes, which, if they were professional geographers (as many were), became part of their own additions to Greek geography. In that sense the periplus was a type of log.
The form of the periplus is at least as old as the earliest Greek historian, the Ionian Hecataeus of Miletus. The works of Herodotus and Thucydides contain passages that appear to have been based on peripli.
Periplus is the Latinization of the Greek word περίπλους (periplous, contracted from περίπλοος periploos), literally "a sailing-around." Both segments, peri- and -plous, were independently productive: the ancient Greek speaker understood the word in its literal sense; however, it developed a few specialized meanings, one of which became a standard term in the ancient navigation of Phoenicians, Greeks, and Romans.
In the Persian Gulf
A Rahnameh (a Persian book of sailing directions) listed the ports, coastal landmarks, and distances along the shores.
These lost but much-cited sailing directions go back at least to the 12th century. In some, the Indian Ocean was described as "a hard sea to get out of," with warnings of the "circumambient sea, whence all return was impossible."
Several examples of peripli have survived:
- The Periplus of Hanno the Navigator, Carthaginian colonist and explorer who explored the coast of Africa from present-day Morocco southward at least as far as Senegal in the sixth or fifth century BCE.
- The Massaliote Periplus, a description of trade routes along the coasts of Atlantic Europe, possibly dates to the sixth century BCE.
- Pytheas of Massilia, (fourth century BCE) On the Ocean (Περί του Ωκεανού), has not survived; only excerpts remain, quoted or paraphrased by later authors, notably in Avienus' Ora maritima.
- The Periplus of Pseudo-Scylax, generally is thought to date to the fourth or third century BCE.
- The Periplus of the Erythraean Sea was written by a Romanized Alexandrian in the first century CE. It gives the shoreline itinerary of the Red (Erythraean) Sea, starting each time at the port of Berenice. Beyond the Red Sea, the manuscript describes the coast of India as far as the Ganges River and the east coast of Africa (called Azania). The unknown author of the Periplus of the Red Sea claims that Hippalus, a mariner, was knowledgeable about the "monsoon winds" that shorten the round-trip from India to the Red Sea. According to the Periplus of the Red Sea, "the Horn of Africa," was called, "the Cape of Spices." The author of the text Periplus of the Red Sea called modern day Yemen the "Frankincense Country."
- The Periplus Ponti Euxini, a description of trade routes along the coasts of the Black Sea, written by Arrian in the early second century CE.
A periplus was also an ancient naval manoeuvre in which attacking triremes would outflank or encircle the defenders to attack them in the rear.
- Kish, George (1978). A Source Book in Geography. Cambridge: Harvard University Press. p. 21. ISBN 0-674-82270-6.
- Shahar, Yuval (2004). Josephus Geographicus: The Classical Context of Geography in Josephus. Mohr Siebeck. p. 40. ISBN 3-16-148256-5.
- Dehkhoda, Ali Akbar; Moʻin, Mohammad (1958). Loghat-namehʻi Dehkhoda. Tehran: Tehran University Press: Rahnāma.
- Fernandez-Armesto, Felipe (2001). Civilizations: Culture, Ambition, and the Transformation of Nature. New York: Free Press. ISBN 0-7432-0248-1.
- The Periplus of Hanno: a voyage of discovery down the west African coast. Translated by Schoff, W. H. 1912.
- Xinru Liu, The Silk Road in World History (New York: Oxford University Press, 2010), 34.
- Xinru Liu, The Silk Road in World History (New York: Oxford University Press, 2010), 36.
- Xinru Liu, The Silk Road in World History (New York: Oxford University Press, 2010), 37.
| https://en.wikipedia.org/wiki/Periplus |
4.03125 | The Qin dynasty replaced the Zhou dynasty in 221 BC and ruled until 206 BC. It was the first unified Chinese empire, and followed a philosophy known as Legalism to great effect in amassing its force and overriding all the other states of the Warring States period.
The dynasty was founded by the ruler of the state of Qin, Prince Ying Zheng. The state excelled in battle and conquered all the other states, unifying the country. The state of Qin was originally a fief of the Zhou state, a military province intended to provide a buffer against western "barbarians" and to produce horses; it was in an out-of-the-way area, and the region was part desert. There was little indication that this state would prove to be the most powerful, or that it would unify China in the end. However, victories at key battles such as the Battle of Changping against the state of Zhao, one of the bloodiest engagements of the pre-modern world, allowed Qin to develop and conquer the other six states by 221 BC under Ying Zheng.
The Qin dynasty began building the Great Wall of China, although this was a much earlier wall made of tamped earth, not the stone structure constructed under the Ming dynasty over 1,500 years later. The wall was a large project, built by corvée labour, but it incorporated earlier structures produced by various states of the Warring States period to keep out the nomads of the steppes to the north and west.
The Qin dynasty's highly authoritarian approach led to a rebellion that replaced it with the Han dynasty, although many modern scholars agree that it was not the tight rule of Ying Zheng (or "Qin Shihuang Di") that was unpopular; rather, it was the breakdown of the law instituted by the Qin under Ying Zheng's successor that caused unrest and rebellion in 206 BC. The period following the Qin was a short era of warlords, the two most famous being Liu Bang, the governor of Pei, and Xiang Yu, a nobleman of the former state of Chu. Liu Bang defeated Xiang Yu and founded the Han dynasty. Both men remain famous in Chinese history and culture; Xiang Yu is the subject of much Chinese opera, the famous aria "Xiang Yu bids farewell to his concubine" being based on an event in his life recorded in the famous Chinese history, the Records of the Historian by Sima Qian of the Han. | http://www.conservapedia.com/Qin_dynasty |
4.3125 | Tetrachromacy is the condition of possessing four independent channels for conveying color information, or possessing four types of cone cells in the eye. Organisms with tetrachromacy are called tetrachromats.
In tetrachromatic organisms, the sensory color space is four-dimensional, meaning that to match the sensory effect of arbitrarily chosen spectra of light within their visible spectrum requires mixtures of at least four primary colors.
Tetrachromacy is demonstrated among several species of birds, fish, amphibians, reptiles, and insects. It was also the normal condition of most mammals in the past; a genetic change caused the majority of mammalian species to eventually lose two of their four cone types.
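The statement that a tetrachromat needs four primaries to match an arbitrary light is a linear-algebra fact: with four cone types, matching a target stimulus means solving a four-by-four linear system. The sensitivity numbers in this sketch are invented purely for illustration:

```python
import numpy as np

# Rows: the four cone types; columns: four primary lights.
# Entry (i, j) = response of cone i to one unit of primary j.
# These numbers are made up for illustration, not measured data.
S = np.array([[0.9, 0.3, 0.1, 0.0],
              [0.4, 0.8, 0.2, 0.0],
              [0.1, 0.3, 0.9, 0.1],
              [0.0, 0.0, 0.2, 0.9]])

target = np.array([0.5, 0.6, 0.4, 0.3])   # cone responses to some light

weights = np.linalg.solve(S, target)       # unique mix of the 4 primaries
print("primary intensities:", np.round(weights, 3))
print("check:", np.allclose(S @ weights, target))  # True
```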
The usual explanation of tetrachromacy is that the organism's retina contains four types of higher-intensity light receptors (called cone cells in vertebrates, as opposed to rod cells, which are lower-intensity light receptors) with different absorption spectra. This means the animal may see wavelengths beyond those of typical human eyesight, and may be able to distinguish colors that to a human appear identical. Species with tetrachromatic color vision may therefore hold a physiological advantage over rival species, though the extent of any such advantage is unknown.
Some species of birds, such as the zebra finch and the Columbidae, use the ultraviolet range (300–400 nm) of their tetrachromatic color vision as a tool during mate selection and foraging. When selecting mates, ultraviolet plumage and skin coloration show a high level of selection. A typical bird eye responds to wavelengths from about 300 to 700 nm; in terms of frequency, this corresponds to a band in the vicinity of 430–1000 THz.
Birds' eyes are tetrachromatic, and their retinal cone cells are much more complex than those of humans: birds have many more cones, with photopigments sensitive to four or five peak wavelengths, and are thus much more sensitive to color differences than humans are.
Foraging insects can see the wavelengths that flowers reflect (ranging from 300 nm to 700 nm). Pollination being a mutualistic relationship, foraging insects and plants have coevolved, both increasing their wavelength range: in perception (pollinators) and in reflection and variation (flower colors). Directional selection has led plants to display increasingly diverse color variation extending into the ultraviolet, thus attracting higher numbers of pollinators. Some pollinators may use tetrachromatic color vision to achieve and maintain a higher foraging success rate than their trichromatic competitors.
A handful of human tetrachromats have been identified, including the Australian-born painter Concetta Antico, described as the "world's first tetrachromat artist". Her tetrachromacy was discovered in December 2012 by Dr. Jay Neitz. Based on Antico's genes, scientists think her fourth cone absorbs wavelengths in the "reddish-orangey-yellow" range, and they are trying to determine whether this is how she actually sees. They also think the difference between the color perception of a tetrachromat and that of a normal trichromat is not as dramatic as the difference between normal human vision and that of a colorblind individual. Antico has a daughter who is colorblind, which some speculate may be due to a side effect of her own genes.
Apes (including humans) and Old World monkeys normally have three types of cone cells and are therefore trichromats. However, at low light intensities, the rod cells may contribute to color vision, giving a small region of tetrachromacy in the color space; human rod cells' sensitivity is greatest at a blueish-green wavelength.
In humans, two cone cell pigment genes are present on the X chromosome: the classical type 2 opsin genes OPN1MW and OPN1MW2. It has been suggested that humans with two X chromosomes could possess multiple cone cell pigments, perhaps born as full tetrachromats who have four simultaneously functioning kinds of cone cells, each type with a specific pattern of responsiveness to different wavelengths of light in the range of the visible spectrum. One study suggested that 2–3% of the world's women might have the type of fourth cone whose sensitivity peak is between the standard red and green cones, giving, theoretically, a significant increase in color differentiation. Another study suggests that as many as 50% of women and 8% of men may have four photopigments and corresponding increased chromatic discrimination compared to trichromats. In June 2012, after 20 years of study of women with four types of cones (non-functional tetrachromats), neuroscientist Dr. Gabriele Jordan identified a woman (subject cDa29) who could detect a greater variety of colors than trichromats could, corresponding with a functional tetrachromat (or true tetrachromat).
Variation in cone pigment genes is widespread in most human populations, but the most prevalent and pronounced tetrachromacy would derive from female carriers of major red/green pigment anomalies, usually classed as forms of "color blindness" (protanomaly or deuteranomaly). The biological basis for this phenomenon is X-inactivation of heterozygotic alleles for retinal pigment genes, which is the same mechanism that gives the majority of female new-world monkeys trichromatic vision.
In humans, preliminary visual processing occurs in the neurons of the retina. It is not known how these nerves would respond to a new color channel, that is, whether they could handle it separately or just combine it in with an existing channel. Visual information leaves the eye by way of the optic nerve; it is not known whether the optic nerve has the spare capacity to handle a new color channel. A variety of final image processing takes place in the brain; it is not known how the various areas of the brain would respond if presented with a new color channel.
Mice, which normally have only two cone pigments, can be engineered to express a third cone pigment, and appear to demonstrate increased chromatic discrimination, arguing against some of these obstacles; however, the original publication's claims about plasticity in the optic nerve have also been disputed.
Humans cannot see ultraviolet light directly because the lens of the eye blocks most light in the wavelength range of 300–400 nm; shorter wavelengths are blocked by the cornea. The photoreceptor cells of the retina are sensitive to near ultraviolet light, and people lacking a lens (a condition known as aphakia) see near ultraviolet light (down to 300 nm) as whitish blue, or for some wavelengths, whitish violet, probably because all three types of cones are roughly equally sensitive to ultraviolet light, with the blue cones slightly more so.
Tetrachromacy may also enhance vision in dim lighting. | http://everything.explained.today/Tetrachromacy/ |
4 | (Phys.org) —Geckos' ability to stick to trees and leaves during rainforest downpours has fascinated scientists for decades, leading a group of University of Akron researchers to solve the mystery.
They discovered that wet, hydrophobic (water-repellent) surfaces like those of leaves and tree trunks secure a gecko's grip similar to the way dry surfaces do. The finding brings UA integrated bioscience doctoral candidate Alyssa Stark and her research colleagues closer to developing a synthetic adhesive that sticks when wet.
Principal investigator Stark and her fellow UA researchers Ila Badge, Nicholas Wucinich, Timothy Sullivan, Peter Niewiarowski and Ali Dhinojwala study the adhesive qualities of gecko pads, which have tiny, clingy hairs that stick like Velcro to dry surfaces. In a 2012 study, the team discovered that geckos lose their grip on wet glass. This finding led the scientists to explore how the lizards function in their natural environments.
The scientists studied the clinging power of six geckos, which they outfitted with harnesses and tugged upon gently as the lizards clung to surfaces in wet and dry conditions.
Link between adhesion and 'wettability'
The researchers found that the effect of water on adhesive strength correlates with wettability, or the ability of a liquid to maintain contact with a solid surface. On glass, which has high wettability, a film of water forms between the surface and the gecko's foot, decreasing adhesion.
Conversely, on surfaces with low wettability, such as waxy leaves on tropical plants, the areas in contact with the gecko's toes remain dry and adhesion stays firm.
"The geckos stuck just as well under water as they did on a dry surface, as long as the surface was hydrophobic," Stark explains. "We believe this is how geckos stick to wet leaves and tree trunks in their natural environment."
The discovery, "Surface Wettability Plays a Significant Role in Gecko Adhesion Underwater," was published April 1, 2013 by the Proceedings of the National Academy of Sciences. The study has implications for the design of a synthetic gecko-inspired adhesive.
"Surface wettability plays a significant role in gecko adhesion underwater," by Alyssa Y. Stark et al. PNAS, 2013. | http://phys.org/news/2013-04-geckos-firm-natural-habitat.html |
4.1875 | Three-dimensional space (mathematics)
Three-dimensional space (also: tri-dimensional space or 3-space) is a geometric three-parameter model of the physical universe (without considering time) in which all known matter exists. These three dimensions can be labeled by a combination of three chosen from the terms length, width, height, depth, and breadth. Any three directions can be chosen, provided that they do not all lie in the same plane.
In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space. When n = 3, the set of all such locations is called three-dimensional Euclidean space. It is commonly represented by the symbol $\mathbb{R}^{3}$. This space is only one example of a great variety of spaces in three dimensions called 3-manifolds.
- 1 In geometry
- 2 In linear algebra
- 3 In calculus
- 4 In topology
- 5 See also
- 6 References
- 7 External links
In geometry

In mathematics, analytic geometry (also called Cartesian geometry) describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross. They are usually labeled x, y, and z. Relative to these axes, the position of any point in three-dimensional space is given by an ordered triple of real numbers, each number giving the distance of that point from the origin measured along the given axis, which is equal to the distance of that point from the plane determined by the other two axes.
Other popular methods of describing the location of a point in three-dimensional space include cylindrical coordinates and spherical coordinates, though there is an infinite number of possible methods. See Euclidean space.
[Figures omitted in the source: the Cartesian, cylindrical, and spherical coordinate systems. A table of regular polyhedra also appeared here, covering the Platonic solids and the Kepler-Poinsot polyhedra, with Coxeter groups A3 [3,3], B3 [4,3], and H3 [5,3].]
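To make the coordinate descriptions concrete, here is a minimal Python sketch (the function names and the convention of measuring theta from the positive z-axis are choices made here, not something the article specifies) converting a Cartesian point to cylindrical and spherical coordinates:

```python
import math

def to_cylindrical(x, y, z):
    """Cartesian (x, y, z) -> cylindrical (rho, phi, z)."""
    rho = math.hypot(x, y)   # distance from the z-axis
    phi = math.atan2(y, x)   # azimuthal angle in the xy-plane
    return rho, phi, z

def to_spherical(x, y, z):
    """Cartesian (x, y, z) -> spherical (r, theta, phi),
    with theta measured down from the positive z-axis."""
    r = math.sqrt(x * x + y * y + z * z)        # distance from the origin
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

print(to_cylindrical(1.0, 1.0, 1.0))  # (1.4142..., 0.7853..., 1.0)
print(to_spherical(1.0, 1.0, 1.0))    # (1.7320..., 0.9553..., 0.7853...)
```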
A sphere in 3-space (also called a 2-sphere because its surface is 2-dimensional) consists of the set of all points in 3-space at a fixed distance r from a central point P. The volume enclosed by this surface is
$$V = \frac{4}{3}\pi r^{3}.$$
Another type of sphere, but one having a three-dimensional surface, is the 3-sphere: the set of points equidistant from the origin of four-dimensional Euclidean space at distance one. If a position is $(x_{1}, x_{2}, x_{3}, x_{4})$, then $x_{1}^{2} + x_{2}^{2} + x_{3}^{2} + x_{4}^{2} = 1$ characterizes a point on the 3-sphere.
In the familiar 3-dimensional space that we live in, there are three pairs of cardinal directions: north/south (latitude), east/west (longitude) and up/down (altitude). These pairs of directions are mutually orthogonal: They are at right angles to each other. Movement along one axis does not change the coordinate value of the other two axes. In mathematical terms, they lie on three coordinate axes, usually labelled x, y, and z. The z-buffer in computer graphics refers to this z-axis, representing depth in the 2-dimensional imagery displayed on the computer screen.
In linear algebra
Another mathematical way of viewing three-dimensional space is found in linear algebra, where the idea of independence is crucial. Space has three dimensions because the length of a box is independent of its width or breadth. In the technical language of linear algebra, space is three-dimensional because every point in space can be described by a linear combination of three independent vectors.
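A quick numerical check of this idea, assuming NumPy is available (the library choice and the function name are mine): three vectors describe all of 3-space exactly when the matrix formed from them has a nonzero determinant.

```python
import numpy as np

def spans_3_space(u, v, w, tol=1e-12):
    """True when u, v, w are linearly independent, i.e. every point of
    3-space is a unique combination a*u + b*v + c*w."""
    matrix = np.column_stack([u, v, w])
    return abs(np.linalg.det(matrix)) > tol

print(spans_3_space([1, 0, 0], [0, 1, 0], [0, 0, 1]))  # True: the standard basis
print(spans_3_space([1, 2, 3], [2, 4, 6], [0, 0, 1]))  # False: first two are parallel
```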
Dot product, angle, and length
The dot product of two vectors A = [A1, A2, A3] and B = [B1, B2, B3] is defined as:
$$\mathbf{A} \cdot \mathbf{B} = A_{1}B_{1} + A_{2}B_{2} + A_{3}B_{3}.$$
A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction the arrow points. The magnitude of a vector A is denoted by $\left\| \mathbf{A} \right\|$. In this viewpoint, the dot product of two Euclidean vectors A and B is defined by
$$\mathbf{A} \cdot \mathbf{B} = \left\| \mathbf{A} \right\| \left\| \mathbf{B} \right\| \cos \theta,$$
where θ is the angle between A and B.
The dot product of a vector A by itself is
$$\mathbf{A} \cdot \mathbf{A} = \left\| \mathbf{A} \right\|^{2},$$
which gives
$$\left\| \mathbf{A} \right\| = \sqrt{\mathbf{A} \cdot \mathbf{A}},$$
the formula for the Euclidean length of the vector.
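These definitions translate directly into code. A minimal sketch in plain Python (the helper names are mine) computes the dot product, the Euclidean length, and the angle between two vectors:

```python
import math

def dot(a, b):
    # A . B = A1*B1 + A2*B2 + A3*B3
    return sum(x * y for x, y in zip(a, b))

def length(a):
    # ||A|| = sqrt(A . A)
    return math.sqrt(dot(a, a))

def angle(a, b):
    # From A . B = ||A|| ||B|| cos(theta)
    return math.acos(dot(a, b) / (length(a) * length(b)))

a, b = (1, 0, 0), (1, 1, 0)
print(dot(a, b))                  # 1
print(length(b))                  # 1.4142...
print(math.degrees(angle(a, b)))  # 45.0
```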
The cross product or vector product is a binary operation on two vectors in three-dimensional space and is denoted by the symbol ×. The cross product a × b of the vectors a and b is a vector that is perpendicular to both and therefore normal to the plane containing them. It has many applications in mathematics, physics, and engineering.
One can in n dimensions take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions.
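A short sketch of the component formula (plain Python; the helper names are mine), with a check that the result is indeed perpendicular to both inputs:

```python
def cross(a, b):
    """Cross product of two 3-vectors; the result is perpendicular to both."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = (1, 2, 3), (4, 5, 6)
n = cross(a, b)
print(n)                     # (-3, 6, -3)
print(dot(n, a), dot(n, b))  # 0 0 -- perpendicular to both inputs, as claimed
```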
In calculus

Gradient, divergence and curl
In a rectangular coordinate system, the gradient of a scalar field $f(x, y, z)$ is given by
$$\nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k}.$$
Line integrals, surface integrals, and volume integrals
A surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analog of the line integral. To find an explicit formula for the surface integral, we need to parameterize the surface of interest, S, by considering a system of curvilinear coordinates on S, like the latitude and longitude on a sphere. Let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. Then, the surface integral is given by
$$\int_{S} f \,\mathrm{d}S = \iint_{T} f(\mathbf{x}(s,t)) \left\| \frac{\partial \mathbf{x}}{\partial s} \times \frac{\partial \mathbf{x}}{\partial t} \right\| \mathrm{d}s \,\mathrm{d}t,$$
where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of x(s, t), and is known as the surface element. Given a vector field v on S, that is a function that assigns to each x in S a vector v(x), the surface integral can be defined component-wise according to the definition of the surface integral of a scalar field; the result is a vector.
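To see the formula in action, the rough quadrature below (plain Python; the parameterization and step count are illustrative choices) integrates the surface element of the unit sphere over latitude and longitude and recovers the area 4π:

```python
import math

def unit_sphere_area(n=200):
    """Midpoint-rule approximation of the area of the unit sphere using the
    parameterization x(s, t) = (sin s cos t, sin s sin t, cos s),
    0 <= s <= pi, 0 <= t < 2*pi, whose surface element is sin(s) ds dt."""
    ds, dt = math.pi / n, 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * ds                  # midpoint rule in s
        for j in range(n):
            total += math.sin(s) * ds * dt  # integrand times parameter-cell area
    return total

print(unit_sphere_area())  # ~12.566
print(4.0 * math.pi)       # exact: 12.566...
```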
Fundamental theorem of line integrals
Let $\varphi : U \subseteq \mathbb{R}^{3} \to \mathbb{R}$ be a continuously differentiable scalar field, and let $\gamma$ be a curve in $U$ from a point $\mathbf{p}$ to a point $\mathbf{q}$. Then
$$\int_{\gamma} \nabla\varphi \cdot \mathrm{d}\mathbf{r} = \varphi(\mathbf{q}) - \varphi(\mathbf{p}).$$
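A numerical sanity check of the theorem (plain Python; the scalar field and the curve are arbitrary choices for illustration): integrating the gradient along a curve from p to q reproduces φ(q) - φ(p).

```python
import math

def phi(x, y, z):
    return x * x + y * z              # an arbitrary smooth scalar field

def grad_phi(x, y, z):
    return (2.0 * x, z, y)            # its gradient, computed by hand

def curve(u):
    """A curve from p = curve(0) to q = curve(1)."""
    return (math.cos(u), math.sin(u), u)

def line_integral_of_gradient(n=20000):
    total, prev = 0.0, curve(0.0)
    for i in range(1, n + 1):
        pt = curve(i / n)
        mid = tuple((a + b) / 2.0 for a, b in zip(prev, pt))       # midpoint rule
        step = tuple(b - a for a, b in zip(prev, pt))              # displacement dr
        total += sum(g * d for g, d in zip(grad_phi(*mid), step))  # grad(phi) . dr
        prev = pt
    return total

p, q = curve(0.0), curve(1.0)
print(line_integral_of_gradient())  # ~0.1334
print(phi(*q) - phi(*p))            # same value, by the theorem
```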
Suppose V is a subset of $\mathbb{R}^{n}$ (in the case of n = 3, V represents a volume in 3D space) which is compact and has a piecewise smooth boundary S (also indicated with ∂V = S). If F is a continuously differentiable vector field defined on a neighborhood of V, then the divergence theorem says:
$$\iiint_{V} \left( \nabla \cdot \mathbf{F} \right) \mathrm{d}V = \oiint_{\partial V} \left( \mathbf{F} \cdot \mathbf{n} \right) \mathrm{d}S.$$
The left side is a volume integral over the volume V, the right side is the surface integral over the boundary of the volume V. The closed manifold ∂V is quite generally the boundary of V oriented by outward-pointing normals, and n is the outward pointing unit normal field of the boundary ∂V. (dS may be used as a shorthand for ndS.)
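As an illustrative check (not from the article): take F(x, y, z) = (x, y, z) on the unit ball, so div F = 3 and the volume integral should equal 3 · (4/3)π = 4π, which is also the value of the surface integral, since F · n = 1 on the unit sphere. A rough Monte Carlo sketch in plain Python:

```python
import math
import random

# Check the theorem for F(x, y, z) = (x, y, z) on the unit ball V:
#   div F = 3 everywhere, so the volume integral is 3 * (4/3) * pi = 4*pi.
#   On the boundary sphere the outward normal is n = (x, y, z), so F . n = 1
#   and the surface integral is just the sphere's area, also 4*pi.

def volume_integral_of_div(samples=200_000, seed=0):
    """Monte Carlo estimate of the integral of div F over the unit ball."""
    random.seed(seed)
    hits = sum(
        1
        for _ in range(samples)
        if sum(random.uniform(-1.0, 1.0) ** 2 for _ in range(3)) <= 1.0
    )
    ball_volume = 8.0 * hits / samples  # fraction of the [-1, 1]^3 cube inside
    return 3.0 * ball_volume            # div F = 3 is constant

print(volume_integral_of_div())  # ~12.57
print(4.0 * math.pi)             # 12.566..., the exact surface-integral value
```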
In topology

Three-dimensional space has a number of topological properties that distinguish it from spaces of other dimension numbers. For example, at least three dimensions are required to tie a knot in a piece of string.
With the space $\mathbb{R}^{3}$, topologists locally model all other 3-manifolds.
See also

- 3D science and technology (disambiguation)
- Dimensional analysis
- Distance from a point to a plane
- Skew lines § Distance
- Three-dimensional graph
- Two-dimensional space
References

- S. Lipschutz, M. Lipson (2009). Linear Algebra (Schaum’s Outlines) (4th ed.). McGraw Hill. ISBN 978-0-07-154352-1.
- M.R. Spiegel, S. Lipschutz, D. Spellman (2009). Vector Analysis (Schaum’s Outlines) (2nd ed.). McGraw Hill. ISBN 978-0-07-161545-7.
- WS Massey (1983). "Cross products of vectors in higher dimensional Euclidean spaces". The American Mathematical Monthly 90 (10): 697–701. doi:10.2307/2323537. JSTOR 2323537.
If one requires only three basic properties of the cross product ... it turns out that a cross product of vectors exists only in 3-dimensional and 7-dimensional Euclidean space.
- Arfken, p. 43.
- Rolfsen, Dale (1976). Knots and Links. Berkeley, California: Publish or Perish. ISBN 0-914098-16-0.
External links

- The dictionary definition of three-dimensional at Wiktionary
- Weisstein, Eric W., "Four-Dimensional Geometry", MathWorld.
- Elementary Linear Algebra - Chapter 8: Three-dimensional Geometry Keith Matthews from University of Queensland, 1991 | https://en.wikipedia.org/wiki/Three-dimensional |
4.40625 | Solving absolute value equations
Learn how to solve absolute-value equations. For example, solve 2|x-1|=5.
Sal introduces the idea of an absolute value equation, gives a bunch of examples of such equations and solves them, and discusses how the graph of an absolute value equation should look.
Sal solves the equation 8|x+7|+4 = -6|x+7|+6.
Sal solves the equation |3x-9|=0.
Sal solves the equation 4|x+10|+4 = 6|x+10|+10 to find that it has no possible solution.
Solve equations that contain absolute value expressions. | https://www.khanacademy.org/math/algebra/absolute-value-equations-functions/absolute-value-equations |
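The case analysis these videos teach is mechanical enough to express in a few lines. Below is a minimal Python sketch (the function and its argument form are mine, not Khan Academy's) for equations of the form a|x - b| + c = d, using 2|x - 1| = 5 as the worked example:

```python
def solve_abs_equation(a, b, c, d):
    """Solve a*|x - b| + c = d; returns the list of solutions (possibly empty)."""
    if a == 0:
        raise ValueError("a must be nonzero")
    k = (d - c) / a        # isolate the absolute value: |x - b| = k
    if k < 0:
        return []          # an absolute value is never negative: no solution
    if k == 0:
        return [b]         # one solution, e.g. |x - 3| = 0 gives x = 3
    return [b - k, b + k]  # two cases: x - b = -k and x - b = +k

print(solve_abs_equation(2, 1, 0, 5))  # 2|x - 1| = 5  ->  [-1.5, 3.5]
print(solve_abs_equation(1, 3, 0, 0))  # |x - 3| = 0   ->  [3]
print(solve_abs_equation(1, 0, 5, 2))  # |x| + 5 = 2   ->  [] (no solution)
```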
4.34375 | A virus is a small infectious agent that replicates only inside the living cells of other organisms. Viruses can infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea.
Since Dmitri Ivanovsky's 1892 article describing a non-bacterial pathogen infecting tobacco plants, and the discovery of the tobacco mosaic virus by Martinus Beijerinck in 1898, about 5,000 virus species have been described in detail, although there are millions of types. Viruses are found in almost every ecosystem on Earth and are the most abundant type of biological entity. The study of viruses is known as virology, a sub-speciality of microbiology.
While not inside an infected cell or in the process of infecting a cell, viruses exist in the form of independent particles. These viral particles, also known as virions, consist of two or three parts: (i) the genetic material made from either DNA or RNA, long molecules that carry genetic information; (ii) a protein coat, called the capsid, which surrounds and protects the genetic material; and in some cases (iii) an envelope of lipids that surrounds the protein coat when they are outside a cell. The shapes of these virus particles range from simple helical and icosahedral forms for some virus species to more complex structures for others. Most virus species have virions that are too small to be seen with an optical microscope. The average virion is about one one-hundredth the size of the average bacterium.
The origins of viruses in the evolutionary history of life are unclear: some may have evolved from plasmids—pieces of DNA that can move between cells—while others may have evolved from bacteria. In evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity. Viruses are considered by some to be a life form, because they carry genetic material, reproduce, and evolve through natural selection. However they lack key characteristics (such as cell structure) that are generally considered necessary to count as life. Because they possess some but not all such qualities, viruses have been described as "organisms at the edge of life".
Viruses spread in many ways; viruses in plants are often transmitted from plant to plant by insects that feed on plant sap, such as aphids; viruses in animals can be carried by blood-sucking insects. These disease-bearing organisms are known as vectors. Influenza viruses are spread by coughing and sneezing. Norovirus and rotavirus, common causes of viral gastroenteritis, are transmitted by the faecal–oral route and are passed from person to person by contact, entering the body in food or water. HIV is one of several viruses transmitted through sexual contact and by exposure to infected blood. The range of host cells that a virus can infect is called its "host range". This can be narrow, meaning a virus is capable of infecting few species, or broad, meaning it is capable of infecting many.
Viral infections in animals provoke an immune response that usually eliminates the infecting virus. Immune responses can also be produced by vaccines, which confer an artificially acquired immunity to the specific viral infection. However, some viruses including those that cause AIDS and viral hepatitis evade these immune responses and result in chronic infections. Antibiotics have no effect on viruses, but several antiviral drugs have been developed.
- 1 Etymology
- 2 History
- 3 Origins
- 4 Microbiology
- 5 Classification
- 6 Role in human disease
- 7 Infection in other species
- 8 Role in aquatic ecosystems
- 9 Role in evolution
- 10 Applications
- 11 See also
- 12 References
- 13 External links
The word is from the Latin neuter vīrus referring to poison and other noxious liquids, from 'the same Indo-European base as Sanskrit viṣa poison, Avestan vīša poison, ancient Greek ἰός poison', first attested in English in 1398 in John Trevisa's translation of Bartholomeus Anglicus's De Proprietatibus Rerum. Virulent, from Latin virulentus (poisonous), dates to c. 1400. A meaning of "agent that causes infectious disease" is first recorded in 1728, before the discovery of viruses by Dmitri Ivanovsky in 1892. The English plural is viruses (sometimes also viri or vira), whereas the Latin word is a mass noun, which has no classically attested plural (however in Neo-Latin vīra is used). The adjective viral dates to 1948. The term virion (plural virions), which dates from 1959, is also used to refer to a single, stable infective viral particle that is released from the cell and is fully capable of infecting other cells of the same type.
Louis Pasteur was unable to find a causative agent for rabies and speculated about a pathogen too small to be detected using a microscope. In 1884, the French microbiologist Charles Chamberland invented a filter (known today as the Chamberland filter or Chamberland-Pasteur filter) with pores smaller than bacteria. Thus, he could pass a solution containing bacteria through the filter and completely remove them from the solution. In 1892, the Russian biologist Dmitri Ivanovsky used this filter to study what is now known as the tobacco mosaic virus. His experiments showed that crushed leaf extracts from infected tobacco plants remain infectious after filtration. Ivanovsky suggested the infection might be caused by a toxin produced by bacteria, but did not pursue the idea. At the time it was thought that all infectious agents could be retained by filters and grown on a nutrient medium – this was part of the germ theory of disease. In 1898, the Dutch microbiologist Martinus Beijerinck repeated the experiments and became convinced that the filtered solution contained a new form of infectious agent. He observed that the agent multiplied only in cells that were dividing, but as his experiments did not show that it was made of particles, he called it a contagium vivum fluidum (soluble living germ) and re-introduced the word virus. Beijerinck maintained that viruses were liquid in nature, a theory later discredited by Wendell Stanley, who proved they were particulate. In the same year Friedrich Loeffler and Paul Frosch passed the first animal virus – agent of foot-and-mouth disease (aphthovirus) – through a similar filter.
In the early 20th century, the English bacteriologist Frederick Twort discovered a group of viruses that infect bacteria, now called bacteriophages (or commonly phages), and the French-Canadian microbiologist Félix d'Herelle described viruses that, when added to bacteria on agar, would produce areas of dead bacteria. He accurately diluted a suspension of these viruses and discovered that the highest dilutions (lowest virus concentrations), rather than killing all the bacteria, formed discrete areas of dead organisms. Counting these areas and multiplying by the dilution factor allowed him to calculate the number of viruses in the original suspension. Phages were heralded as a potential treatment for diseases such as typhoid and cholera, but their promise was forgotten with the development of penicillin. The study of phages provided insights into the switching on and off of genes, and a useful mechanism for introducing foreign genes into bacteria.
By the end of the 19th century, viruses were defined in terms of their infectivity, their ability to be filtered, and their requirement for living hosts. Viruses had been grown only in plants and animals. In 1906, Ross Granville Harrison invented a method for growing tissue in lymph, and, in 1913, E. Steinhardt, C. Israeli, and R. A. Lambert used this method to grow vaccinia virus in fragments of guinea pig corneal tissue. In 1928, H. B. Maitland and M. C. Maitland grew vaccinia virus in suspensions of minced hens' kidneys. Their method was not widely adopted until the 1950s, when poliovirus was grown on a large scale for vaccine production.
Another breakthrough came in 1931, when the American pathologist Ernest William Goodpasture grew influenza and several other viruses in fertilized chickens' eggs. In 1949, John Franklin Enders, Thomas Weller, and Frederick Robbins grew polio virus in cultured human embryo cells, the first virus to be grown without using solid animal tissue or eggs. This work enabled Jonas Salk to make an effective polio vaccine.
The first images of viruses were obtained upon the invention of electron microscopy in 1931 by the German engineers Ernst Ruska and Max Knoll. In 1935, American biochemist and virologist Wendell Meredith Stanley examined the tobacco mosaic virus and found it was mostly made of protein. A short time later, this virus was separated into protein and RNA parts. The tobacco mosaic virus was the first to be crystallised and its structure could therefore be elucidated in detail. The first X-ray diffraction pictures of the crystallised virus were obtained by Bernal and Fankuchen in 1941. On the basis of her pictures, Rosalind Franklin discovered the full structure of the virus in 1955. In the same year, Heinz Fraenkel-Conrat and Robley Williams showed that purified tobacco mosaic virus RNA and its protein coat can assemble by themselves to form functional viruses, suggesting that this simple mechanism was probably the means through which viruses were created within their host cells.
The second half of the 20th century was the golden age of virus discovery and most of the over 2,000 recognised species of animal, plant, and bacterial viruses were discovered during these years. In 1957, equine arterivirus and the cause of Bovine virus diarrhea (a pestivirus) were discovered. In 1963, the hepatitis B virus was discovered by Baruch Blumberg, and in 1965, Howard Temin described the first retrovirus. Reverse transcriptase, the enzyme that retroviruses use to make DNA copies of their RNA, was first described in 1970, independently by Howard Martin Temin and David Baltimore. In 1983 Luc Montagnier's team at the Pasteur Institute in France, first isolated the retrovirus now called HIV.
Viruses are found wherever there is life and have probably existed since living cells first evolved. The origin of viruses is unclear because they do not form fossils, so molecular techniques have been used to compare the DNA or RNA of viruses and are a useful means of investigating how they arose. In addition, viral genetic material may occasionally integrate into the germline of host organisms, by which it can be passed on vertically to the offspring of the host for many generations. This provides an invaluable source of information for paleovirologists to trace ancient viruses that existed up to millions of years ago. Currently, there are three main hypotheses that aim to explain the origins of viruses:
- Regressive hypothesis
- Viruses may have once been small cells that parasitised larger cells. Over time, genes not required by their parasitism were lost. The bacteria rickettsia and chlamydia are living cells that, like viruses, can reproduce only inside host cells. They lend support to this hypothesis, as their dependence on parasitism is likely to have caused the loss of genes that enabled them to survive outside a cell. This is also called the degeneracy hypothesis, or reduction hypothesis.
- Cellular origin hypothesis
- Some viruses may have evolved from bits of DNA or RNA that "escaped" from the genes of a larger organism. The escaped DNA could have come from plasmids (pieces of naked DNA that can move between cells) or transposons (molecules of DNA that replicate and move around to different positions within the genes of the cell). Once called "jumping genes", transposons are examples of mobile genetic elements and could be the origin of some viruses. They were discovered in maize by Barbara McClintock in 1950. This is sometimes called the vagrancy hypothesis, or the escape hypothesis.
- Coevolution hypothesis
- This is also called the virus-first hypothesis and proposes that viruses may have evolved from complex molecules of protein and nucleic acid at the same time as cells first appeared on Earth and would have been dependent on cellular life for billions of years. Viroids are molecules of RNA that are not classified as viruses because they lack a protein coat. However, they have characteristics that are common to several viruses and are often called subviral agents. Viroids are important pathogens of plants. They do not code for proteins but interact with the host cell and use the host machinery for their replication. The hepatitis delta virus of humans has an RNA genome similar to viroids but has a protein coat derived from hepatitis B virus and cannot produce one of its own. It is, therefore, a defective virus. Although the hepatitis delta virus genome may replicate independently once inside a host cell, it requires the help of hepatitis B virus to provide a protein coat so that it can be transmitted to new cells. In a similar manner, the sputnik virophage is dependent on mimivirus, which infects the protozoan Acanthamoeba castellanii. These viruses, which are dependent on the presence of other virus species in the host cell, are called satellites and may represent evolutionary intermediates of viroids and viruses.
In the past, there were problems with all of these hypotheses: the regressive hypothesis did not explain why even the smallest of cellular parasites do not resemble viruses in any way. The escape hypothesis did not explain the complex capsids and other structures on virus particles. The virus-first hypothesis contravened the definition of viruses in that they require host cells. Viruses are now recognised as ancient and as having origins that pre-date the divergence of life into the three domains. This discovery has led modern virologists to reconsider and re-evaluate these three classical hypotheses.
The evidence for an ancestral world of RNA cells and computer analysis of viral and host DNA sequences are giving a better understanding of the evolutionary relationships between different viruses and may help identify the ancestors of modern viruses. To date, such analyses have not proved which of these hypotheses is correct. However, it seems unlikely that all currently known viruses have a common ancestor, and viruses have probably arisen numerous times in the past by one or more mechanisms.
Prions are infectious protein molecules that do not contain DNA or RNA. They can cause infections such as scrapie in sheep, bovine spongiform encephalopathy ("mad cow" disease) in cattle, and chronic wasting disease in deer; in humans, prion diseases include kuru, Creutzfeldt–Jakob disease, and Gerstmann–Sträussler–Scheinker syndrome. Although prions are fundamentally different from viruses and viroids, their discovery gives credence to the theory that viruses could have evolved from self-replicating molecules.
Opinions differ on whether viruses are a form of life, or organic structures that interact with living organisms. They have been described as "organisms at the edge of life", since they resemble organisms in that they possess genes and evolve by natural selection, and reproduce by creating multiple copies of themselves through self-assembly. Although they have genes, they do not have a cellular structure, which is often seen as the basic unit of life. Viruses do not have their own metabolism, and require a host cell to make new products. They therefore cannot naturally reproduce outside a host cell – although bacterial species such as rickettsia and chlamydia are considered living organisms despite the same limitation. Accepted forms of life use cell division to reproduce, whereas viruses spontaneously assemble within cells. They differ from autonomous growth of crystals as they inherit genetic mutations while being subject to natural selection. Virus self-assembly within host cells has implications for the study of the origin of life, as it lends further credence to the hypothesis that life could have started as self-assembling organic molecules.
Viruses display a wide diversity of shapes and sizes, called morphologies. In general, viruses are much smaller than bacteria. Most viruses that have been studied have a diameter between 20 and 300 nanometres. Some filoviruses have a total length of up to 1400 nm; their diameters are only about 80 nm. Most viruses cannot be seen with an optical microscope so scanning and transmission electron microscopes are used to visualise virions. To increase the contrast between viruses and the background, electron-dense "stains" are used. These are solutions of salts of heavy metals, such as tungsten, that scatter the electrons from regions covered with the stain. When virions are coated with stain (positive staining), fine detail is obscured. Negative staining overcomes this problem by staining the background only.
A complete virus particle, known as a virion, consists of nucleic acid surrounded by a protective coat of protein called a capsid. These are formed from identical protein subunits called capsomeres. Viruses can have a lipid "envelope" derived from the host cell membrane. The capsid is made from proteins encoded by the viral genome and its shape serves as the basis for morphological distinction. Virally coded protein subunits will self-assemble to form a capsid, in general requiring the presence of the virus genome. Complex viruses code for proteins that assist in the construction of their capsid. Proteins associated with nucleic acid are known as nucleoproteins, and the association of viral capsid proteins with viral nucleic acid is called a nucleocapsid. The capsid and entire virus structure can be mechanically (physically) probed through atomic force microscopy. In general, there are four main morphological virus types:
- Helical: These viruses are composed of a single type of capsomer stacked around a central axis to form a helical structure, which may have a central cavity, or tube. This arrangement results in rod-shaped or filamentous virions: these can be short and highly rigid, or long and very flexible. The genetic material (in general, single-stranded RNA, but ssDNA in some cases) is bound into the protein helix by interactions between the negatively charged nucleic acid and positive charges on the protein. Overall, the length of a helical capsid is related to the length of the nucleic acid contained within it, and the diameter depends on the size and arrangement of capsomers. The well-studied tobacco mosaic virus is an example of a helical virus.
- Icosahedral: Most animal viruses are icosahedral or near-spherical with chiral icosahedral symmetry. A regular icosahedron is the optimum way of forming a closed shell from identical sub-units. The minimum number of identical capsomers required is twelve, each composed of five identical sub-units. Many viruses, such as rotavirus, have more than twelve capsomers and appear spherical, but they retain this symmetry. Capsomers at the apices are surrounded by five other capsomers and are called pentons. Capsomers on the triangular faces are surrounded by six others and are called hexons. Hexons are in essence flat, and pentons, which form the 12 vertices, are curved. The same protein may act as the subunit of both the pentamers and hexamers, or they may be composed of different proteins.
- Prolate: This is an icosahedron elongated along the fivefold axis and is a common arrangement of the heads of bacteriophages. This structure is composed of a cylinder with a cap at either end.
- Envelope: Some species of virus envelop themselves in a modified form of one of the cell membranes, either the outer membrane surrounding an infected host cell or internal membranes such as the nuclear membrane or endoplasmic reticulum, thus gaining an outer lipid bilayer known as a viral envelope. This membrane is studded with proteins coded for by the viral genome and host genome; the lipid membrane itself and any carbohydrates present originate entirely from the host. The influenza virus and HIV use this strategy. Most enveloped viruses are dependent on the envelope for their infectivity.
- Complex: These viruses possess a capsid that is neither purely helical nor purely icosahedral, and that may possess extra structures such as protein tails or a complex outer wall. Some bacteriophages, such as Enterobacteria phage T4, have a complex structure consisting of an icosahedral head bound to a helical tail, which may have a hexagonal base plate with protruding protein tail fibres. This tail structure acts like a molecular syringe, attaching to the bacterial host and then injecting the viral genome into the cell.
The poxviruses are large, complex viruses that have an unusual morphology. The viral genome is associated with proteins within a central disk structure known as a nucleoid. The nucleoid is surrounded by a membrane and two lateral bodies of unknown function. The virus has an outer envelope with a thick layer of protein studded over its surface. The whole virion is slightly pleiomorphic, ranging from ovoid to brick shape. Mimivirus is one of the largest characterised viruses, with a capsid diameter of 400 nm. Protein filaments measuring 100 nm project from the surface. The capsid appears hexagonal under an electron microscope, therefore the capsid is probably icosahedral. In 2011, researchers discovered the largest then known virus in samples of water collected from the ocean floor off the coast of Las Cruces, Chile. Provisionally named Megavirus chilensis, it can be seen with a basic optical microscope. In 2013, the Pandoravirus genus was discovered in Chile and Australia, and has genomes about twice as large as Megavirus and Mimivirus.
Some viruses that infect Archaea have complex structures that are unrelated to any other form of virus, with a wide variety of unusual shapes, ranging from spindle-shaped structures, to viruses that resemble hooked rods, teardrops or even bottles. Other archaeal viruses resemble the tailed bacteriophages, and can have multiple tail structures.
An enormous variety of genomic structures can be seen among viral species; as a group, they contain more structural genomic diversity than plants, animals, archaea, or bacteria. There are millions of different types of viruses, although only about 5,000 types have been described in detail. As of September 2015, the NCBI Virus genome database has more than 75,000 complete genome sequences, but there are doubtless many more to be discovered.
A virus has either a DNA or an RNA genome and is called a DNA virus or an RNA virus, respectively. The vast majority of viruses have RNA genomes. Plant viruses tend to have single-stranded RNA genomes and bacteriophages tend to have double-stranded DNA genomes.
Viral genomes are circular, as in the polyomaviruses, or linear, as in the adenoviruses. The type of nucleic acid is irrelevant to the shape of the genome. Among RNA viruses and certain DNA viruses, the genome is often divided up into separate parts, in which case it is called segmented. For RNA viruses, each segment often codes for only one protein and they are usually found together in one capsid. However, not all segments are required to be in the same virion for the virus to be infectious, as demonstrated by brome mosaic virus and several other plant viruses.
A viral genome, irrespective of nucleic acid type, is almost always either single-stranded or double-stranded. Single-stranded genomes consist of an unpaired nucleic acid, analogous to one-half of a ladder split down the middle. Double-stranded genomes consist of two complementary paired nucleic acids, analogous to a ladder. The virus particles of some virus families, such as those belonging to the Hepadnaviridae, contain a genome that is partially double-stranded and partially single-stranded.
For most viruses with RNA genomes and some with single-stranded DNA genomes, the single strands are said to be either positive-sense (called the plus-strand) or negative-sense (called the minus-strand), depending on whether they are complementary to the viral messenger RNA (mRNA). Positive-sense viral RNA is in the same sense as viral mRNA and thus at least a part of it can be immediately translated by the host cell. Negative-sense viral RNA is complementary to mRNA and thus must be converted to positive-sense RNA by an RNA-dependent RNA polymerase before translation. DNA nomenclature for viruses with single-sense genomic ssDNA is similar to RNA nomenclature, in that the template strand for the viral mRNA is complementary to it (−), and the coding strand is a copy of it (+). However, several types of ssDNA and ssRNA viruses have genomes that are ambisense in that transcription can occur off both strands in a double-stranded replicative intermediate. Examples include geminiviruses, which are ssDNA plant viruses, and arenaviruses, which are ssRNA viruses of animals.
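As a small illustration of "complementary" here (plain Python; the sequence is invented for the example), the positive, mRNA-sense strand corresponding to a negative-sense RNA strand is its reverse complement under the pairing rules A-U and G-C:

```python
RNA_PAIRING = str.maketrans("AUGC", "UACG")  # base pairs: A-U and G-C

def mrna_sense(negative_sense):
    """Positive-sense (mRNA-sense) strand for a negative-sense RNA strand:
    complement each base, then reverse (the strands pair antiparallel)."""
    return negative_sense.translate(RNA_PAIRING)[::-1]

print(mrna_sense("AUGGCUUAA"))  # UUAAGCCAU
```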
Genome size varies greatly between species. The smallest viral genomes – the ssDNA circoviruses, family Circoviridae – code for only two proteins and have a genome size of only two kilobases; the largest – the pandoraviruses – have genome sizes of around two megabases, which code for about 2,500 proteins.
In general, RNA viruses have smaller genome sizes than DNA viruses because of a higher error-rate when replicating, and have a maximum upper size limit. Beyond this limit, errors in the genome when replicating render the virus useless or uncompetitive. To compensate for this, RNA viruses often have segmented genomes – the genome is split into smaller molecules – thus reducing the chance that an error in a single-component genome will incapacitate the entire genome. In contrast, DNA viruses generally have larger genomes because of the high fidelity of their replication enzymes. Single-strand DNA viruses are an exception to this rule, however, as mutation rates for these genomes can approach the extreme of the ssRNA virus case.
Viruses undergo genetic change by several mechanisms. These include a process called antigenic drift where individual bases in the DNA or RNA mutate to other bases. Most of these point mutations are "silent" – they do not change the protein that the gene encodes – but others can confer evolutionary advantages such as resistance to antiviral drugs. Antigenic shift occurs when there is a major change in the genome of the virus. This can be a result of recombination or reassortment. When this happens with influenza viruses, pandemics might result. RNA viruses often exist as quasispecies or swarms of viruses of the same species but with slightly different genome nucleoside sequences. Such quasispecies are a prime target for natural selection.
Segmented genomes confer evolutionary advantages; different strains of a virus with a segmented genome can shuffle and combine genes and produce progeny viruses (offspring) that have unique characteristics. This is called reassortment or viral sex.
Genetic recombination is the process by which a strand of DNA is broken and then joined to the end of a different DNA molecule. This can occur when viruses infect cells simultaneously and studies of viral evolution have shown that recombination has been rampant in the species studied. Recombination is common to both RNA and DNA viruses.
Viral populations do not grow through cell division, because they are acellular. Instead, they use the machinery and metabolism of a host cell to produce multiple copies of themselves, and they assemble in the cell.
The life cycle of viruses differs greatly between species, but there are six basic stages:
Attachment is a specific binding between viral capsid proteins and specific receptors on the host cellular surface. This specificity determines the host range of a virus. For example, HIV infects a limited range of human leucocytes. This is because its surface protein, gp120, specifically interacts with the CD4 molecule, together with a chemokine co-receptor, both of which are most commonly found on the surface of CD4+ T-cells. This mechanism has evolved to favour those viruses that infect only cells in which they are capable of replication. Attachment to the receptor can induce the viral envelope protein to undergo changes that result in the fusion of viral and cellular membranes, or changes of non-enveloped virus surface proteins that allow the virus to enter.
Penetration follows attachment: Virions enter the host cell through receptor-mediated endocytosis or membrane fusion. This is often called viral entry. The infection of plant and fungal cells is different from that of animal cells. Plants have a rigid cell wall made of cellulose, and fungi one of chitin, so most viruses can get inside these cells only after trauma to the cell wall. However, given that bacterial cell walls are much thinner than plant cell walls, owing to bacteria's much smaller size, some viruses have evolved mechanisms that inject their genome into the bacterial cell across the cell wall, while the viral capsid remains outside.
Uncoating is a process in which the viral capsid is removed: This may be by degradation by viral enzymes or host enzymes or by simple dissociation; the end-result is the releasing of the viral genomic nucleic acid.
Replication of viruses involves primarily multiplication of the genome. Replication involves synthesis of viral messenger RNA (mRNA) from "early" genes (with exceptions for positive sense RNA viruses), viral protein synthesis, possible assembly of viral proteins, then viral genome replication mediated by early or regulatory protein expression. This may be followed, for complex viruses with larger genomes, by one or more further rounds of mRNA synthesis: "late" gene expression is, in general, of structural or virion proteins.
Assembly – Following the structure-mediated self-assembly of the virus particles, some modification of the proteins often occurs. In viruses such as HIV, this modification (sometimes called maturation) occurs after the virus has been released from the host cell.
Release – Viruses can be released from the host cell by lysis, a process that kills the cell by bursting its membrane and cell wall if present: This is a feature of many bacterial and some animal viruses. Some viruses undergo a lysogenic cycle where the viral genome is incorporated by genetic recombination into a specific place in the host's chromosome. The viral genome is then known as a "provirus" or, in the case of bacteriophages a "prophage". Whenever the host divides, the viral genome is also replicated. The viral genome is mostly silent within the host. However, at some point, the provirus or prophage may give rise to active virus, which may lyse the host cells. Enveloped viruses (e.g., HIV) typically are released from the host cell by budding. During this process the virus acquires its envelope, which is a modified piece of the host's plasma or other, internal membrane.
The genetic material within virus particles, and the method by which the material is replicated, varies considerably between different types of viruses.
- DNA viruses
- The genome replication of most DNA viruses takes place in the cell's nucleus. If the cell has the appropriate receptor on its surface, these viruses enter the cell sometimes by direct fusion with the cell membrane (e.g., herpesviruses) or – more usually – by receptor-mediated endocytosis. Most DNA viruses are entirely dependent on the host cell's DNA and RNA synthesising machinery, and RNA processing machinery. However, viruses with larger genomes may encode much of this machinery themselves. In eukaryotes the viral genome must cross the cell's nuclear membrane to access this machinery, while in bacteria it need only enter the cell.
- RNA viruses
- Replication usually takes place in the cytoplasm. RNA viruses can be placed into four different groups depending on their modes of replication. The polarity (whether or not it can be used directly by ribosomes to make proteins) of single-stranded RNA viruses largely determines the replicative mechanism; the other major criterion is whether the genetic material is single-stranded or double-stranded. All RNA viruses use their own RNA replicase enzymes to create copies of their genomes.
- Reverse transcribing viruses
- These have ssRNA (Retroviridae, Metaviridae, Pseudoviridae) or dsDNA (Caulimoviridae, and Hepadnaviridae) in their particles. Reverse transcribing viruses with RNA genomes (retroviruses), use a DNA intermediate to replicate, whereas those with DNA genomes (pararetroviruses) use an RNA intermediate during genome replication. Both types use a reverse transcriptase, or RNA-dependent DNA polymerase enzyme, to carry out the nucleic acid conversion. Retroviruses integrate the DNA produced by reverse transcription into the host genome as a provirus as a part of the replication process; pararetroviruses do not, although integrated genome copies of especially plant pararetroviruses can give rise to infectious virus. They are susceptible to antiviral drugs that inhibit the reverse transcriptase enzyme, e.g. zidovudine and lamivudine. An example of the first type is HIV, which is a retrovirus. Examples of the second type are the Hepadnaviridae, which includes Hepatitis B virus.
Effects on the host cell
The range of structural and biochemical effects that viruses have on the host cell is extensive. These are called cytopathic effects. Most virus infections eventually result in the death of the host cell. The causes of death include cell lysis, alterations to the cell's surface membrane and apoptosis. Often cell death is caused by cessation of its normal activities because of suppression by virus-specific proteins, not all of which are components of the virus particle.
Some viruses cause no apparent changes to the infected cell. Cells in which the virus is latent and inactive show few signs of infection and often function normally. This causes persistent infections and the virus is often dormant for many months or years. This is often the case with herpes viruses. Some viruses, such as Epstein–Barr virus, can cause cells to proliferate without causing malignancy, while others, such as papillomaviruses, are established causes of cancer.
Viruses are by far the most abundant biological entities on Earth and they outnumber all the others put together. They infect all types of cellular life including animals, plants, bacteria and fungi. However, different types of viruses can infect only a limited range of hosts and many are species-specific. Some, such as smallpox virus, can infect only one species – in this case humans – and are said to have a narrow host range. Other viruses, such as rabies virus, can infect different species of mammals and are said to have a broad host range. The viruses that infect plants are harmless to animals, and most viruses that infect other animals are harmless to humans. The host range of some bacteriophages is limited to a single strain of bacteria and they can be used to trace the source of outbreaks of infections by a method called phage typing.
Classification seeks to describe the diversity of viruses by naming and grouping them on the basis of similarities. In 1962, André Lwoff, Robert Horne, and Paul Tournier were the first to develop a means of virus classification, based on the Linnaean hierarchical system. This system bases classification on phylum, class, order, family, genus, and species. Viruses were grouped according to their shared properties (not those of their hosts) and the type of nucleic acid forming their genomes. Later, the International Committee on Taxonomy of Viruses was formed. However, viruses are not classified on the basis of phylum or class, as their small genome size and high rate of mutation make it difficult to determine their ancestry beyond order. As such, the Baltimore Classification is used to supplement the more traditional hierarchy.
The International Committee on Taxonomy of Viruses (ICTV) developed the current classification system and wrote guidelines that put a greater weight on certain virus properties to maintain family uniformity. A unified taxonomy (a universal system for classifying viruses) has been established. The 9th ICTV Report defines the concept of the virus species as the lowest taxon (group) in a branching hierarchy of viral taxa. However, at present only a small part of the total diversity of viruses has been studied, with analyses of samples from humans finding that about 20% of the virus sequences recovered have not been seen before, and samples from the environment, such as from seawater and ocean sediments, finding that the large majority of sequences are completely novel.
The general taxonomic structure is as follows:
In the current (2013) ICTV taxonomy, 7 orders have been established, the Caudovirales, Herpesvirales, Ligamenvirales, Mononegavirales, Nidovirales, Picornavirales, and Tymovirales. The committee does not formally distinguish between subspecies, strains, and isolates. In total there are 7 orders, 103 families, 22 subfamilies, 455 genera, about 2,827 species and over 4,000 types yet unclassified.
The Nobel Prize-winning biologist David Baltimore devised the Baltimore classification system. The ICTV classification system is used in conjunction with the Baltimore classification system in modern virus classification.
The Baltimore classification of viruses is based on the mechanism of mRNA production. Viruses must generate mRNAs from their genomes to produce proteins and replicate themselves, but different mechanisms are used to achieve this in each virus family. Viral genomes may be single-stranded (ss) or double-stranded (ds), RNA or DNA, and may or may not use reverse transcriptase (RT). In addition, ssRNA viruses may be either sense (+) or antisense (−). This classification places viruses into seven groups:
- I: dsDNA viruses (e.g. Adenoviruses, Herpesviruses, Poxviruses)
- II: ssDNA viruses (+ strand or "sense") DNA (e.g. Parvoviruses)
- III: dsRNA viruses (e.g. Reoviruses)
- IV: (+)ssRNA viruses (+ strand or sense) RNA (e.g. Picornaviruses, Togaviruses)
- V: (−)ssRNA viruses (− strand or antisense) RNA (e.g. Orthomyxoviruses, Rhabdoviruses)
- VI: ssRNA-RT viruses (+ strand or sense) RNA with DNA intermediate in life-cycle (e.g. Retroviruses)
- VII: dsDNA-RT viruses (e.g. Hepadnaviruses)
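The seven groups lend themselves to a simple lookup keyed on the three criteria just listed: nucleic acid, strandedness and sense, and use of reverse transcriptase. A minimal Python sketch (the encoding of the keys is mine, not part of the ICTV or Baltimore schemes):

```python
# Baltimore groups keyed on (nucleic acid, strandedness/sense, uses reverse transcriptase)
BALTIMORE_GROUPS = {
    ("DNA", "ds",  False): "I: dsDNA viruses (e.g. adenoviruses, herpesviruses, poxviruses)",
    ("DNA", "ss+", False): "II: ssDNA viruses (e.g. parvoviruses)",
    ("RNA", "ds",  False): "III: dsRNA viruses (e.g. reoviruses)",
    ("RNA", "ss+", False): "IV: (+)ssRNA viruses (e.g. picornaviruses, togaviruses)",
    ("RNA", "ss-", False): "V: (-)ssRNA viruses (e.g. orthomyxoviruses, rhabdoviruses)",
    ("RNA", "ss+", True):  "VI: ssRNA-RT viruses (e.g. retroviruses)",
    ("DNA", "ds",  True):  "VII: dsDNA-RT viruses (e.g. hepadnaviruses)",
}

def baltimore_group(nucleic_acid, strands, uses_rt):
    key = (nucleic_acid, strands, uses_rt)
    return BALTIMORE_GROUPS.get(key, "no group for this combination")

# Varicella zoster virus: dsDNA, no reverse transcriptase -> Group I
print(baltimore_group("DNA", "ds", False))
# HIV: (+)ssRNA replicating through a DNA intermediate -> Group VI
print(baltimore_group("RNA", "ss+", True))
```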
As an example of viral classification, the chicken pox virus, varicella zoster (VZV), belongs to the order Herpesvirales, family Herpesviridae, subfamily Alphaherpesvirinae, and genus Varicellovirus. VZV is in Group I of the Baltimore Classification because it is a dsDNA virus that does not use reverse transcriptase.
Role in human disease
Examples of common human diseases caused by viruses include the common cold, influenza, chickenpox, and cold sores. Many serious diseases such as Ebola virus disease, AIDS, avian influenza, and SARS are caused by viruses. The relative ability of viruses to cause disease is described in terms of virulence. Other diseases are under investigation to discover if they have a virus as the causative agent, such as the possible connection between human herpesvirus 6 (HHV6) and neurological diseases such as multiple sclerosis and chronic fatigue syndrome. There is controversy over whether the bornavirus, previously thought to cause neurological diseases in horses, could be responsible for psychiatric illnesses in humans.
Viruses have different mechanisms by which they produce disease in an organism, which depends largely on the viral species. Mechanisms at the cellular level primarily include cell lysis, the breaking open and subsequent death of the cell. In multicellular organisms, if enough cells die, the whole organism will start to suffer the effects. Although viruses cause disruption of healthy homeostasis, resulting in disease, they may exist relatively harmlessly within an organism. An example would include the ability of the herpes simplex virus, which causes cold sores, to remain in a dormant state within the human body. This is called latency and is a characteristic of the herpes viruses, including Epstein–Barr virus, which causes glandular fever, and varicella zoster virus, which causes chickenpox and shingles. Most people have been infected with at least one of these types of herpes virus. However, these latent viruses might sometimes be beneficial, as the presence of the virus can increase immunity against bacterial pathogens, such as Yersinia pestis.
Some viruses can cause lifelong or chronic infections, where the viruses continue to replicate in the body despite the host's defense mechanisms. This is common in hepatitis B virus and hepatitis C virus infections. People chronically infected are known as carriers, as they serve as reservoirs of infectious virus. In populations with a high proportion of carriers, the disease is said to be endemic.
Viral epidemiology is the branch of medical science that deals with the transmission and control of virus infections in humans. Transmission of viruses can be vertical, which means from mother to child, or horizontal, which means from person to person. Examples of vertical transmission include hepatitis B virus and HIV, where the baby is born already infected with the virus. Another, rarer, example is the varicella zoster virus, which, although causing relatively mild infections in humans, can be fatal to the foetus and newborn baby.
Horizontal transmission is the most common mechanism of spread of viruses in populations. Transmission can occur when: body fluids are exchanged during sexual activity, e.g., HIV; blood is exchanged by contaminated transfusion or needle sharing, e.g., hepatitis C; exchange of saliva by mouth, e.g., Epstein–Barr virus; contaminated food or water is ingested, e.g., norovirus; aerosols containing virions are inhaled, e.g., influenza virus; and insect vectors such as mosquitoes penetrate the skin of a host, e.g., dengue. The rate or speed of transmission of viral infections depends on factors that include population density, the number of susceptible individuals, (i.e., those not immune), the quality of healthcare and the weather.
Epidemiology is used to break the chain of infection in populations during outbreaks of viral diseases. Control measures are used that are based on knowledge of how the virus is transmitted. It is important to find the source, or sources, of the outbreak and to identify the virus. Once the virus has been identified, the chain of transmission can sometimes be broken by vaccines. When vaccines are not available, sanitation and disinfection can be effective. Often, infected people are isolated from the rest of the community, and those that have been exposed to the virus are placed in quarantine. To control the outbreak of foot-and-mouth disease in cattle in Britain in 2001, thousands of cattle were slaughtered. Most viral infections of humans and other animals have incubation periods during which the infection causes no signs or symptoms. Incubation periods for viral diseases range from a few days to weeks, but are known for most infections. Somewhat overlapping, but mainly following the incubation period, there is a period of communicability — a time when an infected individual or animal is contagious and can infect another person or animal. This, too, is known for many viral infections, and knowledge of the length of both periods is important in the control of outbreaks. When outbreaks cause an unusually high proportion of cases in a population, community, or region, they are called epidemics. If outbreaks spread worldwide, they are called pandemics.
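The dependence of spread on the number of susceptible individuals described above is the idea behind the standard SIR compartment model of textbook epidemiology. The sketch below is not from this article, and its rates are invented purely for illustration:

```python
def sir(s, i, r, beta=0.3, gamma=0.1, days=160):
    """Discrete-time SIR model. s, i, r are the susceptible, infectious and
    recovered fractions of a population; beta is the transmission rate and
    gamma the recovery rate per day (the values here are arbitrary)."""
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i  # spread scales with s as well as i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = sir(s=0.99, i=0.01, r=0.0)
peak = max(state[1] for state in history)
print(f"peak infectious fraction: {peak:.2f}")  # ~0.30 with these rates
```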
Epidemics and pandemics
Native American populations were devastated by contagious diseases, in particular, smallpox, brought to the Americas by European colonists. It is unclear how many Native Americans were killed by foreign diseases after the arrival of Columbus in the Americas, but the numbers have been estimated to be close to 70% of the indigenous population. The damage done by this disease significantly aided European attempts to displace and conquer the native population.
A pandemic is a worldwide epidemic. The 1918 flu pandemic, which lasted until 1919, was a category 5 influenza pandemic caused by an unusually severe and deadly influenza A virus. The victims were often healthy young adults, in contrast to most influenza outbreaks, which predominantly affect juvenile, elderly, or otherwise-weakened patients. Older estimates say it killed 40–50 million people, while more recent research suggests that it may have killed as many as 100 million people, or 5% of the world's population in 1918.
Most researchers believe that HIV originated in sub-Saharan Africa during the 20th century; it is now a pandemic, with an estimated 38.6 million people now living with the disease worldwide. The Joint United Nations Programme on HIV/AIDS (UNAIDS) and the World Health Organization (WHO) estimate that AIDS has killed more than 25 million people since it was first recognised on 5 June 1981, making it one of the most destructive epidemics in recorded history. In 2007 there were 2.7 million new HIV infections and 2 million HIV-related deaths.
Several highly lethal viral pathogens are members of the Filoviridae. Filoviruses are filament-like viruses that cause viral hemorrhagic fever, and include ebolaviruses and marburgviruses. Marburg virus, first discovered in 1967, attracted widespread press attention in April 2005 for an outbreak in Angola. Ebola virus disease has also caused intermittent outbreaks with high mortality rates since 1976, when it was first identified. The worst outbreak to date, and the most recent, is the West African epidemic.
Viruses are an established cause of cancer in humans and other species. Viral cancers occur only in a minority of infected persons (or animals). Cancer viruses come from a range of virus families, including both RNA and DNA viruses, so there is no single type of "oncovirus" (an obsolete term originally used for acutely transforming retroviruses). The development of cancer is determined by a variety of factors such as host immunity and mutations in the host. Viruses accepted to cause human cancers include some genotypes of human papillomavirus, hepatitis B virus, hepatitis C virus, Epstein–Barr virus, Kaposi's sarcoma-associated herpesvirus and human T-lymphotropic virus. The most recently discovered human cancer virus is a polyomavirus (Merkel cell polyomavirus) that causes most cases of a rare form of skin cancer called Merkel cell carcinoma. Infection with hepatitis viruses can become chronic and lead to liver cancer. Infection by human T-lymphotropic virus can lead to tropical spastic paraparesis and adult T-cell leukemia. Human papillomaviruses are an established cause of cancers of the cervix, skin, anus, and penis. Within the Herpesviridae, Kaposi's sarcoma-associated herpesvirus causes Kaposi's sarcoma and body-cavity lymphoma, and Epstein–Barr virus causes Burkitt's lymphoma, Hodgkin's lymphoma, B lymphoproliferative disorder, and nasopharyngeal carcinoma. Merkel cell polyomavirus is closely related to SV40 and to the mouse polyomaviruses that have been used as animal models for cancer viruses for over 50 years.
Host defence mechanisms
The body's first line of defence against viruses is the innate immune system. This comprises cells and other mechanisms that defend the host from infection in a non-specific manner. This means that the cells of the innate system recognise, and respond to, pathogens in a generic way, but, unlike the adaptive immune system, it does not confer long-lasting or protective immunity to the host.
RNA interference is an important innate defence against viruses. Many viruses have a replication strategy that involves double-stranded RNA (dsRNA). When such a virus infects a cell, it releases its RNA molecule or molecules, which immediately bind to a protein complex called Dicer that cuts the RNA into smaller pieces. A biochemical pathway, the RISC complex, is then activated, which ensures cell survival by degrading the viral mRNA. Rotaviruses have evolved to avoid this defence mechanism by not uncoating fully inside the cell, and by releasing newly produced mRNA through pores in the particle's inner capsid. Their genomic dsRNA remains protected inside the core of the virion.
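As a schematic of the Dicer step only (the product length of roughly 22 nucleotides is a well-known property of Dicer, but everything else here is deliberately simplified), the following toy sketch chops a dsRNA string into siRNA-sized fragments and uses them to flag a matching viral mRNA for degradation.

```python
# Toy model of the Dicer step in RNA interference: the enzyme cleaves long
# double-stranded RNA into short interfering RNAs (siRNAs) of ~22 nt, which
# then guide the RISC complex to degrade matching viral mRNA.
# This is a schematic illustration, not a biochemical simulation.

def dice(dsrna: str, fragment_length: int = 22) -> list[str]:
    """Cut a dsRNA sequence into siRNA-sized fragments."""
    return [dsrna[i:i + fragment_length]
            for i in range(0, len(dsrna), fragment_length)]

def risc_degrades(mrna: str, sirnas: list[str]) -> bool:
    """RISC destroys an mRNA if any sufficiently long siRNA matches it."""
    return any(sirna in mrna for sirna in sirnas if len(sirna) >= 16)

viral_dsrna = "AUGGCUACGUAGCUAGCUAACGGAUCCUAGCUAGGCAUCGAUCGUAGCUA"
sirnas = dice(viral_dsrna)
print("siRNA fragments:", sirnas)
print("viral mRNA degraded:", risc_degrades(viral_dsrna, sirnas))
```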
When the adaptive immune system of a vertebrate encounters a virus, it produces specific antibodies that bind to the virus and often render it non-infectious. This is called humoral immunity. Two types of antibodies are important. The first, called IgM, is highly effective at neutralizing viruses but is produced by the cells of the immune system only for a few weeks. The second, called IgG, is produced indefinitely. The presence of IgM in the blood of the host is used to test for acute infection, whereas IgG indicates an infection sometime in the past. IgG antibody is measured when tests for immunity are carried out.
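The diagnostic logic described here, with IgM suggesting acute infection and IgG a past one, can be written down directly. The snippet below is a schematic decision table, not clinical software, and the category labels are illustrative.

```python
# Schematic interpretation of viral serology, following the text: IgM in the
# blood suggests a recent/acute infection, IgG alone suggests infection some
# time in the past (or immunity). Illustrative only, not clinical guidance.

def interpret_serology(igm_positive: bool, igg_positive: bool) -> str:
    if igm_positive and not igg_positive:
        return "early acute infection"
    if igm_positive and igg_positive:
        return "recent infection (seroconversion under way)"
    if igg_positive:
        return "past infection or immunity"
    return "no serological evidence of infection"

for igm, igg in [(True, False), (True, True), (False, True), (False, False)]:
    print(f"IgM={igm!s:5} IgG={igg!s:5} -> {interpret_serology(igm, igg)}")
```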
Antibodies can continue to be an effective defence mechanism even after viruses have managed to gain entry to the host cell. A protein found in cells, called TRIM21, can attach to the antibodies on the surface of the virus particle. This primes the subsequent destruction of the virus by the enzymes of the cell's proteasome system.
A second defence of vertebrates against viruses is called cell-mediated immunity and involves immune cells known as T cells. The body's cells constantly display short fragments of their proteins on the cell's surface, and, if a T cell recognises a suspicious viral fragment there, the host cell is destroyed by killer T cells and the virus-specific T cells proliferate. Cells such as the macrophage are specialists at this antigen presentation. The production of interferon is an important host defence mechanism. Interferons are signalling proteins (cytokines) released by cells when viruses are present. Their role in immunity is complex; they eventually stop the viruses from reproducing by killing the infected cell and its close neighbours.
Not all virus infections produce a protective immune response in this way. HIV evades the immune system by constantly changing the amino acid sequence of the proteins on the surface of the virion. This is known as "escape mutation" as the viral epitopes escape recognition by the host immune response. These persistent viruses evade immune control by sequestration, blockade of antigen presentation, cytokine resistance, evasion of natural killer cell activities, escape from apoptosis, and antigenic shift. Other viruses, called neurotropic viruses, are disseminated by neural spread where the immune system may be unable to reach them.
Prevention and treatment
Because viruses use vital metabolic pathways within host cells to replicate, they are difficult to eliminate without using drugs that cause toxic effects to host cells in general. The most effective medical approaches to viral diseases are vaccinations to provide immunity to infection, and antiviral drugs that selectively interfere with viral replication.
Vaccination is a cheap and effective way of preventing infections by viruses. Vaccines were used to prevent viral infections long before the discovery of the actual viruses. Their use has resulted in a dramatic decline in morbidity (illness) and mortality (death) associated with viral infections such as polio, measles, mumps and rubella. Smallpox infections have been eradicated. Vaccines are available to prevent over thirteen viral infections of humans, and more are used to prevent viral infections of animals. Vaccines can consist of live-attenuated or killed viruses, or viral proteins (antigens). Live vaccines contain weakened forms of the virus, which do not cause the disease but nonetheless confer immunity. Such viruses are called attenuated. Live vaccines can be dangerous when given to people with weakened immunity (described as immunocompromised), because in these people the weakened virus can cause the original disease. Biotechnology and genetic engineering techniques are used to produce subunit vaccines. These vaccines use only the capsid proteins of the virus. Hepatitis B vaccine is an example of this type of vaccine. Subunit vaccines are safe for immunocompromised patients because they cannot cause the disease. The yellow fever virus vaccine, a live-attenuated strain called 17D, is probably the safest and most effective vaccine ever generated.
Antiviral drugs are often nucleoside analogues (fake DNA building-blocks), which viruses mistakenly incorporate into their genomes during replication. The life-cycle of the virus is then halted because the newly synthesised DNA is inactive. This is because these analogues lack the hydroxyl groups, which, along with phosphorus atoms, link together to form the strong "backbone" of the DNA molecule. This is called DNA chain termination. Examples of nucleoside analogues are aciclovir for Herpes simplex virus infections and lamivudine for HIV and Hepatitis B virus infections. Aciclovir is one of the oldest and most frequently prescribed antiviral drugs. Other antiviral drugs in use target different stages of the viral life cycle. HIV is dependent on a proteolytic enzyme called the HIV-1 protease for it to become fully infectious. There is a large class of drugs called protease inhibitors that inactivate this enzyme.
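A toy simulation can show why chain termination is so effective: the more analogue present, the shorter the average strand a polymerase completes. The incorporation probabilities and genome length below are invented for illustration.

```python
# Toy model of DNA chain termination by a nucleoside analogue. A polymerase
# extends a growing strand one nucleotide at a time; if it incorporates an
# analogue lacking the 3'-hydroxyl group, no further nucleotide can be linked
# and synthesis halts. Probabilities and lengths are illustrative assumptions.
import random

def replicate(template_length: int, analogue_fraction: float,
              rng: random.Random) -> int:
    """Return the length of strand synthesised before termination."""
    for position in range(template_length):
        if rng.random() < analogue_fraction:  # analogue incorporated
            return position + 1               # chain terminated here
    return template_length                    # full-length genome copied

rng = random.Random(0)
for fraction in (0.0, 0.01, 0.05):
    lengths = [replicate(1000, fraction, rng) for _ in range(1000)]
    print(f"analogue fraction {fraction:.2f}: "
          f"mean strand length {sum(lengths) / len(lengths):.0f} nt")
```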
Hepatitis C is caused by an RNA virus. In 80% of people infected, the disease is chronic, and without treatment they remain infected for the remainder of their lives. However, there is now an effective treatment that uses the nucleoside analogue drug ribavirin combined with interferon. A similar strategy using lamivudine has been developed to treat chronic carriers of the hepatitis B virus.
Infection in other species
Viruses infect all cellular life and, although viruses occur universally, each species of cellular life has its own specific range of viruses, which often infect only that species. Some viruses, called satellites, can replicate only within cells that have already been infected by another virus.
Viruses are important pathogens of livestock. Diseases such as foot-and-mouth disease and bluetongue are caused by viruses. Companion animals such as cats, dogs, and horses, if not vaccinated, are susceptible to serious viral infections. Canine parvovirus is caused by a small DNA virus and infections are often fatal in pups. Like all invertebrates, the honey bee is susceptible to many viral infections. However, most viruses co-exist harmlessly in their host and cause no signs or symptoms of disease.
There are many types of plant virus, but often they cause only a loss of yield, and it is not economically viable to try to control them. Plant viruses are often spread from plant to plant by organisms, known as vectors. These are normally insects, but some fungi, nematode worms, and single-celled organisms have been shown to be vectors. When control of plant virus infections is considered economical, for perennial fruits, for example, efforts are concentrated on killing the vectors and removing alternate hosts such as weeds. Plant viruses cannot infect humans and other animals because they can reproduce only in living plant cells.
Plants have elaborate and effective defence mechanisms against viruses. One of the most effective is the presence of so-called resistance (R) genes. Each R gene confers resistance to a particular virus by triggering localised areas of cell death around the infected cell, which can often be seen with the unaided eye as large spots. This stops the infection from spreading. RNA interference is also an effective defence in plants. When they are infected, plants often produce natural disinfectants that kill viruses, such as salicylic acid, nitric oxide, and reactive oxygen molecules.
Plant virus particles or virus-like particles (VLPs) have applications in both biotechnology and nanotechnology. The capsids of most plant viruses are simple and robust structures and can be produced in large quantities either by the infection of plants or by expression in a variety of heterologous systems. Plant virus particles can be modified genetically and chemically to encapsulate foreign material and can be incorporated into supramolecular structures for use in biotechnology.
Bacteriophages are a common and diverse group of viruses and are the most abundant form of biological entity in aquatic environments: there are up to ten times more of these viruses in the oceans than there are bacteria, reaching levels of 250,000,000 bacteriophages per millilitre of seawater. These viruses infect specific bacteria by binding to surface receptor molecules and then entering the cell. Within a short amount of time, in some cases just minutes, bacterial ribosomes start translating viral mRNA into protein. These proteins go on to become either new virions within the cell, helper proteins that assist the assembly of new virions, or proteins involved in cell lysis. Viral enzymes aid in the breakdown of the cell membrane and, in the case of the T4 phage, just over twenty minutes after injection more than three hundred phages can be released.
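The figures quoted for T4, a burst of roughly three hundred phages in just over twenty minutes, imply explosive exponential growth while hosts last. The back-of-the-envelope sketch below assumes a 25-minute cycle and unlimited host bacteria, both simplifications.

```python
# Back-of-the-envelope arithmetic for lytic phage amplification, using the
# burst size mentioned in the text (~300 new T4 phages per infected cell).
# Assumes a 25-minute cycle and unlimited hosts, which quickly becomes
# unrealistic -- the point is only to show why lysis amplifies so fast.
burst_size = 300
minutes_per_cycle = 25  # assumed, roughly "just over twenty minutes"

phages = 1
for cycle in range(1, 5):
    phages *= burst_size  # every phage infects a cell and bursts it
    print(f"after {cycle * minutes_per_cycle:3d} min: ~{phages:,} phages")
```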
The major way bacteria defend themselves from bacteriophages is by producing enzymes that destroy foreign DNA. These enzymes, called restriction endonucleases, cut up the viral DNA that bacteriophages inject into bacterial cells. Bacteria also contain a system that uses CRISPR sequences to retain fragments of the genomes of viruses that the bacteria have come into contact with in the past, which allows them to block the virus's replication through a form of RNA interference. This genetic system provides bacteria with acquired immunity to infection.
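Restriction enzymes behave, in effect, like a substring search followed by a cut. The sketch below uses EcoRI's real recognition sequence (GAATTC, cutting after the G); the sample DNA string is made up.

```python
# Sketch of what a restriction endonuclease "sees": it scans DNA for a short
# recognition sequence and cuts there. EcoRI's recognition site is GAATTC,
# cleaving between the G and the first A. The sample sequence is invented.

def find_cut_sites(dna: str, recognition: str = "GAATTC",
                   cut_offset: int = 1) -> list[int]:
    """Return cut positions for every occurrence of the recognition site."""
    sites = []
    start = 0
    while (index := dna.find(recognition, start)) != -1:
        sites.append(index + cut_offset)  # cut one base into the site (G^AATTC)
        start = index + 1
    return sites

phage_dna = "TTAGAATTCGGCATGAATTCCTA"
print("cut positions:", find_cut_sites(phage_dna))
```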
Some viruses replicate within archaea: these are double-stranded DNA viruses with unusual and sometimes unique shapes. These viruses have been studied in most detail in the thermophilic archaea, particularly the orders Sulfolobales and Thermoproteales. Defences against these viruses involve RNA interference from repetitive DNA sequences within archaean genomes that are related to the genes of the viruses. Most archaea have CRISPR–Cas systems as an adaptive defense against viruses. These enable archaea to retain sections of viral DNA, which are then used to target and eliminate subsequent infections by the virus using a process similar to RNA interference.
Role in aquatic ecosystems
A teaspoon of seawater contains about one million viruses. Most of these are bacteriophages, which are harmless to plants and animals, and are in fact essential to the regulation of saltwater and freshwater ecosystems. They infect and destroy bacteria in aquatic microbial communities, and are the most important mechanism of recycling carbon in the marine environment. The organic molecules released from the dead bacterial cells stimulate fresh bacterial and algal growth. Viral activity may also contribute to the biological pump, the process whereby carbon is sequestered in the deep ocean.
Microorganisms constitute more than 90% of the biomass in the sea. It is estimated that viruses kill approximately 20% of this biomass each day and that there are 15 times as many viruses in the oceans as there are bacteria and archaea. Viruses are the main agents responsible for the rapid destruction of harmful algal blooms, which often kill other marine life. The number of viruses in the oceans decreases further offshore and deeper into the water, where there are fewer host organisms.
Like any organism, marine mammals are susceptible to viral infections. In 1988 and 2002, thousands of harbor seals were killed in Europe by phocine distemper virus. Many other viruses, including caliciviruses, herpesviruses, adenoviruses and parvoviruses, circulate in marine mammal populations.
Role in evolution
Viruses are an important natural means of transferring genes between different species, which increases genetic diversity and drives evolution. It is thought that viruses played a central role in early evolution, before the diversification of bacteria, archaea and eukaryotes, at the time of the last universal common ancestor of life on Earth. Viruses are still one of the largest reservoirs of unexplored genetic diversity on Earth.
Life sciences and medicine
Viruses are important to the study of molecular and cell biology as they provide simple systems that can be used to manipulate and investigate the functions of cells. The study and use of viruses have provided valuable information about aspects of cell biology. For example, viruses have been useful in the study of genetics and helped our understanding of the basic mechanisms of molecular genetics, such as DNA replication, transcription, RNA processing, translation, protein transport, and immunology.
Geneticists often use viruses as vectors to introduce genes into cells that they are studying. This is useful for making the cell produce a foreign substance, or for studying the effect of introducing a new gene into the genome. In a similar fashion, virotherapy uses viruses as vectors to treat various diseases, as they can specifically target cells and DNA. It shows promising use in the treatment of cancer and in gene therapy. Eastern European scientists have used phage therapy as an alternative to antibiotics for some time, and interest in this approach is increasing because of the high level of antibiotic resistance now found in some pathogenic bacteria. The expression of heterologous proteins by viruses is the basis of several manufacturing processes currently used for the production of proteins such as vaccine antigens and antibodies. Industrial processes using viral vectors have recently been developed, and a number of pharmaceutical proteins are currently in pre-clinical and clinical trials.
Virotherapy involves the use of genetically modified viruses to treat diseases. Viruses have been modified by scientists to reproduce in cancer cells and destroy them, but not to infect healthy cells. Talimogene laherparepvec (T-VEC), for example, is a modified herpes simplex virus in which a gene required for viral replication in healthy cells has been deleted and replaced with a human gene (GM-CSF) that stimulates immunity. When this virus infects cancer cells it destroys them, and in doing so the presence of the GM-CSF gene attracts dendritic cells from the surrounding tissues of the body. The dendritic cells process the dead cancer cells and present components of them to other cells of the immune system. Having completed successful clinical trials, the virus is expected to gain approval for the treatment of a skin cancer called melanoma in late 2015. Viruses that have been reprogrammed to kill cancer cells are called oncolytic viruses.
Materials science and nanotechnology
Current trends in nanotechnology promise to make much more versatile use of viruses. From the viewpoint of a materials scientist, viruses can be regarded as organic nanoparticles. Their surface carries specific tools designed to cross the barriers of their host cells. The size and shape of viruses, and the number and nature of the functional groups on their surface, are precisely defined. As such, viruses are commonly used in materials science as scaffolds for covalently linked surface modifications. A particular quality of viruses is that they can be tailored by directed evolution. The powerful techniques developed by the life sciences are becoming the basis of engineering approaches towards nanomaterials, opening a wide range of applications far beyond biology and medicine.
Because of their size, shape, and well-defined chemical structures, viruses have been used as templates for organizing materials on the nanoscale. Recent examples include work at the Naval Research Laboratory in Washington, D.C., using Cowpea mosaic virus (CPMV) particles to amplify signals in DNA microarray based sensors. In this application, the virus particles separate the fluorescent dyes used for signalling to prevent the formation of non-fluorescent dimers that act as quenchers. Another example is the use of CPMV as a nanoscale breadboard for molecular electronics.
Many viruses can be synthesized de novo ("from scratch") and the first synthetic virus was created in 2002. Although somewhat of a misconception, it is not the actual virus that is synthesized, but rather its DNA genome (in the case of a DNA virus), or a cDNA copy of its genome (in the case of RNA viruses). For many virus families the naked synthetic DNA or RNA (once enzymatically converted back from the synthetic cDNA) is infectious when introduced into a cell; that is, it contains all the information necessary to produce new viruses. This technology is now being used to investigate novel vaccine strategies. The ability to synthesize viruses has far-reaching consequences, since viruses can no longer be regarded as extinct as long as their genome sequence is known and permissive cells are available. As of March 2014, the full-length genome sequences of 3843 different viruses, including smallpox, are publicly available in an online database maintained by the National Institutes of Health.
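For an RNA virus, the cDNA copy mentioned above is simply the complementary DNA strand of the genome, so deriving it in silico is a short base-pairing exercise. The sketch below is a toy: real genome synthesis involves assembling many chemically made fragments.

```python
# Toy version of the cDNA step mentioned in the text: for an RNA virus, what
# is synthesized is a complementary DNA (cDNA) copy of the genome. In silico,
# producing cDNA is just base-pairing (A-T, U-A, G-C, C-G) plus reversal.

RNA_TO_DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def cdna_of(rna_genome: str) -> str:
    """Return the cDNA strand complementary to an RNA sequence (5'->3')."""
    paired = (RNA_TO_DNA_COMPLEMENT[base] for base in reversed(rna_genome))
    return "".join(paired)

print(cdna_of("AUGGCCAUU"))  # -> AATGGCCAT
```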
The ability of viruses to cause devastating epidemics in human societies has led to the concern that viruses could be weaponised for biological warfare. Further concern was raised by the successful recreation of the infamous 1918 influenza virus in a laboratory. The smallpox virus devastated numerous societies throughout history before its eradication. There are only two centers in the world that are authorized by the WHO to keep stocks of smallpox virus: the Vector Institute in Russia and the Centers for Disease Control and Prevention in the United States. Fears that it may be used as a weapon may not be totally unfounded. As the vaccine for smallpox sometimes had severe side-effects, it is no longer used routinely in any country. Thus, much of the modern human population has almost no established resistance to smallpox, and would be vulnerable to the virus.
- ViralZone A Swiss Institute of Bioinformatics resource for all viral families, providing general molecular and epidemiological information
- David Baltimore online Seminar: "Introduction to Viruses and HIV"
- Ari Helenius online seminar: "Virus entry"
- "A Gazillion Tiny Avatars", article on viruses by Olivia Judson, NY Times, 15 Dec 2009
- Khan Academy, video lecture
- Viruses – an Open Access journal
- 3D virus structures in EM Data Bank (EMDB)