The following questions may be helpful in guiding lesson planning:
1. Who are my students?
2. What do I know of their age, this time, this place? (What is the context?)
3. What are my aims? (Overarching educational goals – long-range goals)
4. What do I want students to understand? (Key ideas that you want the students to grasp)
5. How will students demonstrate what they have learned? (Actions in which students show what they have learned – students will be able to explain meaning of ...)
6. What materials will I need?
7. What methods are appropriate to this lesson?
8. What are the...
<urn:uuid:fff0fe5e-48b3-4234-aaa4-56cffde1a1f3>
CC-MAIN-2013-20
http://www.wyzant.com/Blogs/Subjects/hebrew.aspx
2013-05-19T11:10:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697420704/warc/CC-MAIN-20130516094340-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966791
142
Renovate Right: Prevent Lead Poisoning in Children

In this podcast, Dr. Maria Doa, Director of the Environmental Protection Agency's (EPA) National Program Chemicals Division, discusses EPA's new rule for renovations, repairs, and painting activities. The new rule includes information on lead-safe work practices when conducting renovations, repairs, and painting in pre-1978 homes and schools to prevent the spread of lead dust.

Created: 10/2/2008 by National Center for Environmental Health (NCEH). Date Released: 10/2/2008. Series Name: CDC Featured Podcasts.

[Announcer] This podcast is presented by the Centers for Disease Control and Prevention. CDC - safer, healthier people.

[Susan Laird] Welcome to this podcast on lead poisoning prevention. I'm your host, Susan Laird. Joining me by phone today is Dr. Maria Doa, Director of the Environmental Protection Agency's National Program Chemicals Division. Dr. Doa, I understand that EPA issued a rule in April 2008 that, for the first time, requires renovators, repairers, and painters to take steps to protect occupants from being poisoned by lead dust. Why is EPA so concerned?

[Dr. Maria Doa] Well, home renovation can generate a lot of dust if the work area is not properly contained and cleaned. In homes with lead-based paint, this can result in elevated blood-lead levels in young children, sometimes leading to serious learning and behavioral problems. Childhood lead poisoning is a preventable disease and our goal is to eliminate it.

[Susan Laird] What evidence does EPA have that renovation, repair, and painting activities cause an increase in children's blood-lead levels?

[Dr. Maria Doa] There are numerous studies which show that renovation activities have resulted in increased blood-lead levels in children. For example, in one study, EPA evaluated the relationship between children's blood-lead levels and renovation activities in homes with lead-based paint and, according to the study, children living in a home while renovation and remodeling were conducted were more than 30 percent more likely to have elevated blood-lead levels than if there was no renovation in the home. In particular, removing paint by using flame torches, using high temperature heat guns, and preparing surfaces by mechanical sanding significantly increased the risk of elevated blood-lead levels in children.

[Susan Laird] Well, what about children who visit grandma's house or families who move into a home right after it's renovated? Shouldn't they be protected too?

[Dr. Maria Doa] Well, the greatest risk is to young children who live in a home during renovation. And our existing rules also require notification to all homeowners of the potential dangers to children of lead dust from renovation. Our new rule gives the homeowner the option of choosing to have a certified renovator not follow lead-safe work practices. However, if they choose not to use the certified renovator and lead-safe work practices, the homeowner must certify that no children or pregnant women reside in the home.

[Susan Laird] About how many builders, painters, plumbers, electricians, and other contractors will be affected by the new lead rule?

[Dr. Maria Doa] EPA estimates that approximately 210,000 firms will be affected. This is the estimated number of companies that will become certified to engage in renovation, repair, or painting activities.

[Susan Laird] Dr. Doa, tell us what's covered by the rule.

[Dr. Maria Doa] The rule applies to paid contractors working in pre-1978 housing, childcare facilities, and schools with lead-based paint. Contractors include home improvement contractors, maintenance workers in multi-family housing, painters, and other specialty trades. The covered facilities include pre-1978 residential, public, or commercial buildings where children under age six are present on a regular basis, as well as all rental housing. The rule applies to renovation, repair, or painting activities where more than six square feet of lead-based paint is disturbed in a room or where 20 square feet of lead-based paint is disturbed on the exterior.

[Susan Laird] What does the rule require?

[Dr. Maria Doa] Lead-safe work practice standards require that renovators be trained in the use of lead-safe work practices, that renovators and firms be certified, that providers of renovation training be accredited, and that renovators follow specific lead-safe work practice standards.

[Susan Laird] Tell us more about lead-safe work practices.

[Dr. Maria Doa] First, certain dangerous work practices are prohibited for every renovation, including minor maintenance or repair jobs. Prohibited practices include open flame burning or torching; and sanding, grinding, needle gunning, or blasting with power tools not equipped with a shroud and a High Efficiency Particulate Air, or HEPA, vacuum attachment. The rule also prohibits using a heat gun at temperatures greater than 1100° F. In addition, the rule requires that:
• Firms must post signs clearly defining the work area and warning occupants and other persons not involved in the renovation to remain outside of the work area.
• Before beginning the renovation, the firm must isolate the work area so that no dust or debris leaves the work area while the renovation is being performed.
• Waste from renovation activities must be contained to prevent releases of dust and debris.
• And after the renovation is complete, the firm must clean the work area. The certified renovator must verify the cleanliness of the work area using a procedure involving disposable cleaning cloths.

[Susan Laird] What are the responsibilities of the renovation firm?

[Dr. Maria Doa] Firms performing renovations also must ensure that:
• All persons performing renovation activities are certified renovators or have received on-the-job training by a certified renovator;
• A certified renovator is assigned to each renovation performed by the firm; and
• All renovations are performed in accordance with applicable work practice standards.

[Susan Laird] How does a firm become certified?

[Dr. Maria Doa] Firms that perform renovations for compensation need to apply to EPA or to a state that has an approved program for certification to perform renovations. Firms will have to apply for re-certification every five years.

[Susan Laird] When did the new EPA rule become effective?

[Dr. Maria Doa] The rule was effective on June 23, 2008, and contains procedures for the authorization of States, Territories, and Tribes to administer and enforce these standards and regulations. The renovation program in the States, Territories, and Tribes that do not have an authorized renovation program will be administered by EPA's federal program. The rule will be implemented as follows:
• States, Territories, and Tribes may apply for program authorization now by submitting their application to their EPA Regional Office.
• Training programs may apply for accreditation beginning in April 2009.
• Renovation firms may apply for certification beginning October 2009 and must be certified by April 2010.
• After April 2010, all renovations must be performed by certified firms in accordance with the rule's work practice standards and associated recordkeeping requirements.

[Susan Laird] Where can our listeners get more information about the new rule and lead in general?

[Dr. Maria Doa] Listeners can visit our web site at www.epa.gov/lead. They can also call our National Lead Information Center at 1-800-424-LEAD, that's 1-800-424-5323.

[Susan Laird] Thank you, Dr. Maria Doa, for sharing this important information about EPA's new lead renovation, repair, and painting rule.

[Announcer] To access the most accurate and relevant health information that affects you, your family and your community, please visit www.cdc.gov.
<urn:uuid:8c11824d-04af-4267-886b-81f1a429c040>
CC-MAIN-2013-20
http://www2c.cdc.gov/podcasts/player.asp?f=10121
2013-05-19T10:53:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697420704/warc/CC-MAIN-20130516094340-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944405
1,662
NEW YORK--Until now, the life experienced by Japanese Americans who were forced into internment camps during World War II had been shown by the U.S. government mostly in black-and-white, in images that emphasized their "American-ness." But a recently published book of photos taken by an internee on color film shows that elements of Japanese culture, such as sumo and the traditional Bon dance, were being preserved in the camps to a greater degree than previously believed.

The volume is a rare collection of color photos taken by amateur photographer Bill Manbo. They show various aspects of Japanese culture that the internees clung to, even while living behind barbed wire, despite the fact that a majority of them were American citizens. In total, about 120,000 Japanese Americans were sent to the camps as the U.S. government questioned their loyalty after Japan attacked Pearl Harbor in December 1941.

Manbo was a second-generation Japanese American who was forced to move with his family from his home in California to an internment camp at Heart Mountain, Wyoming, in 1942. Soon after Japan attacked Pearl Harbor and brought the United States into World War II, cameras were confiscated from Japanese Americans due to fears the equipment could be used for spying. However, Manbo may have been allowed to hold on to his camera to take family photos because the camp was located a considerable distance from the West Coast. Some of the photos were of his wife and son.

After the end of World War II, Manbo eventually returned to California, and he kept the photos until his death in 1992. Since then, his son has held on to them. Even after more than 60 years, the color in the photos is still clear and vivid because Manbo shot with positive film of the kind used in making slides.

The book, titled "Colors of Confinement," was published by the University of North Carolina Press. Eric L. Muller, a law professor at the University of North Carolina, said the color photos bring a new familiarity to life in the internment camps because all the photos available until now had been black-and-white.

Manbo's photos show young women dressed in kimono for a Bon dance as well as internees engaging in sumo while wearing the traditional "mawashi." While most of the photos are of daily life in the camp, there are some with a political message.

In 1943, the U.S. government implemented a loyalty test for internees, asking them if they were willing to serve in the U.S. military and if they were prepared to pledge loyalty to the United States rather than the emperor of Japan. The several hundred who answered "No" to both questions were sent to another internment facility in California. While Manbo was not among them, he did take a photo of the crowd of internees who gathered to see off those being sent away.

In response to the question about loyalty to the United States, he is said to have responded that he would be loyal if his rights as an American citizen were restored. "If we get all our rights back," Manbo wrote, "who wants to fight for a c.c. (concentration) camp?"
<urn:uuid:2d5b01b8-1344-4e50-a5e6-37bb4204b420>
CC-MAIN-2013-20
http://ajw.asahi.com/article/behind_news/social_affairs/AJ201209150016
2013-05-22T07:54:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.986983
677
A publication of the Archaeological Institute of America Volume 65 Number 5, September/October 2012 Death on the Roman Empire's eastern frontier In the first century A.D. Roman army veterans arrived in what is now northern Macedonia and settled near the small village of Scupi. The veterans had been given the land by the emperor Domitian as a reward for their service, as was customary. They soon began to enlarge the site, and around A.D. 85, the town was granted the status of a Roman colony and named Colonia Flavia Scupinorum. (“Flavia” refers to the Flavian Dynasty of which Domitian was a member.) Over the next several centuries Scupi grew at a rapid pace. In the late third century and well into the fourth, Scupi experienced a period of great prosperity. The colony became the area’s principal religious, cultural, economic, and administrative center and one of the locations from which, through military action and settlement, the Romans colonized the region. Scupi, which gives its name to Skopje, the nearby capital of the Republic of Macedonia, has been excavated regularly since 1966. Since that time archaeologists have uncovered an impressive amount of evidence, including many of the buildings that characterize a Roman city— a theater, a basilica, public baths, a granary, and a sumptuous urban villa, as well as remains of the city walls and part of the gridded street plan. Recently, however, due to the threat from construction, they have focused their work on one of the city’s necropolises, situated on both sides of a 20-foot-wide state-of-the art ancient road. In the Roman world, it was common practice to locate necropolises on a town’s perimeter, along its main roads, entrances, and exits. Of Scupi’s four necropolises, the southeastern one, which covers about 75 acres and contains at least 5,000 graves spanning more than 1,500 years, is the best researched. The oldest of its burials date from the Late Bronze and Early Iron Age (1200–900 B.C). These earlier graves were almost completely destroyed as Roman burials began to replace them in the first century. According to Lence Jovanova of the City Museum of Skopje, who is in charge of the necropolis excavations, the burials have provided much new information crucial to understanding the lives of ancient Scupi’s residents, including the types of household items they used, their life spans, building techniques, and religious beliefs. In just the last two years alone, nearly 4,000 graves have been discovered and about 10,000 artifacts excavated, mostly objects used in daily life such as pots, lamps, and jewelry. Among the thousands of graves there is a great variety of size, shape, style, and inhumation practice. There are individual graves, family graves, elaborate stone tombs, and simple, unadorned graves. Some burials are organized in regular lines along a grid pattern parallel to the main road, as was common in the Roman world. Other individuals are buried in seemingly random locations within the necropolis area, more like a modern cemetery that has been in use for a long time. The oldest Roman layers, dating to the first through mid-third centuries A.D., contain predominantly cremation burials. The later Roman layers, however, containing graves from the third and fourth centuries A.D., are, with very few exceptions, burials of skeletons. 
According to Jovanova, this variety in burial practice is normal for this time and reflects a complex, long-term, and regionwide demographic change resulting not only from an increased number of settlers coming from the east, but also from internal economic, social, and religious changes.

This past summer, Jovanova's team was finishing excavations in one section of the southeast necropolis, where she hopes to uncover more evidence about Scupi's history and its inhabitants among the 5,000 to 10,000 graves she thinks are left to investigate. Although there are construction pressures on archaeological work in the necropolis, the ancient city is legally protected from any modern building, so future work will focus on excavating the city walls and buildings. There are also plans to create an archaeological park on the site.

Matthew Brunwasser is a freelance writer living in Istanbul.
<urn:uuid:6499c543-de38-4527-a231-8bea0a9eb674>
CC-MAIN-2013-20
http://archive.archaeology.org/1209/features/scupi_macedonia_roman_colony_bronze_age.html
2013-05-22T08:26:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.969093
925
Contents - Previous - Next This is the old United Nations University website. Visit the new site at http://unu.edu Better understanding of environmental conditions and trends Many international assessment activities launched under UNEP's "Earthwatch" programme since 1972 drew on well-established procedures; international scientific activities in Antarctica started in the last century, and cooperative investigations in the fields of meteorology and health were well advanced long before the UN came into being. Antarctica shows how cooperative scientific research contributed to an international regime of the sort that may now be developing for "sustainable development" and "global change." Under the first International Polar Year of 1882-1883, 12 nations studied meteorology, geomagnetism, and the Aurora Borealis and Australis. The clear advantages of cooperative research and observations led, in 1932-1933, to the Second International Polar Year, in which 44 countries joined forces for scientific work in the polar regions. Then, in 1957, the proposed Third Polar Year was converted into the International Geophysical Year (IGY), during which 12 countries established 48 new stations in Antarctica, while man's first artificial satellite - Sputnik - demonstrated the feasibility of synoptic observation of Earth phenomena on a global scale. Only two years later, in 1959, the US convened the 11 other states that had collaborated in Antarctica for negotiation of what became the 1959 Antarctica Treaty.8 This treaty paved the way for much that was to follow, including the 1967 Outer Space Treaty,9 and laid the basis for the concept that commons areas beyond national jurisdiction deserved protection under international legal regimes. IGY successes in 1957-1958 also inspired similar programmes in related fields, notably the International Biological Programme that was subsequently converted into Unesco's Man and the Biosphere (MAB) programme, and highlighted the need to protect natural resources such as biological diversity; this information laid the basis for a convention now being negotiated to protect biological diversity. With this background it was not surprising that when governments set up a new Environment Fund in 1972, they defined its use primarily in terms of supporting assessment functions with a rich information component, while giving less-specific attention to other purposes such as improved management and public awareness. 
The new fund was to support regional and global monitoring, assessment and data-collecting systems, including, as appropriate, costs for national counterparts; the improvement of environmental quality management; environmental research; information exchange and dissemination; public education and training; assistance for national, regional and global environmental institutions; the promotion of environmental research and studies for the development of industrial and other technologies best suited to a policy of economic growth compatible with adequate environmental safeguards; and such other programmes as the Governing Council may decide upon.10 In carrying out these "programmes of general interest," the Assembly directed that "due account should be taken of the special needs of the developing countries," and a major portion of UNEP's "catalytic" funding ever since has been for this purpose, as in supporting research and monitoring stations in developing-country regions under UNEP's Global Environmental Monitoring System (GEMS).11 UNEP's "Earthwatch" programmes have proven their worth by developing reliable data on global environmental conditions and trends ranging from pesticide levels in human tissue and methyl mercury levels in regional fisheries, to rates of loss of arable soil and changes in the mass balance of glaciers as an indicator of climate change. These activities are now ready for expansion and there is widespread agreement on the need for major reinforcement of UNEP's capacity to take the lead in providing "early warning" of major environmental risks, assessing these risks and helping states to develop cooperative measures required to reduce them or mitigate the consequences - such as through contingency planning. But to serve these purposes information must be assessed and converted into credible statements about conditions and trends and their significance for human well-being. To cope with scientific uncertainty about likely changes and basic cause-and-effect relationships - that often cross over sectoral lines -assessment processes have relied on international groups of experts to assemble the best scientific capabilities. A major problem is the large number of countries that lack human and institutional capabilities to gather information and assess its significance in terms of local planning and decision-making. UNDP's "Sustainable Development Network" is intended to address this problem and will undoubtedly figure prominently at UNCED, as will the new Global Environment Facility, discussed below. The growing influence of international experts working with international civil servants in the UN system reflects the fact that even highly developed countries gain by pooling their expertise and setting up international groups of experts for this purpose. 
An early such group was the UN Scientific Group of Experts on the Effects of Atomic Radiation - UNSCEAR - set up by the General Assembly some 30 years ago.12 Their work contributed to negotiation of one of the few international agreements that measurably lowered global risk; the Partial Test Ban Treaty of 1963 reduced man-made radiation from about 7 percent of natural background radiation in the early 1960s to less than I per cent in 1980.13 Another group well-known in environmental circles is GESAMP the Group of Experts on Scientific Effects of Marine Pollution, whose third decadal review was recently published.14 Experts in GESAMP are selected and appointed by international organizations to provide advice and assessments they need to improve the effectiveness of the UN system in dealing with related matters. GESAMP assessments have gained increasing credibility, in part because its expert members - many of whom come from government institutions- are expressly working in an expert, non-instructed status. More recently, the Intergovernmental Panel on Climate Change (IPCC) was set up by the governing bodies of WMO and UNEP in order to harness scientific and other expertise in preparing the 1990 Second World Climate Conference. Three working groups were established - on research, on climate impacts, and on possible response strategies. In effect, IPCC supplanted the work of an earlier interagency group - the Advisory Group on Greenhouse Gases (AGOG) - in which experts chosen by WMO, UNEP, and ICSU provided scientific advice and issued a consensus warning in 1985 about the likelihood of climate change and the need to study the implications.15 An important difference between the AGGG and IPCC is that IPCC experts are under government instruction, whereas those in AGGG - whose message was clearly unpalatable to some governments - were not.16 "Impartiality" of international organizations and civil servants has been demanded and contested ever since the League of Nations, but here the issue is whether scientific judgement should be in the hands of instructed or uninstructed experts. Clearly it is useful to engage government experts whether or not under instructions in the study of policy matters, especially when the economic consequences may be severe, as in the case of policy responses to climate warming. But problems of credibility arise when international expert groups on scientific assessments are dominated by experts from developed countries or by "instructed" experts - whether from government, industry, or various "pressure" or "special-interest" groups. In the production of assessments of environmental conditions and trends the presence of such experts is bound to raise questions as to the reliability and credibility of the final product and thereby reduce its ability to help decision makers.17 As the scope of required information expands from scientific to include economic and social data, it will be increasingly desirable to broaden participation in international assessments and strengthen strict peer-review proceedings that are fully "transparent." This is merely an extension of the increasing role of international nongovernmental organizations (INGOs) in partnership with the UN system - like ICSU's role with WMO and UNEP in climate assessment, and IUCN's role alongside UNEP and FAO in drawing up action plans to protect tropical forests and biological diversity. 
But national NGOs also are playing roles of greater significance, and their participation may be critical when it comes to trying to apply international findings locally, where decisions are made that determine whether or not sustainable development can be achieved. International environmental impact assessments? For over two decades the conventional approach to reducing harmful risks has been first to improve "assessment" processes of monitoring, research, and information exchange so that current environmental conditions and trends can be measured and their significance for human well-being weighed, and then to find ways to incorporate this information into "management" decisions to make them less harmful.18 Accordingly, attention has focused on the need to develop environmental information and integrate it in development planning processes so as, in the words of the World Commission on Environment and Development (WCED), "to make development sustainable - to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs.19 Considerable progress has been made in refining this approach and applying it nationally at the project level in what are called "Environmental Impact Assessment" (EIA) procedures.20 Growing awareness of international risks arising from local acts suggests that EIA procedures should now be strengthened and applied internationally, at least where international financial assistance is being provided. This leads to contentious issues about interference in domestic affairs and "conditionality" in assistance. The obligation to avoid harmful external impacts has been evolving since Stockholm. It has long been recognized that local actions can have environmental effects far beyond the place (and time) of origin, and nations were able to agree at the 1972 Environment Conference on the principle of responsibility not to cause external damage: States have, in accordance with the Charter of the United Nations and the principles of international law, the sovereign right to exploit their own resources pursuant to their own environmental policies, and the responsibility to ensure that activities within their jurisdiction or control do not cause damage to the environment of other States or of areas beyond the limits of national jurisdiction21. But state sensitivities on this issue were highlighted at Stockholm when an "information" principle was approved calling for promotion of scientific research and support for "the free flow of up-to-date scientific information... 
to facilitate the solution of environmental problems..."22 It was noteworthy that this principle could only be adopted after deletion of a disputed portion: Relevant information must be supplied by States on activities or developments within their jurisdiction or under their control whenever they believe, or have reason to believe, that such information is needed to avoid the risk of significant adverse effects on the environment in areas beyond their national jurisdictions.23 Despite the reluctance of many states as a matter of principle to release information on possible external environmental impacts, when the issue arose in specific terms at a regional level agreement was possible; a provision was incorporated into the 1978 Kuwait Convention under UNEP's Regional Seas Programme to the effect that, each Contracting State shall endeavor to include an assessment of the potential environmental effects in any planning activity entailing projects within its territory, particularly in the coastal areas, which may cause significant risks of pollution in the Sea Area.24 A further provision encouraged the development of procedures for disseminating this information with an undertaking to develop technical guidelines "to assist the planning of development projects in such a way as to minimize their harmful impact on the marine environment."25 A similar obligation was agreed upon in the Abidjan Convention of 1981, according to which parties "shall develop technical and other guidelines to assist the planning of their development projects in such a way as to minimize their harmful impact on the Convention area," and to include an assessment of potential environmental effects in the area and develop procedures for dissemination of such information.26 To encourage the incorporation of assessment information in planning and decision-making for activities that risk environmental impacts, UNEP's Governing Council in 1982 requested that appropriate "guidelines, standards and model legislation" be drawn up in the field of "Environmental Impact Assessment."27 The resulting 13 principles were approved by UNEP's Governing Council "for use as a basis for preparing appropriate national measures, including legislation, and for international cooperation... 
"28 To ensure that environmental effects are taken fully into account before decisions are taken by "the competent authority or authorities," and to encourage reciprocal procedures for information ex change, notification, and consultation between states when proposed activities are likely to have significant transboundary effects, these principles recommend that government agencies, members of the public, experts in relevant disciplines, and interested groups should have an opportunity and time to comment on EIA information before any decision is made; that any such decision should be in writing, state the reasons therefor, and include the provisions, if any' to prevent, reduce, or mitigate damage to the environment, and that: when information provided as part of an EIA indicates that the environment within another State is likely to be significantly affected by a proposed activity, the State in which the activity is being planned should, to the extent possible: (a) notify the potentially affected State of the proposed activity; (b) transmit to the potentially affected State any relevant information from the EIA, the transmission of which is not prohibited by national laws or regulations; and (c) when it is agreed between the States concerned, enter into timely consultations.29 However, a recent hint of continuing sensitivity on this issue is found in a 1989 definition of the term "sustainable development" that was adopted in UNEP's Governing Council and has since been cited in other fore, including the 1990 Climate Conference: "Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs and does not imply in any way encroachment upon national sovereignty."30 Whereas free exchange of environmental information among technicians is, fortunately, far less difficult in practice than in principle, an examination of the treatment of information exchange in the Law of the Sea Treaty suggests that while state sensitivity about research information is less pronounced for environmental information than for non-environmental information, coastal state sensitivity increases as information gathering approaches land, especially within the exclusive economic zone (EEZ). For example, under Part XII of the Law of the Sea Treaty on "Protection and Preservation of the Marine Environment," states are encouraged without qualification to cooperate in scientific research and exchange of information and data about marine pollution (Art. 200) and, "to observe, measure, evaluate and analyse... risks or effects of pollution of the marine environment" (Art. 204), and "publish reports of the results obtained" (Art 205). But under Part XIII on the broader topic of "Marine Scientific Research," international cooperation is to be promoted "in accordance with the principle of respect for sovereignty and jurisdiction and on the basis of mutual benefit" (Art. 242), and much stricter requirements are applied to near-shore activities where coastal states have "the exclusive right to regulate, authorize and conduct marine scientific research in the territorial sea" (Art. 245) and, in their EEZ and continental shelf, have full authority to withhold consent for scientific research when it is "of direct significance for the exploration and exploitation of natural resources, whether living or non-living," among other characteristics (Arts. 
245 and 246).31 Information to strengthen international agreements Information - scientific or otherwise - is not static in a changing world and the need for flexibility in international agreements is now well accepted. During negotiation of the London Ocean Dumping Convention in 1971-1972, it became clear that flexibility was needed in selecting substances that require strict international control and in adjusting them to take account of evolving information about their toxicity or other characteristics. This led to the use of "black" and "grey" annexes attached to the formal agreement, with eased provisions for revision of these annexes as new knowledge came to light. This device has been used since then in a number of other treaties, notably in those regulating dumping in regional seas and, most recently, in the 1987 Montreal Protocol32 and the 1989 Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal.33 A recent example of the changing nature and greater intrusiveness of information needed to improve and measure performance under an environmental agreement is found in the 1987 Montreal Protocol on Stratospheric Ozone. Here information is called for on production and consumption levels of controlled substances (including import and export figures), as well as the transfer of such production as permitted under "industrial rationalization" provisions (Articles 1 and 2). Parties are required to provide initial statistical data on production, imports, and exports or "best possible estimates of such data where actual data are not available," and thereafter to provide statistical data to the secretariat on its annual production (with separate data on amounts destroyed by technologies to be approved by the Parties), imports, and exports to Parties and non-Parties, respectively. of such substances for the year during which it becomes a Party and for each year thereafter. It shall forward the data no later than nine months after the end of the year to which the data related Additional information is called for on "Research, Development, Public Awareness and Exchange of Information": 1. The Parties shall cooperate, consistent with their national laws, regulations and practices and taking into account in particular the needs of developing countries, in promoting, directly or through competent international bodies, research, development and exchange of information on: (a) best technologies for improving the containment, recovery, recycling or destruction of controlled substances or otherwise reducing their emissions; (b) possible alternatives to controlled substances, to products containing such substances, and to products manufactured with them; (c) costs and benefits of relevant control strategies. 2. The Parties, individually, jointly or through competent international bodies, shall cooperate in promoting public awareness of the environmental effects of the emissions of controlled substances and other substances that deplete the ozone layer. 3. 
Within two years of the entry into force of the Protocol and every two years thereafter, each Party shall submit to the secretariat a summary of the activities it has conducted pursuant to this Article.35 There is a corresponding obligation on the secretariat of the Montreal Protocol to "receive and make available, upon request by a Party, data provided pursuant to Article 7, to prepare and distribute regularly to the Parties reports based on information received pursuant to Articles 7 and 9, and to provide, as appropriate, the information and requests referred to (above) to such non-party observers." The Basel Convention provides another example of new kinds of information parties are obliged to provide: - to share information with a view to promoting environmental! sound management of hazardous wastes, including 'harmonization of technical standards and practices" for their management; - to cooperate in "monitoring the effects" of waste management on human health and the environment; - to cooperate in the development of "new environmentally sound low-waste technologies" with a view to "eliminating, as far as practical, the generation of hazardous wastes... "; - to cooperate in the "transfer of technology and management systems" including in "developing the technical capacity among Parties, especially those which may need and request technical assistance in this field"; - to cooperate in developing "appropriate technical guidelines and/or codes of practice."36 At a time when some governments are finding it difficult to keep up with reporting requirements under international agreements, the expansion of information sought is adding a burden that will require assistance if all states are to comply. Ecosystem and resource information with policy implications Pollutants per se are a significant part of the problem but far from the whole environmental dimension of "global change," especially as food and other shortages reflect population pressures as well as the degradation of natural resources and the capacity of natural systems to perform functions vital for human well-being. Now that anthropogenic contributions are seen to be driving global change, non-pollutant environmental impacts are gaining more attention; especially the impact on natural systems of larger numbers of consumers. Information needed to cope with these problems tends to focus on national resources important to a country's economy and is frequently "ecosystem" specific, i.e., it may require aggregation of data across frontiers that can raise concerns about national sovereignty. This new direction was signalled in 1980 when the General Assembly approved the World Conservation Strategy, in which "conservation" was defined in terms that foreshadowed "sustainable development": "the management of human use of the biosphere so that it may yield the greatest sustainable benefit to present generations while maintaining its potential to meet the needs of future generations."37 The strategy raised additional issues that some countries find difficult in that it also encouraged a strong role for local NGOs in collecting and analysing resource information, as well as increased access to planning and decision-making on the part of people who may be affected. This approach was warmly endorsed in the 1987 Brundtland Report. 
Over time, monitoring and other assessment functions have significantly improved human understanding of processes and trends of change, and attention has focused on providing the kinds of data that should be useful for economic planning and decision-making in the "development" context.38 The precautionary principle The obvious need for caution when proceeding rapidly in the face of uncertainty led ministers from countries of the ECE region meeting in May 1990 to adopt the "Bergen Ministerial Declaration on Sustainable Development in the ECE Region," containing what has since become known as the "precautionary principle": In order to achieve sustainable development, policies must be based on the precautionary principle. Environmental measures must anticipate, prevent and attack the causes of environmental degradation. Where there are threats of serious or irreversible damage, lack of full scientific certainty should not be used as a reason for postponing measures to prevent environmental degradation.39 Acknowledging that "environmental problems require greater and more systematic use of science and scientific knowledge," ECE ministers agreed to "invite the international science community to contribute towards the advancement of sustainable development policies and programmes. Scientific analyses and forecasts are especially needed to help identify longer term policy options." The "symbiotic nature of economy and the environment" was reflected in a call in the Bergen Declaration for development of "sound national indicators for sustainable development to be taken account of in economic policy making" by means of supplementary national accounting systems to reflect as fully as possible the importance of natural resources as depletable or renewable economic assets.40 As a measure of sustainability, or the ability of a society to protect the interests and equity of future generations, better indicators than are now available will be needed; for example, a measure of changing ratios between food production and population growth, or of arable soil per capita in those countries unable to afford high-energy food-production techniques. Education and public awareness were also recognized in the Stockholm Declaration as ... essential in order to broaden the basis for an enlightened opinion and responsible conduct by individuals, enterprises and communities in protecting and improving environment in its full human dimensions. It is also essential that mass media of communication... disseminate information of an educational nature, on the need to protect and improve the environment in order to enable man to develop in every resect.41 In a section of the Bergen Declaration on Awareness Raising and Public Participation, a number of more specific steps for '`optimizing democratic decision making related to environment and development issues" are proposed, among them: - to integrate and use environmental knowledge in all sectors of society; - to stimulate national and international exchanges of environmental information and foster scientific and technological cooperation in order to achieve sustainable development; - to encourage... 
schemes for informing the consumer of the environmental qualities and of the risks of industrial products from "cradle to grave"; - to develop further national and international systems of periodic reports of the state of the environment; - to undertake the prior assessment and public reporting of the environmental impact of projects; - to reaffirm and build on the CSCE conclusions regarding the rights of individuals, groups and organizations concerned... to have access to all relevant information and to be consulted...; - to develop rules for free and open access to information on the environment; and - to ensure that members of the public are kept informed and that every effort is extended to consult them and to facilitate their participation in the decision-making process on plans to prevent industrial and technological hazards in areas where they live or work.42 Because the Bergen meeting was a regional contribution to the preparations for UNCED, it is likely that these new approaches will be reflected in decisions in 1992, but not all countries are likely to find these new information requirements convenient.43 Another current example of expanding information requirements is found in the EEC Council Regulation on the establishment of the European Environment Agency and the European environment in formation and observation network intended to provide "objective, reliable and comparable information" to enable these governments "to take the requisite measures to protect the environment, to assess the results of such measures and to ensure that the public is properly informed about the state of the environment."44 In furnishing information which can be directly used in the implementation of Community environmental policy,...priority will be given to the following areas of work: air quality and atmospheric emissions, water quality, pollutants and water resources, the state of the soil, of the fauna and flora, and of biotopes, land use and natural resources, waste management, noise emissions, chemical substances which are hazardous for the environment, and coastal protection.45 While an approach along these lines might be highly desirable to reduce global risk, in many regions of the world it would be difficult to find human and institutional capabilities up to the task of providing the required information. A new regime for sustainable development Looking back at the 1972 Stockholm Declaration of Principles, the Brundtland Commission suggested in 1987 the need to "consolidate and extend relevant legal principles in a new charter to guide state behaviour in the transition to sustainable development." Towards this end they suggested that 22 principles should be negotiated, first in a declaration, then in a Convention on Environmental Protection and Sustainable Development. 
Five of these applied specifically to information: - cooperation in the exchange of information; -prior notice of planned activities; - cooperative arrangements for environmental assessment and protection; and - cooperation in emergency situations.46 Other suggested principles would assert the fundamental human right to an environment adequate for health and well-being; require states to conserve natural resources for the benefit of present and future generations and maintain ecosystems and related ecological processes so that benefits are available indefinitely; promote optimum sustainability; establish specific environmental standards and both collect and disseminate data and carry out prior environmental impact assessments and inform all persons in a timely manner who may be affected and grant those persons access to and due process in judicial proceedings. States also would ensure that natural resources and the environment are an integral part of development planning and would cooperate with other states or through international organizations in fulfilling their obligations. With regard to transboundary aspects, shared natural resources should be used in a reasonable and equitable manner, and serious risks of substantial harm should be prevented or abated and compensated for under special procedures for negotiations between states without discrimination between external and internal detrimental effects. Finally, states should be held responsible under these principles and resolve any disputes peacefully through a step-by-step approach including, as a last resort, a binding process of dispute settlement. The World Commission was assisted in this work by an Experts Group on Environmental Law that drew up specific recommendations to strengthen the international legal framework in support of sustainable development. In his Foreword to this report, former President of the World Court of Justice, Nagendra Singh, observed that the general principles recommended do not merely apply in areas beyond the limits of national jurisdiction, or in the transboundary context; they are also intended to apply in the entirely domestic domain, and thus purport to break open traditional international law on the use of natural resources or environmental interferences and follow the practice that has developed since the 1948 adoption of the Human Rights Declaration.47 Looking to the future this group called for a new UN Commission for Environmental Protection and Sustainable Development based on a membership of "competent individuals serving in a personal capacity...elected preferably by secret ballot by States Parties to the Convention." The proposed functions of the Commission would be to review regular reports from states and the UN system and other international governmental and non-governmental organizations on actions taken in support of the Convention. The Commission would be empowered to issue periodic public reports, assess and report on alleged violations, and review recommendations for proposed improvements to the Convention and other relevant international agreements. 
They also recommended the appointment by the Commission of a UN High Commissioner for Environmental Protection and Sustainable Development with functions similar to an "ombudsman" and "trustee" for the environment, who would assess communications from private entities on compliance or violations of the Convention (and related agreements) and who could submit such cases for consideration by the UN Commission or other appropriate organizations. The High Commissioner would have special responsibilities for areas beyond national jurisdiction, as well as for representing the interests of future generations. Unfortunately, in the absence of any follow-up to the Commission recommendations, there is no basis on which to judge the feasibility of these far-reaching proposals. Perhaps their feasibility can only be tested after improvements have been made in information-handling capabilities of states now weak in them. Possible future directions with regard to information are suggested by two other recent developments: a new facility at the World Bank, and the International Geophysical Biological Program. Both developments offer the prospect of mobilizing financial and human resources to strengthen information-handling capabilities in developing countries without which it is difficult to see how a "precautionary" approach could be widely applied. The Global Environmental Facility (GEF) In 1989 France proposed at the World Bank that a new special facility be set up alongside, but separate from, the Bank's "soft-loan" affiliate, the International Development Association, with a target of $1-$1.5 billion for concessional aid devoted to preservation of natural resources' protection of atmosphere, energy efficiency, and other activities aimed at reducing global risk and supportive of sustainable development. As agreed in late 1990 by 25 developed and developing countries with the World Bank, UNDP, and UNEP, GEF is a "pilot program to obtain practical experience in promoting and adopting environmentally sound technologies and in strengthening country-specific policy and institutional frameworks to support prudent environmental management."48 It will also provide operational information relevant in formulating other global conventions and in advancing the agenda that governments will be addressing at UNCED in June 1992. GEF has four objectives: 1. to support energy conservation, the use of energy sources that will not contribute to global warming, forestry management, and reforestation to absorb carbon dioxide in order to limit the increase in greenhouse gas emissions; 2. to preserve areas of rich ecological diversity; 3. to protect international waters where transboundary pollution has had damaging effects on water purity and the marine environment; and 4. to arrest the destruction of the ozone layer by helping countries make the transition from the production and use of CFCs, haloes, and other gases to less damaging substitutes.49 GEF shows that new concern over "global" risks has led governments to set up new funds to cover the extra costs of specific actions that countries in need of assistance could take in a common effort to reduce global risks. According to this new approach, when such actions should also be taken to reduce local risks or impacts, these costs should be covered by normal development funds; only when the costs of these actions cannot be internalized domestically are they eligible for coverage by GEF funds. 
The case is clear that many countries need additional help as an incentive to join a common effort to reduce global risk. The incentive, under both the Montreal Protocol and GEF, is the provision of financial support to cover such aspects as the difference between a fair commercial price for technology to reduce global risk and what the intended user can afford. Hopefully, this approach will reduce abstract arguments about the sanctity of intellectual property to a more practical basis on which progress can be made. The current appreciation of actions needed at local levels to reduce global risks - notably concerning depletion of stratospheric ozone by CFCs, and actions to slow the less-certain threat of climate change - has led to the identification of technical-assistance costs and capital investments that must be provided if all countries are to join in an agreed attack on the problem. Under present conditions of indebtedness and lack of capital flows into developing countries, the need for greater international financing is clear if preventive action is to be taken to reduce risks from future actions in the developing world. International Geophysical Biological Program (IGBP) Traditionally, precise data and information requirements for "assessment" purposes have been identified by the relevant scientific community, such as the work of ICSU/SCOPE in helping the design in 1971 of what since became UNEP's Global Environment Monitoring System (GEMS), and current work under ICSU in relation to the forthcoming International Geophysical Biological Program (IGBP), better known as "Global Change," to: describe and understand the interactive physical, chemical, and biological processes that regulate the total Earth system, the unique environment that it provides for life, the changes that are occurring in this system, and the manner in which they are influenced by human activities.50 Along with the World Climate Research Program (WCRP) and other international research efforts, IGBP will address critical unknowns related to global environmental change that can provide insights necessary if future development is to be put on a sustainable basis. Increasingly the needs of non-scientific users are recognized as vital if human impacts on natural systems are to be made less destructive, especially "policy and decision-makers" at the national and local levels, where key decisions are made daily. An important proposal late in 1990 was "START" - the Global Change System for Analysis, Research, and Training. It calls for major strengthening of regional networks and national capabilities, both to contribute information needed for global assessments and to strengthen local capabilities to employ it for planning.51 The concept is a world-encompassing system of Regional Research Networks, each of which would have a Regional Research Centre to serve as the information centre for the regional network. Each regional centre would engage in five functions supporting national institutes within the region: - research, including documentation of environmental change -training - data management - synthesis and modelling - communications between scientists and private and public-sector decision makers. Contents - Previous - Next
<urn:uuid:064737c8-6c53-418b-b34d-fe95b8e5e5d3>
CC-MAIN-2013-20
http://archive.unu.edu/unupress/unupbooks/uu25ee/uu25ee0c.htm
2013-05-22T08:25:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940512
7,013
Reproducibility is a key part of science, even though almost nobody does the same experiment twice. A lab will generally repeat an experiment several times and look for results before they get published. But, once that paper is published, people tend to look for reproducibility in other ways, testing the consequences of a finding, extending it to new contexts or different populations. Almost nobody goes back and repeats something that's already been published, though. But maybe they should. At least that's the thinking behind a new effort called the Reproducibility Initiative, a project hosted by the Science Exchange and supported by Nature, PLoS, and the Rockefeller University Press. There are good reasons that scientists usually don't do straight-up repeats of published experiments. Funding agencies have little interest in paying for work that's redundant or derivative, and few journals are willing to run something that's essentially a do-over. Plus, as a researcher, it's simply hard to get excited about doing an experiment where you think you already know what the answer is going to be. With so little incentive for reproducing results, it's not surprising that most people only try to reproduce something if they think the original report was wrong. How does the Reproducibility Initiative hope to get past this? They've got a partial solution. PLoS one has agreed to create a special reproducibility section, where they'll publish both the original finding, and any results that come out of attempts to reproduce it. That should allow researchers the possibility of getting a second paper out of a single set of results. If the original paper that's being reproduced was published in a Nature or Rockefeller Press publication, they'll link in to the report of reproduction. Data from the verification will be hosted on the Figshare site. That still leaves a couple of big issues: who does the work, and how does it get paid for? This is where a bit of enlightened self-interest may be at play. The Initiative is hosted by the Science Exchange, which makes money by linking researchers in need of expertise to labs that have it. A researcher could advertise that they need a specific assay done—say, a challenging bit of mouse genotyping—and labs that are good at genotyping can submit bids to perform the work. When a bid is accepted, Science Exchange takes a cut of the price. Science Exchange is interested in the Reproducibility Initiative because it's set up so that, when a lab wants to see its own work reproduced, it is supposed to find a contractor to do so via the company's service. The missing piece? Someone willing to pay to see an experiment replicated as precisely as possible. The site promises that there will be announcements soon regarding groups that are willing to put up the money but, so far, there are no specifics. If that can be sorted out, then there's no reason this wouldn't work. Researchers have an incentive—a second publication for minimal effort—and the people who actually do the experiments get paid for doing something they're presumably good at. But is it really necessary? Here, the answer is a bit more complicated. In principle, it would be good to know what percentage of results can actually be reproduced. But my expectation is that they'd vary dramatically from field to field. 
A lot of behavioral studies are done on small populations of undergrads from a single university, and it's probably safe to assume there's a risk that undergrads in Beijing, Boston, and BYU could produce significantly different results. But that's probably a minimal risk in the case of something like structural biology. There, unless someone messes up the data or an algorithm, it's hard for things to go wrong, since the analysis is mostly a matter of well-understood calculations. For that and similar fields, problems with reproducibility mostly center on the code that performs these calculations, which may be under licenses that don't allow others to even look at it. Between these extremes, the value of direct reproduction is probably going to be hit or miss. A highly significant result will end up being tested in various ways, simply as a result of different labs following up on it. But some ideas that turned out to be wrong have stuck around and influenced thinking for a while, and sorting those out quickly through reproduction could move science along faster than it would have moved on its own. Whether it succeeds or not, the effort is a tacit admission that, with the huge volume of scientific publication and continuing problems with both honest mistakes and outright fraud, it's time to at least consider ways in which we could provide a greater degree of confidence in scientific findings.
<urn:uuid:28a0ad0b-01dd-4991-85b9-152e43f5dbef>
CC-MAIN-2013-20
http://arstechnica.com/science/2012/08/scientific-reproducibility-for-fun-and-profit/?comments=1&post=23169520
2013-05-22T08:28:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.971432
949
The n-body simulation code essentially uses Euler's method for stepwise integration of the gravity equations. At each cycle, the force vector between each pair of bodies is calculated and summed to find the net force vector acting on each body. According to Newton's 2nd law of motion, F = ma, the force on each body, when divided by its mass, determines its acceleration. The acceleration over the time interval dt of the cycle determines the change in velocity for that cycle, and then the new velocity applied over the same interval determines the new position of the body. This is a first-order Euler simulation, which tends to result in fairly high errors. A slight improvement would average the force at time t and t + dt, which would result in a second-order solution. The goals for this system, however, are less concerned with the accuracy of the simulation than with making interesting-looking movement that results in interesting music.

Some quirks of the algorithm:
- Discrete integration schemes like this are known not to conserve energy; the system tends to gain energy over time.
- Because of this, there is "friction." The velocities decrease by a world-dependent factor each cycle.
- This isn't straight Newtonian gravity: I've implemented attraction at the inverse square law, and repulsion at the inverse cube law. That's why the bodies appear to bounce off each other.
- The bodies (and rocks) with rings inside them are mutators. Every time they hit something, they randomly change its mass.
- Unless a world has mutators to add randomness, it is a strict simulation, and will "play" the same motions and music every time.
- The boundary of the universe is a semi-permeable membrane. Bodies can get caught outside until friction slows them down and they sneak back in.

When bodies collide, they light up for a bit. If they also play a note, they light up even more. Each body (or rock) can have a note, melody, or sequence attached to it, as well as specify the instrument or MIDI channel on which to play the note. When a body is involved in a collision, it plays its note, or the next note in its melody or sequence. Those notes are specified as note name and octave, e.g. C3, Ab4, or D#1, with C4 corresponding to middle C (MIDI note 60). Notes can also be rests. The MIDI velocity of each note that is played is determined by the world-velocity of the body that the note is attached to. A world-specific formula derives the MIDI velocity (0-127) from the world-velocity. The default is a one-to-one mapping with a specified floor and ceiling (e.g. from 35 to 102). The system currently has no notion of determining note duration from the physical properties of the ball. (That will happen when the use of a MIDI sequencer is implemented.) Each instrument has a queue of currently playing notes. When a ball assigned to that instrument plays a note, it looks to see if the queue is full, and if so, stops playing the earliest-started note that is still playing (i.e. sends a MIDI NOTE-OFF message on that channel). The default is 3 notes playing at a time for each instrument, though that can be changed in the world file on a per-instrument basis.
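To make the integration loop concrete, here is a minimal sketch of the first-order Euler cycle described above, including the inverse-square attraction and inverse-cube repulsion. It is illustrative only; the names (Body, G, dt) and constants are assumptions, not taken from the actual applet code (which is not shown here).

```python
import math

G = 1.0   # assumed gravitational constant in "world" units
dt = 0.1  # assumed time step per cycle

class Body:
    def __init__(self, mass, x, y, vx=0.0, vy=0.0):
        self.mass, self.x, self.y, self.vx, self.vy = mass, x, y, vx, vy

def euler_step(bodies):
    """One first-order Euler cycle: sum pairwise forces, then update velocity and position."""
    forces = []
    for i, a in enumerate(bodies):
        fx = fy = 0.0
        for j, b in enumerate(bodies):
            if i == j:
                continue
            dx, dy = b.x - a.x, b.y - a.y
            r = math.hypot(dx, dy) or 1e-9            # avoid division by zero
            attract = G * a.mass * b.mass / r**2      # inverse-square attraction
            repel = G * a.mass * b.mass / r**3        # inverse-cube repulsion (the "bounce")
            f = attract - repel
            fx += f * dx / r
            fy += f * dy / r
        forces.append((fx, fy))
    for (fx, fy), body in zip(forces, bodies):
        ax, ay = fx / body.mass, fy / body.mass       # F = ma
        body.vx += ax * dt                            # change in velocity over dt
        body.vy += ay * dt
        body.x += body.vx * dt                        # new position from new velocity
        body.y += body.vy * dt
```

The "friction" quirk would simply multiply each velocity by a factor slightly below 1 at the end of every cycle.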
<urn:uuid:8459a81b-a443-47f7-acb2-ce755f707784>
CC-MAIN-2013-20
http://art.net/Studios/Visual/Simran/GenerativeMusic/Kepler_algorithm.html
2013-05-22T08:12:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937767
723
Animal Species:Magnificent Cuttlefish – Sepia opipara (Iredale, 1926) The cuttlebone is elongate oval, about twice as long as broad. There is a long, pointed spine present, and v-shaped striations. Up to 15 cm mantle length The Magnificent Cuttlefish is found in Southern Indo-Pacific waters; from Shark Bay, Western Australia around to southern Queensland. Distribution by collection data Sepia opipara has a known depth range from 83 to 184 metres. Lu, C.C (1998) A Synopsis of Sepiidae in Australian waters (Cephalopoda: Sepiodiea). In: Voss, N.A., Vecchione, M., Toll, R.B. & Sweeney, M.J (Eds) Systematics and Biogeography of Cephalopods. Smithsonian Institution Press, Washington DC, Vol. 586, 159-190.
<urn:uuid:5ff0d28a-aa2b-4f59-b1de-7a070ff005cb>
CC-MAIN-2013-20
http://australianmuseum.net.au/Magnificent-Cuttlefish-Sepia-opipara-Iredale-1926
2013-05-22T08:19:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.727034
202
the birds at your feeder is critical to the quality of the data you submit and to the success of Project FeederWatch. Since relatively few species of birds visit most feeders, these species can become very familiar to you with a little practice and careful observation. We encourage you to acquaint yourself with the birds in your area by studying the Common Feeder Birds Poster included in your first research kit. You can download a mini version of the poster for free. We also recommend that you consult a current field guide to learn more about the species at your feeder and their winter ranges. For more information about identifying birds, visit Bird Identification in the About Birds and Bird Feeding section of this web site. For help with similar looking birds, such as finches, woodpeckers, or accipiters, visit the Tricky Bird ID page. A bird guide for most North American birds can be found on the Lab's All About Birds web site. Learn more about rare, unusual-looking, or sick birds and how to report them to Project FeederWatch.
<urn:uuid:24b9e561-cf90-487e-8957-43d6aca59ea3>
CC-MAIN-2013-20
http://birds.cornell.edu/pfw/InstruxandUpdates/Identify_birds.htm
2013-05-22T08:26:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.910128
225
A study published yesterday in Science Translational Medicine from a group at Johns Hopkins University set out to determine the best-case power of genetics to provide clinically meaningful information about risk of common diseases. The authors did this using data from identical twin pairs: by seeing how often twins with identical genomes develop the same diseases, we can judge the extent to which those diseases are determined by genetics, and the extent to which they might be predictable if we completely understood their genetics. The study’s key findings were that most people could obtain a result that they are at significantly elevated risk of at least one disease. But for most diseases, they would learn that their genetic risk places them not far from the average risk across the population. In a few cases, the authors found that a genetic test could potentially identify most individuals who eventually will develop a disease: this was true for thyroid autoimmune disease, type 1 diabetes, Alzheimer’s disease, and coronary heart disease in men. The authors conclude that genetic testing will not replace conventional preventive medicine. We agree with the authors that it is important to set reasonable expectations for what genetic testing can and cannot do. We think the positive finding that genetic testing can have some clinical utility for risk assessment for common disease is encouraging. We also think it is very encouraging that there are a few diseases where genetic testing could be particularly powerful — Alzheimer’s disease and coronary heart disease are not small potatoes and early alerts of increased risk of these conditions could have substantial public health benefits. Many of our more than 125,000 23andMe customers receive a risk report for a common disease that indicates that they are at substantially increased or decreased risk, using criteria similar to the ones used in this study. In addition to disease risks, 23andMe customers also receive information about their carrier status for inherited diseases and possible drug responses based on genetics, both of which already make an impact in personalized medicine today. On a more technical note, the study’s finding that negative genetic test results are mostly uninformative is largely an artifact of the authors’ mathematical modeling procedure and the fact that risk predictions for each disease were binned into just two buckets for “negative” and “positive” results. The authors pre-specify a baseline risk that for most diseases is not much smaller than the population average risk, and disease risk associated with different genotype classes is constrained to fall between the baseline value and 100%. Thus, it is impossible for anyone to have disease risk much less than the population average, and most individuals with “negative” results are pegged at the baseline risk level. Models that place some individuals at substantially lower risk would also be consistent with the observed twin data, but this study is not effectively exploring the potential utility of these negative findings. Do we believe that genetic testing could ever substitute for conventional preventive medicine? No, and we don’t think it should. Instead, it is more useful to think about genetic testing as one of many sources of information — along with family history, lifestyle, and conventional clinical testing — that can inform disease risk assessment and preventive medicine. 
Genetics will not be equally informative for everyone, and will not replace these other approaches, but this is actually quite compatible with personalized medicine — that we should find and leverage the information that has the biggest impact on care at an individual level.
<urn:uuid:80f02adf-1c12-4fce-b2e2-398fc7bdaee7>
CC-MAIN-2013-20
http://blog.23andme.com/health-traits/second-opinion-great-expectations-for-personal-genomes/
2013-05-22T08:11:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961887
680
This project started after receiving an email from Christoph Römhild. He had compiled a list of cross-references found in the Bible and was looking for advice on how to visualize these connections. After several email exchanges and a copy of Christoph's data, I was able to produce the arc diagram below. Due to the extremely high number of cross-references, this lands more on the aesthetic side of the information visualization spectrum. Different colors are used for various arc lengths, creating a rainbow like effect. The bar graph running along the bottom shows every chapter in the Bible and their respective lengths (in verses). Books alternate in color between white and light gray.

Nerd alert! I recently got back into programming in Java. I started by retrofitting my old Texas Hold'em Odds Calculator so 1) it runs about an order of magnitude faster, and 2) it includes icons for the card suits. Tenis suggested a new applet to work on: He wants an applet for Texas Hold'em that, given his starting hand, tells him what opponent starting hands would likely beat him and what opponent hands he would likely beat. Because of the sheer combinatorics, I scoped the problem down and refined it a bit: Given your starting hand and the flop, list the percent chance that the given starting hand will beat or be beaten by all other possible starting hands. I took up the challenge and tonight I finished the engine behind the applet; I still have to write the user interface, which can take a while. While I go do that, here's some math and sample input/output (let me know if you see any bugs):
- There are 1,326 (52 choose 2) possible starting hands in Texas Hold'Em
- Since we're given your starting hand and the flop, we are left with 1,081 (47 choose 2) starting hands that oppose you
- We compare your starting hand against each of the 1,081 opponent starting hands one at a time
- For one such comparison, we know your starting hand, your opponents' starting hand, and the flop, which leaves 45 cards in the deck for the turn and the river
- In other words, there are 990 (45 choose 2) possible combinations for the river and turn
- This means that to analyze one starting hand + flop combination, the applet examines 990 combinations for each of the 1,081 opponent starting hands, or 1,070,190 hand comparisons (990 x 1,081)

Edit: Here's the first draft of the new Texas Hold'em Matchups calculator

Hands up if you bought diamond jewelry between 1994 and 2006. Well, if you did you can claim some of the $295 million that has been set aside in the DeBeers class action lawsuit. I haven't had time to investigate this yet, so please comment if you know anything about its authenticity. I want a piece!

When writing short copy (taglines, headlines, etc.) give yourself one minute per word. If you don't have a great five-word headline in the first five minutes of brainstorming, take a break and try again later. If you become lost in flight and fuel is not a critical issue, climb. You will gain more perspective on landmarks, increase your radio range, and buy gliding time in case of an emergency. When negotiating, listen to the idioms the other party uses – do they "look at the big picture" and "see what you mean," or does something "ring a bell" and "sound good"? When you reply, use expressions that reference the same senses as the expressions they use. If you're playing improvised music and flub a note or phrase in a scale, repeat the mistake and there is none.

My second food-art post today!
Behold, the White City of Minas Tirith is under siege by one of the largest, and certainly the tastiest army ever to walk Middle Earth. For two days the evil host, under the brutal licorice fist of the Witch King of Angmar has bombarded the ancient city with stone and fire. Much more here: And a video:
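Circling back to the hold'em matchup arithmetic earlier in this post: the counts are easy to sanity-check. The snippet below is a small illustrative sketch (not the applet's actual Java code) that reproduces them with binomial coefficients.

```python
from math import comb

starting_hands = comb(52, 2)        # 1,326 possible starting hands
opponents = comb(47, 2)             # 1,081 opposing hands once your 2 cards + 3 flop cards are known
runouts = comb(45, 2)               # 990 turn/river combinations per matchup
comparisons = opponents * runouts   # 1,070,190 hand comparisons per starting-hand + flop

print(starting_hands, opponents, runouts, comparisons)
```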
<urn:uuid:08b67fd6-9bdd-4e65-94bd-55ca0a36aa7b>
CC-MAIN-2013-20
http://blog.ben61a.com/?m=200801
2013-05-22T08:02:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.932128
901
It may not be a big market, but it’s presumably a lucrative one: To meet the needs of consumers who are in the business of transmitting classified national secrets, physicists are working on an absolutely secure communication system that uses the strange laws of quantum mechanics to encode information. The latest experiments in this field, called quantum cryptography, produced a system that researchers say would theoretically work to transmit information around the globe. The system relies on a concept known as quantum entanglement to establish hack-proof communication. Entanglement allows two particles to be quantum-mechanically connected even when they are physically separated. Although the specific condition of either particle cannot be precisely known, taking measurements of one will instantly tell you something about the other. The trick can’t be used to actually send information, because each particle’s condition is random until it is measured. But entanglement can be used for encrypting data if a sender and a receiver make measurements on a number of entangled particles and then compare their results [Nature News]. After performing the measurements, they use their data to generate a quantum mechanical ‘key’ that can be used to share top-secret information. Any eavesdropper will disrupt the entanglement, ruining the key and causing the sender and receiver to break off their communication [Nature News]. These entangled particles are usually photons which are transmitted via fiber optic cables. Previously, attempts at quantum communication over distances of more than 60 miles have failed, because the photons that are sent through fiber optic cables eventually get disrupted. But in a new study, published in the journal Nature [subscription required], researchers describe a way to overcome this problem. The researchers developed a robust “quantum repeater node” that could, if developed further, send high fidelity signals over segments that, when linked to similar nodes, can form the building blocks of a quantum communication network to span the world [Telegraph]. Quantum communication is beginning to be used beyond the lab: Last fall, a secure QC line built by Geneva-based Id Quantique was used to transmit voting data in the Swiss national elections. And New York–based MagiQ Technologies has sold “a moderate number” of systems to clients in military and intelligence agencies, financial institutions and telecom companies [Popular Mechanics].
<urn:uuid:aae40f52-0090-46e8-ad17-fc8d30d188b0>
CC-MAIN-2013-20
http://blogs.discovermagazine.com/80beats/2008/08/28/harnessing-quantum-weirdness-to-make-spy-proof-email/
2013-05-22T08:26:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.921321
471
Has India lost its ‘cartoon’ humour? The Indian government’s decision to withdraw a controversial cartoon from a political science textbook this week couldn’t have been more ironic. Just a day earlier, India had observed the 60th anniversary of the first sitting of its parliament, seen as one of the pillars of the world’s largest democracy. While it is best left to our imagination as to why the cartoon, roughly as old as the Indian republic itself, created the controversy now, the government’s reaction to the row is alarming and sets a dangerous precedent. The cartoon shows India’s first prime minister, Jawaharlal Nehru, holding a whip as the father of the Indian constitution, B R Ambedkar, is seated on a snail. It was first published in 1949, and was reprinted in a textbook a few years ago – without anyone batting an eyelid. The cartoonist’s intent was to caricature the slow pace at which the constitution was being finalised. The government’s decision now to withdraw the cartoon and subsequently review all textbooks could be perceived as an attempt to pacify a certain section of society. Ambedkar is an icon for the cause of the Dalits — India’s former “untouchables” – and is deeply revered by millions in the country today. But has the Indian state gone too far to regulate the freedom of expression? A few instances in the past are a case in point. In 2011, the government passed a law to regulate content on the Internet. In June, New Delhi police sparked an outcry with a heavy-handed crackdown on anti-corruption protesters camped out overnight. Last August, Gandhian activist Anna Hazare was arrested ahead of his fast against corruption — drawing thousands of protesters onto the streets of the capital. And most recently, the government asked a TV network to move the premiere of the National Award winning ‘The Dirty Picture’ to the late night slot. It looks like the government is taking a leaf out of Mamata Banerjee’s book. The chief minister of West Bengal sparked an outcry after a university professor was arrested for sharing a cartoon which poked fun at her. It’s strange to see such apparently mild cartoons causing a ripple in a country’s establishment. It is even more curious as to why the Nehru/Ambedkar cartoon ended up being the sole target of the current row, especially when the textbook contained cartoons depicting other leaders as well. Cartoons offer an interesting mode of academic engagement in classrooms. But thanks to the intolerance demonstrated by some of India’s politicians, students may be deprived of interesting ways of learning about their own past. So what is the government’s next move? Ban all cartoons from being published in the press? Or ban all newspapers and magazines?
<urn:uuid:d1e957fd-622a-4427-ae91-352ce1e34099>
CC-MAIN-2013-20
http://blogs.reuters.com/india/2012/05/15/has-india-lost-its-cartoon-humour/comment-page-1/
2013-05-22T08:19:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.96148
593
These three events in American history targeted non-Americans, restricting their freedom in their own homes. The removal of the Native Americans, remembered in two episodes known as the Trail of Tears and the Navajos' Long Walk, can be seen as oppression of the native people by the American government. Native Americans were forcibly relocated because of America's hunger to incorporate new land into its empire. The Chinese Exclusion Act, one of many acts targeting immigrants, imposed numerous restrictions on Chinese immigrants, who were derided as inferior. Chinese families, especially women, were barred from entering the United States in 1875 to prevent the growth of the Asian population in America. Chinese people who already lived in America were forced to assimilate and suffered discrimination. Jim Crow laws, which were upheld by the Plessy v. Ferguson decision, targeted African Americans throughout America. These laws sanctioned unfair treatment of African Americans, which included barring them from social gathering places such as bars and restaurants, seating them in designated, less desirable seats on buses and railroad cars, and, to some extent, beating and harassing them. The environment set up by the Jim Crow laws persisted into modern history. In Birmingham, non-violent protests were confronted with violence by the authorities. African Americans, however, continued to fight for their freedom and eventually won it.
<urn:uuid:ae733190-eb43-49f6-b591-a16140b60020>
CC-MAIN-2013-20
http://blsciblogs.baruch.cuny.edu/his1005fall2010/2010/12/06/restrictions-throughout-american-history/
2013-05-22T07:54:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.989425
276
The Book of Ecclesiastes is part of the "wisdom literature" of the Bible. It concerns itself with universal philosophical questions, rather than events in the history of Israel and the Hebrews' covenant with God. Koheleth, the speaker in this book, ruminates on what -- if anything -- has lasting value, and how -- if at all -- God interacts with humankind. Koheleth expresses bewilderment and frustration at life's absurdities and injustices. He grapples with the inequities that pervade the world and the frailty and limitations of human wisdom and righteousness. His awareness of these discomfiting facts coexists with a firm belief in God's rule and God's fundamental justice, and he looks for ways to define a meaningful life in a world where so much is senseless. Ecclesiastes is traditionally read on the Jewish holiday Sukkot, the harvest festival.
<urn:uuid:0cf057a1-1c61-418c-9937-79fdda3c801d>
CC-MAIN-2013-20
http://bookcenter.dts.edu/ecclesiastes-the-traditional-hebrew-text-with-the-new-jps-translation-the-jps-bible-commentary
2013-05-22T07:54:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.900198
192
When you first download and install Firefox, it can handle basic browser tasks immediately. You can also add extra capabilities or change the way Firefox behaves by installing add-ons, small additions that extend Firefox's power. Firefox extensions can pimp your browser, but they can also collect and transmit information about you. Before you install any add-on, keep in mind to choose add-ons from trusted sources. Otherwise, an add-on might share information about you without your knowing, keep a record on the sites you have visited, or even harm your computer. There are several kinds of add-ons: For the topics covered in this book we are only going to need extensions. We will look at some add-ons that are particularly relevant for dealing with Internet security. The variety of available extensions is enormous. You can add dictionaries for different languages, track the weather in other countries, get suggestions for Web sites that are similar to the one you are currently viewing, and much more. Firefox keeps a list of current extensions on its site (https://addons.mozilla.org/firefox), or you can browse them by category at https://addons.mozilla.org/firefox/browse. Caution: We recommend that you never install an add-on for Firefox unless it is available from the Firefox add-on pages. You should also never install Firefox unless you get the installation files from a trusted source. It is important to note that using Firefox on someone else's computer or in an Internet caf increases your potential vulnerability. Know that you can take Firefox on a CD or USB-stick (check our chapter on that issue). While no tool can protect you completely against all threats to your online privacy and security, the Firefox extensions described in this chapter can significantly reduce your exposure to the most common ones, and increase your chances of remaining anonymous. HTTP is considered unsafe, because communication is transmitted in plain text. Many sites on the Web offer some support for encryption over HTTPS, but make it difficult to use. For instance, they may connect you to HTTP by default, even when HTTPS is available, or they may fill encrypted pages with links that go back to the unencrypted site. The HTTPS Everywhere extension fixes these problems by rewriting all requests to these sites to HTTPS. Although the extension is called "HTTPS Everywhere", it only activates HTTPS on a particular list of sites and can only use HTTPS on sites that have chosen to support it. It cannot make your connection to a site secure if that site does not offer HTTPS as an option. Please note that some of those sites still include a lot of content, such as images or icons, from third party domains that is not available over HTTPS. As always, if the browser's lock icon is broken or carries an exclamation mark, you may remain vulnerable to some adversaries that use active attacks or traffic analysis. However, the effort required to monitor your browsing should still be usefully increased. Some Web sites (such as Gmail) provide HTTPS support automatically, but using HTTPS Everywhere will also protect you from SSL-stripping attacks, in which an attacker hides the HTTPS version of the site from your computer if you initially try to access the HTTP version. Additional information can be found at: https://www.eff.org/https-everywhere. First, download the HTTPS Everywhere extension from the official Web site: https://www.eff.org/https-everywhere. Select the newest release. 
In the example below, version 0.9.4 of HTTPS Everywhere was used. (A newer version may be available now.) Click on "Allow". You will then have to restart Firefox by clicking on the "Restart Now" button. HTTPS Everywhere is now installed. To access the HTTPS Everywhere settings panel in Firefox 4 (Linux), click on the Firefox menu at the top left on your screen and then select Add-ons Manager. (Note that in different versions of Firefox and different operating systems, the Add-ons Manager may be located in different places in the interface.) Click on the Options button. A list of all supported Web sites where HTTPS redirection rules should be applied will be displayed. If you have problems with a specific redirection rule, you can uncheck it here. In that case, HTTPS Everywhere will no longer modify your connections to that specific site. Once enabled and configured, HTTPS Everywhere is very easy and transparent to use. Type an insecure HTTP URL (for example, http://www.google.com). Press Enter. You will be automatically redirected to the secure HTTPS encrypted Web site (in this example: https://encrypted.google.com). No other action is needed. Your network operator may decide to block the secure versions of Web sites in order to increase its ability to spy on what you do. In such cases, HTTPS Everywhere could prevent you from using these sites because it forces your browser to use only the secure version of these sites, never the insecure version. (For example, we heard about an airport Wi-Fi network where all HTTP connections were permitted, but not HTTPS connections. Perhaps the Wi-Fi operators were interested in watching what users did. At that airport, users with HTTPS Everywhere were not able to use certain Web sites unless they temporarily disabled HTTPS Everywhere.) In this scenario, you might choose to use HTTPS Everywhere together with a circumvention technology such as Tor or a VPN in order to bypass the network's blocking of secure access to Web sites. You can add your own rules to the HTTPS Everywhere add-on for your favorite Web sites. You can find out how to do that at: https://www.eff.org/https-everywhere/rulesets. The benefit of adding rules is that they teach HTTPS Everywhere how to ensure that your access to these sites is secure. But remember: HTTPS Everywhere does not allow you to access sites securely unless the site operators have already chosen to make their sites available through HTTPS. If a site does not support HTTPS, there is no benefit to adding a ruleset for it. If you are managing a Web site and have made an HTTPS version of the site available, a good practice would be to submit your Web site to the official HTTPS Everywhere release. Adblock Plus (http://www.adblockplus.org) is mainly known for blocking advertisements on websites. But it also can be used to block other content that may try to track you. To keep current with the latest threats, Adblock Plus relies on blacklists maintained by volunteers. Extra Geek info: How does Adblock Plus block addresses? Once you have Firefox installed: Adblock Plus by itself doesn't do anything. It can see each element that a Web site attempts to load, but it doesn't know which ones should be blocked. This is what Adblock's filters are for. After restarting Firefox, you will be asked to choose a filter subscription (free). Which filter subscription should you choose? Adblock Plus offers a few in its dropdown menu and you may wish to learn about the strengths of each. 
A good filter to start protecting your privacy is EasyList (also available at http://easylist.adblockplus.org/en). As tempting as it may seem, don't add as many subscriptions as you can get, since some may overlap, resulting in unexpected outcomes. EasyList (mainly targeted at English-language sites) works well with other EasyList extensions (such as region-specific lists like RuAdList or thematic lists like EasyPrivacy). But it collides with Fanboy's List (another list whose main focus is English-language sites). You can always change your filter subscriptions at any time within preferences. Once you've made your changes, click OK. Adblock Plus also lets you create your own filters, if you are so inclined. To add a filter, start with Adblock Plus preferences and click on "Add Filter" at the bottom left corner of the window. Personalized filters may not replace the benefits of well-maintained blacklists like EasyList, but they're very useful for blocking specific content that isn't covered in the public lists. For example, if you wanted to prevent interaction with Facebook from other Web sites, you could add the following filter:

||facebook.*$domain=~facebook.com|~127.0.0.1

The first part (||facebook.*) will initially block everything coming from Facebook's domain. The second part ($domain=~facebook.com|~127.0.0.1) is an exception that tells the filter to allow Facebook requests only when you are on Facebook or if the Facebook requests come from 127.0.0.1 (your own computer), in order to keep certain features of Facebook working. A guide on how to create your own Adblock Plus filters can be found at http://adblockplus.org/en/filters. You can see the elements identified by Adblock Plus by clicking on the ABP icon in your browser (usually next to the search bar) and selecting "Open blockable items". A window at the bottom of your browser will let you enable or disable each element on a case-by-case basis. Alternatively, you can disable Adblock Plus for a specific domain or page by clicking on the ABP icon and ticking the option "Disable on [domain name]" or "Disable on this page only". The same method by which NoScript protects you can alter the appearance and functionality of good Web pages, too. Luckily, you can adjust how NoScript treats individual pages or Web sites manually - it is up to you to find the right balance between convenience and security. Once restarted, your browser will have a NoScript icon at the bottom right corner, where the status bar is, indicating what level of permission the current Web site has to execute content on your PC. To add a site that you trust to your whitelist, click on the NoScript icon and select "Allow [domain name]". (You can also use the "Temporarily allow" options to allow content loading only for the current browsing session. This is useful for people who intend to visit a site just once, and who want to keep their whitelist at a manageable size.) Alternatively, you can add domain names directly to the whitelist by clicking on the NoScript button, selecting Options and then clicking on the Whitelist tab. If you want to permanently prevent scripts from loading on a particular Web site, you can mark it as untrusted: just click the NoScript icon, open the "Untrusted" menu and select "Mark [domain name] as Untrusted". NoScript will remember your choice, even if the "Allow Scripts Globally" option is enabled. Below is a short list of extensions that are not covered in this book but are helpful to further protect you.
Flagfox - puts a flag in the location bar telling you where the server you are visiting is most probably located. https://addons.mozilla.org/en-US/firefox/addon/flagfox/ BetterPrivacy - manages "cookies" used to track you while visiting websites. Cookies are small bits of information stored in your browser. Some of them are used to track the sites you are visiting by advertisers. https://addons.mozilla.org/en-US/firefox/addon/betterprivacy/ GoogleSharing - If you are worried that google knows your search history, this extension will help you prevent that. https://addons.mozilla.org/en-us/firefox/addon/googlesharing/
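The core idea behind HTTPS Everywhere described above (rewriting requests for supported sites from http:// to https://) can be illustrated with a small sketch. This is a toy model for illustration only, not the extension's actual rule engine, and the rule list is a made-up example rather than a real ruleset.

```python
import re

# Hypothetical example rules: (pattern, replacement), applied only to hosts known to support HTTPS.
RULES = [
    (re.compile(r"^http://(www\.)?example\.com/"), "https://www.example.com/"),
]

def rewrite(url: str) -> str:
    """Return the HTTPS version of a URL if a rule matches; otherwise leave it unchanged."""
    for pattern, replacement in RULES:
        if pattern.match(url):
            return pattern.sub(replacement, url, count=1)
    return url  # no rule matched: the site may not offer HTTPS, so the URL is left alone

print(rewrite("http://example.com/mail"))      # -> https://www.example.com/mail
print(rewrite("http://plain-http-site.org/"))  # unchanged
```

This also shows why the extension cannot secure a connection to a site that offers no HTTPS at all: without a matching rule, the request goes through untouched.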
<urn:uuid:07d42049-c688-4516-93f5-887d7930fe87>
CC-MAIN-2013-20
http://booki.flossmanuals.net/basic-internet-security/noscript/
2013-05-22T08:26:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.913414
2,363
The toll of dead and missing from Japan's Fukushima earthquake and tsunami approaches 30,000, and entire towns are rubble. At this point, no one has died from Fukushima's nuclear radiation, and it is quite possible that no one ever will. But radiation hysteria dominates the world's front pages and television screens. The anti-nuclear community exults as many countries now consider shutting down or delaying construction of nuclear power plants. While a thorough review of nuclear plant safety is important, be careful what you wish for. Replacing the annual electric power output of a typical one gigawatt (GW) nuclear plant will require burning five million tons of coal or its equivalent in natural gas. As much as ten million tons of carbon dioxide will then be released to warm us. In the case of coal, the most available substitute, large quantities of mercury, sulfur, and other elements will also join the atmosphere. The primary public concern over Fukushima is the continuing release of radioactivity. Our bodies receive an average of 300 millirems of natural radiation per year from radon, cosmic rays, certain foods like bananas (potassium-40), and medical exams. The body repairs cell damage from this and larger amounts, or humanity would not exist. The most radiation-exposed persons at Fukushima are some workers inside the plants who got doses in the 15 to 25 rem range. Immediately after World War II, a joint U.S. and Japanese medical team began a 70-year study of 90,000 survivors of Hiroshima and Nagasaki. Those survivors got radiation doses similar to those received by the most affected workers at Fukushima. Seventy years of studies report that the bomb survivors are living longer on average than the Japanese population as a whole, with lower cancer rates. California residents are buying up iodine pills, because Fukushima radiation is being detected on our Pacific Coast. That new radiation is one millionth of the amount that the average American receives annually from those natural sources. The energy-producing fission process shut down in all Fukushima reactors. But the radioactive fission products in the active fuel, and in the spent fuel in the water pools, continue to provide heat from beta decay. This produces about 6 percent of the heat created by the fully operating reactor. Without cooling, the temperatures in the reactor from decay heat will rise indefinitely. Normally, backup cooling is done by circulating water using diesel engines, which were damaged by the tsunami. At Chernobyl, a large steam/hydrogen explosion blew out radioactive graphite and other material from a fissioning reactor. 134 plant and emergency workers received very high radiation doses. 28 of them died within a few months. 19 more died within the next 20 years, though from causes not associated with radiation exposure. They have been parents to 14 children, all normal. Radiation from Chernobyl spread over several countries. The Feb. 28, 2011 UN Chernobyl update states: "In the three most affected countries, the only evidence of health effects due to radiation is an increase in thyroid cancer among people exposed as children in 1986. There were more than 6,000 cases reported from 1991 to 2005 in Belarus, Ukraine and Russia. By 2005, 15 of the cases had proven fatal." Radiation is diluted by distance and time. Media panic after Three Mile Island helped end the growth of nuclear energy in the U.S., leaving us with lots of polluting coal plants.
Let's not lose the benefits of our largest supply of clean electric energy, which causes fewer injuries than any other major energy source. ROLF E. WESTGARD, of St. Paul and Deerwood, is a professional member of the Geological Society of America and a member of the Brainerd Dispatch advisory board.
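The coal-replacement figure quoted above is easy to check roughly. The back-of-envelope sketch below uses assumed values (90% capacity factor, about 24 GJ of heat per ton of coal, about 35% coal-plant efficiency, roughly 2.5 tons of CO2 per ton of coal); the result lands in the same low-millions-of-tons ballpark as the column's numbers, though the exact figures depend on those assumptions.

```python
# Back-of-envelope check: replacing one 1 GW(e) nuclear plant with coal for a year.
plant_power_w = 1e9            # 1 GW of electrical output
capacity_factor = 0.90         # assumed fraction of the year at full power
seconds_per_year = 3600 * 24 * 365

electric_energy_j = plant_power_w * capacity_factor * seconds_per_year

coal_energy_per_ton_j = 24e9   # assumed ~24 GJ of heat per metric ton of coal
plant_efficiency = 0.35        # assumed heat-to-electricity efficiency of a coal plant

coal_tons = electric_energy_j / (coal_energy_per_ton_j * plant_efficiency)
co2_tons = coal_tons * 2.5     # very rough: ~2.5 tons of CO2 per ton of coal burned

print(f"{coal_tons/1e6:.1f} million tons of coal, ~{co2_tons/1e6:.0f} million tons of CO2 per year")
```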
<urn:uuid:d2ab555c-7e2c-4c04-a23e-3bec9757270f>
CC-MAIN-2013-20
http://brainerddispatch.com/opinion/guest-columns/2011-04-03/radiation-hysteria-gone-wild
2013-05-22T07:57:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951027
757
History and Development The Green Building Guidelines are a comprehensive resource of best practices for green building. They offer recommendations for improving indoor air quality, increasing energy and water efficiency, conserving natural resources, and planning for livable and vibrant communities. The original New Home Guidelines were based on the Alameda County New Home Construction Green Building Guidelines, which were written collaboratively by builders, green building experts, and StopWaste.org in 2000. The Guidelines were updated in 2005 to apply statewide, address changes in Title 24, and incorporate measures from other residential green building initiatives. This update was informed by the Green Residential Environmental Action Team (GREAT), a task force of state agencies including the California Integrated Waste Management Board, California Energy Commission, Office of Environmental Health Hazard Assessment, Office of the State Architect, Department of General Services, Department of Water Resources, and California Air Resources Board. Updates to the Guidelines Build It Green's Green Building Guidelines are revised periodically. Check back for announcements about Guideline updates. Summaries of previous Guidelines changes and updates are available for you to review. If you would like more information on previous updates, please email [email protected].
<urn:uuid:5676b53a-7965-4601-afb9-8951360f6680>
CC-MAIN-2013-20
http://builditgreen.org/guidelines-development/
2013-05-22T08:12:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.917029
243
Antibody Tests (Coombs Test) Antibody tests are done to find certain antibodies that attack red blood cells. Antibodies are proteins made by the immune system. Normally, antibodies bind to foreign substances, such as bacteria and viruses, and cause them to be destroyed. The following conditions cause antibodies to be made. Human blood is typed by certain markers (called antigens) on the surface of red blood cells. If you get a blood transfusion, the transfused blood must match your type. That means the transfused blood must have the same antigens as your red blood cells. If you get a transfusion of blood with antigens different from yours (incompatible blood), your immune system destroys the transfused blood cells. This is called a transfusion reaction and can cause serious illness or even death. This is why matching blood type is so important. Rh is an antigen. The full name for this antigen is Rhesus factor. If a pregnant woman with Rh-negative blood is pregnant with a baby (fetus) with Rh-positive blood, Rh sensitization may occur. The baby may have Rh-positive blood if the father has Rh-positive blood. Rh sensitization happens when the baby's blood mixes with the mother's blood during pregnancy or delivery. This causes the mother's immune system to make antibodies against the baby's red blood cells in future pregnancies. This antibody response is called Rh sensitization and, depending on when it happens, can destroy the red blood cells of the baby before or after it is born. If sensitization happens, a fetus or newborn can develop mild to severe problems (called Rh disease or erythroblastosis fetalis). In rare cases, if Rh disease is not treated, the fetus or newborn may die. A woman with Rh-negative blood can get a shot of Rh immunoglobulin (such as RhoGAM) that almost always stops sensitization from occurring. Problems from Rh sensitization have become very rare since Rh immunoglobulin was developed. Autoimmune hemolytic anemia A type of hemolytic anemia called autoimmune hemolytic anemia is a rare disease that causes antibodies to be made against a person's own red blood cells. Two blood tests can check for antibodies that attack red blood cells: the direct Coombs test and the indirect Coombs test. The direct Coombs test is done on a sample of red blood cells from the body. It detects antibodies that are already attached to red blood cells. The indirect Coombs test is done on a sample of the liquid part of the blood (serum). It detects antibodies that are present in the bloodstream and could bind to certain red blood cells, leading to problems if blood mixing occurs. Why It Is Done Direct Coombs test The direct Coombs test finds antibodies attached to your red blood cells. The antibodies may be those your body made because of disease or those you get in a blood transfusion. The direct Coombs test also may be done on a newborn baby with Rh-positive blood whose mother has Rh-negative blood. The test shows whether the mother has made antibodies and if the antibodies have moved through the placenta to her baby. Indirect Coombs test The indirect Coombs test finds certain antibodies that are in the liquid part of your blood (serum). These antibodies can attack red blood cells but are not attached to your red blood cells. The indirect Coombs test is commonly done to find antibodies in a recipient's or donor's blood before a transfusion. A test to determine whether a woman has Rh-positive or Rh-negative blood (Rh antibody titer) is done early in pregnancy. 
If she is Rh-negative, steps can be taken to protect the baby. How To Prepare You do not need to do anything before you have this test. How It Is Done The health professional drawing blood will: - Wrap an elastic band around your upper arm to stop the flow of blood. This makes the veins below the band larger so it is easier to put a needle into the vein. - Clean the needle site with alcohol. - Put the needle into the vein. If the needle is not placed correctly or if the vein collapses, more than one needle stick may be needed. - Hook a tube to the needle to fill it with blood. - Remove the band from your arm when enough blood is collected. - Put a gauze pad or cotton ball over the needle site as the needle is removed. - Put pressure to the site and then a bandage. How It Feels The blood sample is taken from a vein in your arm. An elastic band is wrapped around your upper arm. It may feel tight. You may feel nothing at all from the needle, or you may feel a quick sting or pinch. There is very little chance of a problem from having blood sample taken from a vein. - You may get a small bruise at the site. You can lower the chance of bruising by keeping pressure on the site for several minutes. - In rare cases, the vein may become swollen after the blood sample is taken. This problem is called phlebitis. A warm compress can be used several times a day to treat this. - Ongoing bleeding can be a problem for people with bleeding disorders. Aspirin, warfarin (Coumadin), and other blood-thinning medicines can make bleeding more likely. If you have bleeding or clotting problems, or if you take blood-thinning medicine, tell your doctor before your blood sample is taken. Antibody tests (Coombs tests) are done to find antibodies that attack red blood cells. No antibodies are found. This is called a negative test result. - Direct Coombs test. A negative test result means that your blood does not have antibodies attached to your red blood cells. - Indirect Coombs test. A negative test result means that your blood is compatible with the blood you are to receive by transfusion. A negative indirect Coombs test for Rh factor (Rh antibody titer) in a pregnant woman means that she has not developed antibodies against the Rh-positive blood of her baby. This means that Rh sensitization has not occurred. - Direct Coombs test. A positive result means your blood has antibodies that fight against red blood cells. This can be caused by a transfusion of incompatible blood or may be related to conditions such as hemolytic anemia or hemolytic disease of the newborn (HDN). - Indirect Coombs test. A positive test result means that your blood is incompatible with the donor's blood and you can't receive blood from that person. If the Rh antibody titer test is positive in a woman who is pregnant or is planning to become pregnant, it means that she has antibodies against Rh-positive blood (Rh sensitization). She will be tested early in pregnancy to check the blood type of her baby. If the baby has Rh-positive blood, the mother will be watched closely throughout the pregnancy to prevent problems to the baby's red blood cells. If sensitization has not occurred, it can be prevented by a shot of Rh immunoglobulin. What Affects the Test Reasons you may not be able to have the test or why the results may not be helpful include: - Having a blood transfusion in the past. - Being pregnant within the past 3 months. - Taking some medicines, such as cephalosporins, sulfa medicines, tuberculosis medicines, insulin, and tetracyclines. 
What To Think About A newborn baby (whose mother has Rh-negative blood) may have a direct Coombs test to check for antibodies against the baby's red blood cells. If the test is positive, the baby may need a transfusion with compatible blood to prevent anemia.
Primary Medical Reviewer: E. Gregory Thompson, MD - Internal Medicine
Specialist Medical Reviewer: W. David Colby IV, MSc, MD, FRCPC - Infectious Disease
Last Revised: December 30, 2011
© 1995-2012 Healthwise, Incorporated.
<urn:uuid:b5e7ebce-cea1-486c-8d6f-a344db18ab4f>
CC-MAIN-2013-20
http://cancer.dartmouth.edu/pf/health_encyclopedia/hw44015
2013-05-22T08:12:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.907119
1,849
Vitamin E is actually not a single compound, but rather a group of eight similar fat-soluble compounds, the tocopherols and tocotrienols. Certain vegetables are particularly good sources of vitamin E, including chard, spinach, bell peppers and asparagus. You have probably heard that you can benefit from supplements or from a diet that is rich in vitamin E, but you may not know just why this vitamin can be so important for your physical health. Read on to discover eleven fascinating and important reasons why a higher intake of vitamin E could help to boost your health and improve your quality of life. 1) It promotes healthy and young-looking skin: Multiple studies have shown that vitamin E can help to protect your skin from the damage caused by ultraviolet radiation. As a result, it reduces your risk of developing skin cancer and also helps to stop your skin from developing the wrinkles and fine lines that are characteristic of premature aging. Vitamin E has these health benefits whether ingested or applied topically via a moisturizing cream. 2) It can boost your metabolism: When you ingest vitamin E, it interferes with the development of compounds called nitrosamines that form in the stomach. This has the result of improving your body's metabolic rate, helping you to burn more calories at a faster rate. When you have a faster metabolic rate, it is easier to lose weight or to avoid gaining unwanted extra weight. 3) It can offer some relief from fibrocystic breast disease: Fibrocystic breast disease involves painful breasts that sometimes develop non-cancerous lumps around the time of a woman's menstrual period. Studies have shown that vitamin E supplements can dramatically reduce the severity of this condition, though the relationship is currently rather mysterious. 4) It reduces your risk of developing Alzheimer's disease: A study performed by scientists at Rush University has revealed that high levels of vitamin E can lower your chances of developing Alzheimer's disease and other forms of dementia. Specifically, it appears to make you up to 67% less likely to suffer from dementia, so a diet high in vitamin E can help you to retain good cognitive function well into old age. This health benefit may be connected to the fact that vitamin E promotes nervous system health by protecting the myelin sheaths that cover your nerves. Interestingly, it appears that you can only decrease your risk of Alzheimer's by getting vitamin E from food; the same risk reduction is not observed in people who take vitamin E in supplement form. 5) It can lower your risk of developing bladder cancer: According to the American Association for Cancer Research, regular consumption of vitamin E is associated with an impressive 50% drop in your risk of developing bladder cancer. This is a significant finding, as bladder cancer is the fourth most deadly cancer in men (striking women much less often). 6) It has anti-inflammatory properties: Vitamin E has been shown to influence inflammatory disorders like arthritis, asthma, and ulcerative colitis. Regular consumption effectively reduces chronic inflammation and the pain that is associated with it. 7) It can prevent problems with the liver or gallbladder: People who are deficient in vitamin E are much more likely to develop medical conditions involving the liver (such as nonalcoholic steatohepatitis). Similarly, this vitamin appears to play a role in preventing gallbladder problems (especially the formation of gallstones). 8.
It may help to protect your lungs from pollution: Tests on animals have shown that consuming plenty of vitamin E seems to reduce the amount of lung damage that results from inhaling polluted air, especially if that air contains ozone or nitrogen dioxide. 9) It discourages the formation of unwanted blood clots: Vitamin E stops blood platelets from sticking together in clumps, so it helps to keep your blood appropriately thin. As a result, a diet high in vitamin E is associated with a reduced risk of deep vein thrombosis and pulmonary embolisms. 10) It prevents oxidative stress: Vitamin E has antioxidant properties, so it is capable of preventing oxidative stress (i.e. damage that is caused by free radicals inside your body). It is speculated that free radicals are capable of causing the sort of cell damage that can lead to cancer and hardened arteries, so consuming plenty of antioxidants should help to improve your general health. Some researchers even believe that vitamin E is possibly the most important nutrient when it comes to preventing oxidative stress. 11) It helps to lower your cholesterol levels: Vitamin E’s ability to protect your arteries means that it helps to prevent LDL cholesterol (i.e. ‘bad’ cholesterol) from attaching to your arterial walls. This in turn will substantially reduce your risk of suffering from a heart attack or a stroke. As is obvious from the above health benefits, consuming plenty of vitamin E is extremely important to your physical well-being. In addition to the vegetables mentioned above, there are other foods that are great sources of this vitamin. Try to eat plenty of sunflower seeds, almonds, and cayenne pepper as well.
<urn:uuid:50a3ec89-58c0-4f12-8011-13bc2423d3b0>
CC-MAIN-2013-20
http://cathe.com/eleven-important-reasons-why-you-need-more-vitamin-e-in-your-diet
2013-05-22T08:02:18Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.939144
1,051
A metropolitan area is a large population center consisting of a large metropolis and its adjacent zone of influence, or of more than one closely adjoining neighboring central cities and their zone of influence. One or more large cities may serve as its hub or hubs, and the metropolitan area is normally named after either the largest or most important central city within it. A metropolitan area usually combines an agglomeration (the contiguous built-up area) with peripheral zones not themselves necessarily urban in character, but closely bound to the center by employment or commerce; these zones are also sometimes known as a commuter belt, and may extend well beyond the urban periphery depending on the definition used. It is mainly the area that is not part of the city but is connected to the city. For example, Pasadena, California would be added to Los Angeles, California's metro area. While it isn't the same city, it is connected. The core cities in a polycentric metropolitan area need not be physically connected by continuous built-up development, distinguishing the concept from conurbation, which requires urban contiguity. In a metropolitan area, it is sufficient that central cities together constitute a large population nucleus with which other constituent parts have a high degree of integration.

In practice the parameters of metropolitan areas, in both official and unofficial usage, are not consistent. Sometimes they are little different from an urban area, and in other cases they cover broad regions that have little relation to the traditional concept of a city as a single urban settlement. Thus all metropolitan area figures should be treated as interpretations rather than as hard facts. Metro area population figures given by different sources for the same place can vary by millions, and there is a tendency for people to promote the highest figure available for their own "city". However, the most ambitious metropolitan area population figures are often better seen as the population of a "metropolitan region" than of a "city". The term metropolitan area is sometimes abbreviated to 'metro', for example in Metro Manila and Washington, DC Metro Area, which in the latter case should not be mistaken to mean the metro rail system of the city.

Although it can be compared in composition to many of the world's metropolitan areas, in France the term for the region around an urban core linked by commuting ties is an aire urbaine (officially translated as "urban area"). In Japan that would be toshiken (都市圏, lit. bloc of cities). In Australia, Statistical Divisions (SDs) are defined by the Australian Bureau of Statistics as areas under the unifying influence of one or more major towns or cities. Each capital city forms its own Statistical Division, and the population of the SD is the most-often quoted figure for that city's population. Statistical Districts are defined as non-capital but predominantly urban areas. The statistical divisions that encompass the capital cities are commonly though unofficially called 'metropolitan areas'. In the United States, the Office of Management and Budget defines "Core Based Statistical Areas" used for statistics purposes among federal agencies. Each CBSA is based on a core urban area and is composed of the counties which comprise that core as well as any surrounding counties that are tightly socially or economically integrated with it. These areas are designated as either metropolitan or micropolitan statistical areas, based on population size; a "metro" area has an urban core of at least 50,000 residents, while a "micro" area has less than 50,000 but at least 10,000.

At the turn of the 19th century only 3 percent of the world was urbanized. During the 20th and into the 21st century the presence of humans in urban areas has increased dramatically. Within the first quarter of the 21st century it is expected that more than half of the world's population will live in urban areas, if this is not already the case. By 2025, according to the Far Eastern Economic Review, Asia alone will have at least 10 hypercities, those with 20 million or more, including Delhi (~20 million), Jakarta (24.9 million), Dhaka (25 million), Karachi (26.5 million), Shanghai (27 million) and Mumbai (33 million). Lagos has grown from 300,000 in 1950 to an estimated 15 million today, and the Nigerian government estimates that city will have expanded to 25 million residents by 2015.

If several metropolitan areas are located in succession, metropolitan areas are sometimes grouped together as a megalopolis (plural megalopoleis, also megalopolises). A megalopolis consists of several interconnected cities (and their suburbs), between which people commute, and which are so close together that suburbs can claim to be suburbs of more than one city. Another name for a megalopolis is a metroplex (short for metropolitan complex) or conurbation. This concept was first proposed by the French geographer Jean Gottmann in his book Megalopolis, a study of the northeastern United States. One famous example is the BosWash megalopolis consisting of Boston, Providence, Hartford, New York City, Newark, Philadelphia, Wilmington, Baltimore, Washington, and vicinity. The biggest one is the Taiheiyō Belt (the Pacific Megalopolis) in Japan, consisting of Tokyo MA, Shizuoka MA, Nagoya MA, Osaka MA, Okayama MA, Hiroshima MA, Fukuoka MA and vicinity. Guangdong Province's Pearl River Delta is a huge megalopolis with a population of 48 million that extends from Hong Kong and Shenzhen to Guangzhou. Some projections assume that by 2030 up to 1 billion people will live in China's urban areas. Even rather conservative projections predict an urban population of up to 800 million people. In its most recent assessment, the UN Population Division estimated an urban population of 1 billion in 2050.

The megalopoleis in Europe are the Ruhr Area in Germany, the Randstad (Knooppunt Arnhem-Nijmegen and Brabantse Stedenrij are counted with the Randstad) in the Netherlands, the Flemish Diamond in Belgium, Île-de-France in France and the metropolitan area of London, as well as several 'smaller' agglomerations, such as the Meuse-Rhine Euregion, the Ems-Dollart Euregion, and the Lille-Kortrijk-Tournai Euregion. Together this megalopolis has an estimated population of around 50 million. Africa's first megalopolis is situated in the urban portion of Gauteng Province in South Africa, comprising the conurbation of Johannesburg, and the metropolitan areas of Pretoria and the Vaal Triangle, otherwise known as the PWV. It has been suggested that the whole of south-eastern, Midland and parts of northern England will evolve into a megalopolis dominated by London. Clearly when usage is stretched this far, it is remote from the traditional conception of a city.

Megacity is a general term for agglomerations or metropolitan areas which usually have a total population in excess of 10 million people. In Canada, megacity can also refer informally to the results of merging a central city with its suburbs to form one large municipality. A Canadian "megacity", however, is not necessarily an entirely urbanized area, as many cities so named have both rural and urban portions. Moreover, Canadian "megacities" do not constitute large metropolitan areas in a global sense. The census population of a metro area is not the city population, but it often better represents the population of the city as people experience it. Los Angeles may only have a city population of near 4,000,000, but it has two metropolitan area populations, depending on definition: 13 million in the core area and 18 million in the Combined Statistical Area.
<urn:uuid:5612b520-3e69-4446-beb6-4a7a90a0bca1>
CC-MAIN-2013-20
http://citizendia.org/Metropolitan_area
2013-05-22T08:26:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.929651
3,350
1/ How can this man stand next to an iceberg?
Icebergs drift around the Southern Ocean carried by the currents and blown by the winds. In the winter the sea-ice freezes around them and effectively glues them in place until the spring, when the ice breaks up and they can begin to move again. During this frozen-in time, it is possible to travel out across the sea-ice and walk right up to the bergs.

2/ Why does the iceberg look different when the sun shines?
It can be quite magical standing next to an iceberg, especially when the sun is shining and glistening off the ice. The sun can also penetrate the ice and be reflected off inner surfaces, giving a whole variety of effects and colours from white through a range of vivid blues, quite an unreal experience.

3/ How big is the "tip of the iceberg"?
Everybody knows that most of an iceberg lies under the water, but most don't know that the amount beneath the surface varies, from about 50% upwards. The cause of the variation is largely the amount of air that is trapped in the ice, which affects its buoyancy. An average iceberg will be about 80-90% beneath the surface. Very low-lying pieces of ice of whatever size in the water are known as "growlers". These often have a green tinge to them. They are known as growlers because they present a particular hazard to shipping, with the small amount visible above the water and the dark colour making them especially difficult to see and therefore especially dangerous.

4/ When is an iceberg not an iceberg?
There are lots of different names for different kinds of ice. Large pieces of ice that were once part of an iceberg that broke up are known as "bergy bits" if they are too small to be considered as icebergs themselves (I never did discover when a "bergy bit" was big enough to be a "berg"; I think it's a matter of opinion). These bergy bits in the picture are trapped in the frozen sea-ice in the winter, making it possible to walk out to them. In the distance can be seen trapped icebergs and the long, low landmass of a nearby island; the two peaks to the left are about 40 miles (64 kilometers) away.

5/ How are icebergs made?
Icebergs are made of freshwater ice and not of frozen sea water. They form from the edge of glaciers when the glacier reaches the sea and either breaks off in pieces to form an iceberg or, in the case of an ice shelf, begins to float on the sea and then breaks off from the rest of the glacier as a large slab. Icebergs are made up of snow that has fallen over many hundreds or even thousands of years. The stripes and different coloured layers in icebergs represent different layers of snowfall and the weather conditions under which the snow fell. If it is very cold then a light, open layer with much air included will be formed; this gives a paler or white layer. The darker, bluer layers come from snowfall in relatively warm, maybe even wet conditions when little or no air is trapped in the layer. In addition to this, air is squeezed out of the lower layers of a glacier as more and more snow falls and the weight of snow builds up.

6/ Are you sure this thing's safe?
Standing next to an iceberg such as this one can be quite a scary experience. In addition to being stuck in the sea ice, this particular berg has been grounded on the sea bed. It was probably blown towards shore by strong winds or a storm, and on a high tide. When the wind died down and the tide fell, the berg was left resting, stuck on the sea bed. A result of this is that when the tide rises and falls the sea ice rises and falls with it, but the iceberg doesn't. There are all kinds of creaking and groaning noises made by the sea ice as it is forced to rub up and down the uneven sides of the berg with the tide. To add to these unsettling sounds are an assortment of creaks, groans and bangs made by the iceberg above water as the sun heats up the surface. The fear is that either a large lump of ice will come tumbling down or, worse still, the iceberg becomes unstable and tips up to a new, more stable position. This tipping up rarely happens in the winter; more commonly it takes place in warmer summer temperatures, but it is not unknown, and if it happens it can cause waves and ripples that break up the surface of the sea ice for miles around. Neither of these events are ones that you want to witness while standing on the sea ice surrounding the iceberg!
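As a rough cross-check of the 80-90% figure in point 3, the submerged fraction follows from Archimedes' principle. The densities below are assumed typical values for glacial ice and seawater, not measurements taken from this page:

\[
f_{\text{submerged}} = \frac{\rho_{\text{ice}}}{\rho_{\text{seawater}}}
\approx \frac{900\ \mathrm{kg\,m^{-3}}}{1025\ \mathrm{kg\,m^{-3}}} \approx 0.88
\]

Ice with more trapped air has a lower effective density, which is why the submerged fraction can fall toward the 50% end of the range quoted above.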
<urn:uuid:e88866e0-8386-45cc-a657-37465fd1a021>
CC-MAIN-2013-20
http://coolantarctica.com/Antarctica%20fact%20file/antarctica%20environment/icebergs1.htm
2013-05-22T07:53:36Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.92445
1,191
The fight for the Morris Farm on the first day of the battle of Bentonville marked the high tide of the Confederate effort to destroy Slocum's wing of Sherman's army. The remnants of the Army of Tennessee, supported by Taliaferro's division of Hardee's corps, drove a wedge into the Federal center and broke against determined Federal infantry supported by several batteries. See a map of the assault here (requires Adobe PDF Reader). The Federals marched onto the field, stacked rifles, and began to entrench. They made quick work of the soft, sandy Carolina soil and soon completed an imposing earthwork. The front rank sheltered within the entrenchment, while the rear rank crouched just behind, still gaining some protection from the mound of earth facing the oncoming rebels. Federal artillery, posted to the rear, opened on the advancing enemy. The rebels, emerging from the tree line opposite the Federal works, advanced cautiously. Cavalry probed for an opening on the left, finding none. The rebels attempted to close, but the steady volleys from the boys in blue kept them at bay. The engagement ended in an uneasy stalemate, both sides aware that more fighting would need to be done tomorrow.
<urn:uuid:51650dfc-aa0f-4fca-9ef7-0eb0e5484adf>
CC-MAIN-2013-20
http://cwbattlefields.blogspot.com/2010_04_01_archive.html
2013-05-22T08:27:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.931165
254
January 15, 2009 Acronyms are some of my favorite writing exercises. I am repeatedly impressed with the amount and quality of helpful information that can surface through the use of acronyms. Acronyms are helpful when you get stuck. They are also particularly helpful when addressing a topic head-on or “with logic” is getting you nowhere. Sometimes, it is better to take a more gentle, roundabout, less direct approach. Let the information and feelings surface on their own without having to break the no-talk rules that are often so deeply embedded within. Acronyms are particularly helpful when you just can’t quite figure out how to say what is going on for you. Or, when the parts inside are struggling with whether to tell you or not, and they don’t want to say it directly. Acronyms are a creative way of “telling without telling.” Pick any word or phrase or theme that describes how you feeling or what you are thinking at that moment. For example: - What’s bothering me today? Upset about school; Angry with my boss; Blocked feelings - How would I describe how I feel today? Frustrated and mad; totally numb; scared of everything - What about my relationship with _________. My mother is stupid; Afternoons with Suzie; Uncle Sam is weird - I am remembering ________. Nights at that house; Visits from Ted; Nightmares - I keep thinking about __________. Voices I hear; Seeing others inside; My puppy Patches Write this word or phrase vertically on the page. As you think of that theme, take one letter at a time, and write down the first word or phrase that you think of that starts with that particular letter. Again, there is no right or wrong, just write down the words that come to mind as you think about your theme word. If you immediately think of more than one word for any particular letter, you can write down both words if you want to. If you get stuck on a letter that is difficult, you can adjust the exercise however you see fit. The easiest option is to turn the difficult letter into any “miscellaneous” letter of your choice, allowing you to fill that spot in with any words that come to mind about your theme. Once you have completed the list of words for your acronym, read through what you have written. Take this writing exercise a step further by using that same list of words as parts of a paragraph. The words can be used in any order in combination with as many other words as needed to complete your paragraph. Read through your paragraph. Is there a particular phrase, or word that stands out to you? Again, there is no right or wrong answer. Pick a word or phrase that either needs further explanation, or seems to summarize your thoughts the best, or just “hits you” as important. Using this new word or phrase, start the exercise again. Repeat this process as many times as necessary – with a new acronym, a new list of words, a new summary paragraph. You can repeat this process again and again because each new acronym will lead to greater understanding of the issue at hand. Example of Acronym Writing: Reaching the inside is not as hard as you might think. Yes, they have experienced terrible things that no one should ever have to endure. They need reassurance that they will never have to do that yucky stuff ever again. Let each part of you live a safe life. R real scared C crying, comfort I understand that everybody feels real scared about writing, and talking, and telling. It is important to know the reality of what has happened so you can learn how to become safe. 
It is ok now for each of the child parts to have comfort. They are still crying because they have been hurt again and again. They need to know they can always be safe. I am here to help you find safety. Nobody deserves to be hurt, not even the inside parts that are named Nobody. Pick the word or phrase that sticks out for you in this second paragraph. Do a third acronym with those words, then a fourth acronym, then a fifth, etc. Keep going until you have reached some answers to the words and feelings you were searching for. Kathy Broady LCSW
<urn:uuid:4d16ed81-7f8e-4dea-9ecb-858e08faf3d2>
CC-MAIN-2013-20
http://discussingdissociation.wordpress.com/2009/01/15/
2013-05-22T08:12:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959015
917
The capitalized word "STREAMS" refers to the STREAMS programming model and facilities. The word "Stream" refers to an instance of a full-duplex path using the model and facilities between a user application and a driver. A Stream is a data path, that passes data in both directions between a STREAMS driver in kernel space, and a process in user space. An application creates a Stream by opening a STREAMS device (see Figure 1-1). A Stream-head is the end of the Stream nearest the user process. It is the interface between the Stream and the user process. When a STREAMS device is first opened, the Stream consists of only a Stream head and a STREAMS driver. A STREAMS module is a defined set of kernel-level routines and data structures. A module does "black-box" processing on data that passes through it. For example, a module converts lowercase characters to uppercase, or adds network routing information. A STREAMS module is dynamically pushed on the Stream from user level by an application. Full details on modules and their operation are covered in Chapter 10, Modules. A character device driver that implements the STREAMS interface. A STREAMS device driver exists below the Stream head and any modules. It can act on an external I/O device, or it can be an internal software driver, called a pseudo-device driver. The driver transfers data between the kernel and the device. The interfaces between the driver and kernel are known collectively as the Solaris 2.x Device Driver Interface/Driver Kernel Interface (Solaris 2.x DDI/DKI). The relationship between the driver and the rest of the UNIX kernel is explained in Writing Device Drivers. Details of device drivers are explained in Chapter 9, STREAMS Drivers. The means by which all I/O is done under STREAMS. Data on a Stream is passed in the form of messages. Each Stream head, STREAMS module, and driver has a read side and a write side. When messages go from one module's read side to the next module's read side they are said to be traveling upstream. Messages passing from one module's write side to the next module's write side are said to be traveling downstream. Kernel-level operation of messages is discussed on "Messages". A container for messages. Each Stream head, driver, and module has its own pair of queues, one queue for the read side and one queue for the write side. Messages are ordered into queues, generally on a first-in, first-out basis (FIFO), according to priorities associated with them. Kernel-level details of queues are covered on "Queues". The ioctl(2) interface performs control operations on and through device drivers that cannot be done through the read(2) and write(2) interfaces. ioctl(2) operations include pushing and popping modules on and off the Stream, flushing the Stream, and manipulating signals and options. Certain ioctl(2) commands for STREAMS operate on the whole Stream, not just the module or driver. The streamio(7I) manual page describes STREAMS ioctl(2) commands. Chapter 4, STREAMS Driver and Module Interfaces details Inter-Stream communications. The modularity of STREAMS allows one or more upper Streams to route data into one or more lower Streams. This process is defined as multiplexing (mux). Example configurations of multiplexers start on "Multiplexing". Polling within STREAMS allows a user process to detect events occurring at the Stream head, specifying the event to look for and the amount of time to wait for it to happen. An application might need to interact with multiple Streams. 
The poll(2) system call allows applications to detect events that occur at the head of one or more Streams. Chapter 3, STREAMS Application-level Mechanisms describes polling. Flow control regulates the rate of message transfer between the user process, Stream head, modules, and driver. With flow control, a module that cannot process data at the rate being sent can queue the data to avoid flooding modules upstream with data. Flow control is local to each module or driver and voluntary. Chapter 8, Messages - Kernel Level describes flow control.
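To tie these mechanisms together, the following is a minimal user-level sketch of a Stream being opened, a module being pushed with ioctl(2), and poll(2) being used to wait for input. It is illustrative only: the device path /dev/examplestr and the module name "mymod" are placeholders, not names defined by this guide.

#include <stropts.h>   /* STREAMS ioctl(2) commands: I_PUSH, I_POP */
#include <poll.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Opening a STREAMS device creates the Stream: a Stream head in the
       kernel connected directly to the driver. The path is a placeholder. */
    int fd = open("/dev/examplestr", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Push a processing module between the Stream head and the driver.
       "mymod" is a hypothetical module name used only for illustration. */
    if (ioctl(fd, I_PUSH, "mymod") < 0)
        perror("ioctl I_PUSH");

    /* Wait up to five seconds for a message to arrive at the Stream head. */
    struct pollfd pfd;
    pfd.fd = fd;
    pfd.events = POLLIN;

    if (poll(&pfd, 1, 5000) > 0 && (pfd.revents & POLLIN)) {
        char buf[256];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            printf("read %zd bytes from the Stream\n", n);
    }

    /* Pop the module off the Stream; closing the descriptor dismantles
       the Stream. */
    ioctl(fd, I_POP, 0);
    close(fd);
    return 0;
}

Extending the same pollfd structure to an array of descriptors is how an application watches events on several Streams at once, as described above.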
<urn:uuid:9295a9d1-ad8e-4210-922e-4c4078017df5>
CC-MAIN-2013-20
http://docs.oracle.com/cd/E19504-01/802-5893/6i9kci4qb/index.html
2013-05-22T08:15:27Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.909184
894
As Beijing prepared to host the 2008 Summer Olympics, the city of Qingdao, roughly 550 kilometers (340 miles) to the southeast, prepared its coastal waters for the games’ sailing competitions. With the games looming just weeks away, Chinese officials and residents of Qingdao (also known as Tsingtao) struggled with a stubborn adversary: algae. On June 28, 2008, the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Aqua satellite captured these images of Qingdao and the bay of Jiaozhou Wan. The top image is a natural-color image similar to what a digital camera would photograph. The bottom image is a false-color image made from a combination of light visible to human eyes and infrared light our eyes cannot see. In this image, vegetation appears vibrant green, including the strips of algae floating in the bay and in the nearby coastal waters. These images show the bay at the beginning of a local cleanup effort. (Daily images of the area are available from the MODIS Rapid Response Team.) With sailing events scheduled to begin on August 9, Chinese officials ordered the algae cleanup to be completed by July 15. A spokesman for the Qingdao Sailing Committee planned to complete the project by July 10. The cleanup effort included 20,000 people and 1,000 boats. According to news reports, opinions differed on the cause of the larger-than-normal algal bloom, with some people citing increased rainfall and unusually warm waters in the Yellow Sea. Others blamed wastewater, and industrial and agricultural pollution for providing excess nutrients on which the algae could thrive. Regardless of the cause, many locals agreed that the algae bloom was the worst they had seen. - Ransom, I. (2008, July 7). Olympic sailing venue algae-free by July 10, says official. The Guardian. Accessed July 8, 2008. - Yardley, J. (2008, July 1). To save Olympic sailing races, China fights algae. The New York Times. Accessed July 8, 2008. NASA image courtesy the MODIS Rapid Response Team. Caption by Michon Scott and Rebecca Lindsey. - Aqua - MODIS
<urn:uuid:c1ca84af-bec5-42a0-bbf4-34fc74a539ee>
CC-MAIN-2013-20
http://earthobservatory.nasa.gov/IOTD/view.php?id=8897
2013-05-22T08:33:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937855
444
Arrhenius, Svante: defined acids and bases in terms of behavior in water (172-173). Aspdin, Joseph: rediscovered the Roman use of CaO for cement (827). Avogadro, Amedeo: Avogadro's Hypothesis in 1811: equal volumes of different gases (at the same temperature and pressure) contain equal numbers of particles (13-15). Balmer, Johann: found that hydrogen atoms emit a series of lines in visible light: ν = (1/4 − 1/n²) × 3.29 × 10¹⁵ s⁻¹ (534). Bartlett, Neil: found a reaction with Xe and fluorine (800-801). Becquerel, Antoine: discoverer of radioactivity; observed that potassium uranyl sulfate exposed photographic plates in the dark (499). Binnig, Gerd: developed the Scanning Tunneling Microscope (STM), which uses low-energy electrons to image atoms (30). Black, Joseph: showed that marble (CaCO3) dissociated when burned to form CaO and CO2 (100). Bohr, Niels: proposed a hydrogen-atom model with quantized energy states; Bohr's model of the atom provided a bridge from classical physics to quantum mechanics (534, 538, 542). Boltzmann, Ludwig: found the speed distribution f(u) for gas molecules of mass m at temperature T, the Maxwell-Boltzmann speed distribution (113, 116-117). Boyle, Robert: wrote the influential book The Skeptical Chemist in 1660; published The Spring of the Air and its Effects, on the properties of gases; Boyle's Law: PV = C (3-4, 103-104). Bragg, William: Nobel Prize for diffraction of x-rays by crystals (698). Bronsted, Johannes: defined acids and bases; acids donate H+ and bases accept H+ (315). Carnot, Sadi: concluded there is no device that can transfer heat from a colder to a warmer reservoir without net expenditure of work (251-252). Cavendish, Henry: noticed a residual gas (argon) when O2 and N2 were removed from air (23, 621, 768). Charles, Jacques: first human flight in a hydrogen balloon, 1783 (106, 735). Clausius, Rudolf: stated the Second Law of Thermodynamics as "It is impossible to construct a device that will transfer heat from a cold reservoir to a hot reservoir in a continuous cycle with no net expenditure of work" (113, 255-256). Coulomb, Charles: noticed + and − forces and electrical charge between charged bodies; the unit of charge, the coulomb (C), is named for him (A.16). Curie, Marie: isolated radium; curium (Cm) is named in her honor (8, 18). Curie, Pierre: isolated radium (18). Dalton, John: showed evidence of atoms' existence: the Atomic Theory of Matter (10-14, 111). Davy, Humphry: obtained aluminum as an alloy of iron and proved it to have a metallic nature (409, 746, 783). Deacon, Henry: developed the process to convert hydrogen chloride into chlorine (784). de Broglie, Louis: proposed that any particle moving with linear momentum p has wave-like properties and a wavelength (h/p) associated with it; gave wavelength properties to electrons in atoms (542-546). Einstein, Albert: was responsible for the theory of relativity, which stated that a change in mass always accompanies a change in energy; said that spontaneous transformations of one nucleus into others can occur only if the combined mass of the products is smaller than the mass of the original nuclide; wrote a letter to President Franklin Roosevelt warning him of the possible military uses of fission: he was concerned that Germany might try to develop a fission bomb during WWII; used Planck's quantum hypothesis to explain the photoelectric effect (4, 8, 193, 493, 508). Faraday, Michael: stated Faraday's laws, which summarize the stoichiometry of electrochemical processes (192, 407, 409). Fermi, Enrico: has the element fermium, with atomic number 100, named after him. He led a group of physicists to study the relationship between neutrons and the reactions with nearby nuclei and their radioactive products (8, 507-510). Franck, James: made an apparatus, along with Gustav Hertz, that tested Bohr's hypothesis. This contraption showed how atoms are quantized, moving from the ground state to an excited state while emitting energy. Franck and Hertz evaluated this emitted light using a spectrograph (534-535). Franklin, Rosalind: was a British biophysicist who used x-ray diffraction to determine that there was more than one strand involved in the DNA structure, which twisted into a helix (873-4). Frasch, Herman: devised a method, called the Frasch process, to mine sulfur by making it a liquid and pumping it to the surface (759). Galilei, Galileo: was an Italian astronomer and physicist (101). Gay-Lussac, Joseph: made the law of combining volumes, such as in writing equations to show how reactions take place. He also determined the freezing point of water in 1802 to be 267 K. He was close in approximating this because we now know that it is closer to 273.15 K. In 1835, he built a tower to reduce the venting of NO gas into the air. This tower, in a roundabout way, converts this NO gas back to less dangerous NO2 (13-14, 106, 756). Graham, Thomas: developed a law of effusion of gas particles through a hole into a vacuum. He says that this rate is inversely proportional to the square root of the molar mass (120). Haber, Fritz: along with Max Born, developed a thermodynamic cycle that can measure ionic lattice energies, called the Born-Haber cycle. Also, Haber and Walther Nernst studied ammonia and the elements that make it up when affected by temperature and by pressure. Then Haber and Bosch developed a process for synthesizing ammonia, the Haber-Bosch process (715, 768-769). Hertz, Gustav: experimented along with James Franck in 1914 with Bohr's hypothesis on the energy of atoms and how they are quantized. Their apparatus contained a cathode where electrons were emitted into a low-pressure gas. These electrons sped toward the anode and passed through the holes into a collector plate (534-535). Joule, James: an English physicist who, with German physician Julius Mayer, demonstrated that a substance's temperature could be increased not only by adding heat to it, but by doing work on it. The SI unit of energy, the joule, was named after him (211-212, 236). Kelvin, William Thomson, Lord: agreed with Sadi Carnot's claim that it is not possible to create an engine that is 100 percent efficient. He stated this more generally when he said, "There is no device that can transform heat withdrawn from a reservoir completely into work with no other effect" (252). Lavoisier, Antoine: a French chemist who proved that the total mass of products equals the total mass of reactants in a chemical reaction. He introduced the law of conservation of mass, and he helped discover that air is made mostly of oxygen and nitrogen (9-10, 99, 693). Leblanc, Nicolas: a French physician and amateur chemist who discovered a way to produce sodium carbonate from sodium chloride. The process was eventually named after him (the Leblanc process), but he wasn't honored for the discovery while he was alive (782). Le Chatelier, Henri: famous for "Le Chatelier's principle," which states, "A system in equilibrium that is subjected to a stress will react in a way that tends to counteract the stress." This principle can be used to predict the direction in which a system under external stress will change (300). Lewis, G.N.: gave the name "photons" to the particles of light (530). Lowry, Thomas: introduced the modern definitions of acids and bases in the same year as, but working independently from, Johannes Bronsted. Acids known as "Bronsted-Lowry acids" donate a hydrogen ion, and bases known as "Bronsted-Lowry bases" can accept this hydrogen ion (315). Maxwell, James Clerk: a physicist who introduced the kinetic theory of gases along with Rudolf Clausius and Ludwig Boltzmann. He also, about the same time as but independently from Boltzmann, introduced the Maxwell-Boltzmann speed distribution, which is defined in Oxtoby as the probable distribution of the speeds of molecules in a gas at thermal equilibrium (15, 113, 116-117). Meitner, Lise: a colleague of Otto Hahn who introduced the term fission (508). Mendeleev, Dmitri: a Russian chemist after whom the element mendelevium is named; he introduced the periodic table. Lothar Meyer also introduced a periodic table, but Mendeleev's was more accurate (8, 22-23, 568). Millikan, Robert: an American physicist who, with his student H.A. Fletcher, determined the electrical charge of an electron (17). Mulliken, Robert: defined the electronegativity of each atom in 1934. Mulliken stated that the electronegativity of an atom is proportional to the average of its ionization energy and electron affinity (58-59, 77). Muller, Karl Alex: teamed with J. Georg Bednorz to search for perovskite ceramics with higher transition temperatures than had previously been recorded. The pair found a transition temperature of 35 K in a Ba–La–Cu–O perovskite phase. This was nearly 12 kelvins higher than any previous superconductor (831). Natta, Giulio: used the Ziegler catalyst, TiCl4, to make isotactic polypropylene in 1954. He also developed his own catalyst, VCl4, to make syndiotactic polypropylene (861). Nernst, Walther: did much of his research on equilibrium. His study of the effects of temperature on the entropy of a pure substance gave rise to the Nernst heat theorem and eventually the third law of thermodynamics. Nernst is perhaps best known for his equation that illustrates the relationship between free energy change and the reaction quotient (264, 768). Newton, Isaac: known widely for his interactions with apples, Newton was the father of classical mechanics. His gravitational force and projectile motion equations are still used today (523, A.13). Oersted, Hans Christian: prepared the first pure form of aluminum in 1825, just 16 years after its discovery. He did this by reducing aluminum chloride with an amalgam of potassium (746). Oppenheimer, J. Robert: teamed with Max Born in becoming the first to illustrate electron energy levels for diatomic molecules. Their schematic diagram of energy levels clearly depicts vibrational, rotational, and excited electronic states (609). Pauling, Linus: associated with many facets of chemistry, including the analysis of protein structure and the production of an electronegativity scale. Pauling also correctly hypothesized xenon chemistry nearly 30 years before it was ever produced (58, 77, 78c, 800). Petit, Alexis: noted for his work with metallic heat capacities. His research resulted in the Dulong and Petit rule, which states that about 25 J is necessary to heat one mole of a metallic element by one kelvin (236p, 239p). Phillips, Peregrine: employed platinum as a catalyst to convert SO2 to SO3. This discovery went largely unnoticed for forty years until the German dye industry demanded a way of producing a more concentrated form of sulfuric acid than was previously possible (757). Planck, Max: is known for solving the blackbody radiation paradox in 1900. He felt that it wasn't possible to put an arbitrarily small amount of energy into an oscillator with a certain frequency, but that the oscillator must gain and lose energy in packets called quanta. With this theory, he used statistical thermodynamics to find the average energy of quantized oscillators as a function of temperature. The constant he introduced is known today as Planck's constant: h = 6.62608 × 10⁻³⁴ J s. Einstein later used this for his photoelectric effect (527-532). Priestley, Joseph: along with Lavoisier and other scientists, Joseph Priestley showed that air is made mostly of oxygen and nitrogen. He used the reaction 2HgO(s) → 2Hg(l) + O2(g) in discovering oxygen. He is also known for studying the composition of the atmosphere, and he introduced the "rubber" that is known as the eraser today (99-100, 621, 862). Proust, Joseph: Joseph Proust was one who disagreed with Berthollet's findings and felt that any variation in compounds was because of experimental error. He is known for coming up with the Law of Definite Proportions, which states: in a given chemical compound, the proportions by mass of the elements that compose it are fixed, independent of the origin of the compound or its mode of preparation. This finding was a crucial step in modern chemistry (10). Ramsay, William: along with Lord Rayleigh, William Ramsay isolated argon and predicted the existence of a new group of elements. He also studied helium and was among the first scientists to study the atmosphere's composition (23, 621). Raoult, Francois Marie: Francois Marie Raoult came up with the law known as Raoult's law, which states that for some solutions, a plot of solvent vapor pressure versus solvent mole fraction can be fitted by a straight line. The resulting equation is P1 = X1P1°. This law also forms the basis for the four colligative properties of dilute solutions (177). Strutt, John William: John William Strutt (Lord Rayleigh) discovered the gaseous element, now known as argon, that didn't fit with previously found elements. He was also responsible for developing a detailed description of the charged-particle oscillator model of blackbody radiation (755). Roebuck, John: John Roebuck was the first to employ the room-size lead chamber that expanded the manufacture of sulfuric acid, in 1746 (755). Roentgen, Wilhelm: Wilhelm Roentgen was responsible for discovering x-rays in 1895. This proved to be a valuable tool for determining crystal structures (697). Rohrer, Heinrich: along with Binnig, Heinrich Rohrer developed the scanning tunneling microscope, which uses low-energy electrons to obtain images of atoms. This confirms features such as the size of atoms and the distances between them. For his work, he won the Nobel Prize in 1986 (30). Roosevelt, Franklin: as president, Franklin Roosevelt authorized the Manhattan District Project. This was an effort by many to make a fission bomb (508). Thompson, Benjamin (Count Rumford): Benjamin Thompson suggested the qualitative equivalence of heat and work as a means of energy transfer. By observing cannons, he saw that the quantity of heat made in the boring was proportional to the amount of work done (211). Rutherford, Ernest: in 1911 Rutherford proposed a model of the atom in which the nucleus possesses most of the mass of the atom. The model is as follows: an atom with atomic number Z comprises a dense, central nucleus of positive charge with magnitude Ze surrounded by a total of Z individual electrons moving around the nucleus. This model is still accepted today. Rutherford's new theory was at odds with the previous theory. Attempts to reconcile the differences led to the creation of quantum mechanics (18, 499, 507, 518, A.17). Sanger, Frederick: completed the first sequence of the 51 amino acids in bovine insulin. This discovery earned Sanger the Nobel Prize in Chemistry in 1958 (870). Scheele, Carl Wilhelm: Swedish chemist who discovered chlorine in 1774. Scheele prepared it in its elemental form through the reaction of hydrochloric acid and pyrolusite (130p, 783-784). Schrodinger, Erwin: Austrian physicist who developed a fundamental explanation of the origin of energy quantization. He did this through an analogy with the theory of vibrations. (A form of quantization was already understood in this area.) He also developed a fundamental equation of quantum mechanics in 1925, known as the Schrodinger equation. The equation cannot be derived, but the "particle in a box" can be used to illustrate it (542, 546). Seaborg, Glenn: one of the key people in the production of plutonium and other elements with high atomic mass (8, 513). Simon, Pierre, Marquis de Laplace: French astronomer and mathematician who authored the work Memoir on Heat (237p). Smalley, Richard: investigated the long-chain molecules that radioastronomers had detected spectroscopically in the vicinity of red giant stars. In 1985 he (along with Harold Kroto and Robert Curl) reproduced that environment by vaporizing graphite with a concentrated laser beam. This revealed long-chain molecules, including C60. C60 was named buckminsterfullerene (the bucky-ball) after the architect Buckminster Fuller, the inventor of the geodesic dome (618). Solvay, Ernest: Belgian chemist and industrialist who developed and subsequently patented the process for improved carbonators and stills to recover ammonia. The Solvay process produced very pure sodium carbonate. The process was also continuous, which gave it a great advantage over batch processes (786). Strassmann, Fritz: originally sought to characterize the supposed transuranic elements. He (and Otto Hahn) discovered that barium was among the products of the bombardment of uranium by neutrons (508). Strutt, John William, Lord Rayleigh: discovered a gaseous element that did not fit into Mendeleev's model of the periodic table. Also developed a sophisticated classical description based on the oscillating electromagnetic waves within the cavity (23, 529). Sullivan, J.H.: investigated the effect of illuminating the reaction sample of H2 + I2 → 2HI. His discoveries led to the conclusion that iodine atoms participated in the reaction. This was contrary to the previous belief (465). Tennant, Charles: was a Scottish chemist who made important advances in the bleaching of cotton. In 1799 he patented a substance which he called "bleaching powder". This substance was made by saturating slaked lime with chlorine. Using this chemical along with sulfuric acid, bleaching cotton took only about a week instead of months (783-784). Thompson, Benjamin; Count Rumford: suggested the qualitative equivalence of heat and work as a means of energy transfer. He proposed this in 1798. He was a military advisor to the King of Bavaria. He noticed that the heat produced while boring cannons was proportional to the amount of work done upon them (211). Thomson, J.J.: was a renowned British physicist who in 1897 determined that cathode rays were actually negatively charged particles called "electrons". He used a series of experiments to prove this hypothesis. The world of chemistry was greatly improved because of this discovery (15-17, 24). Thomson, William; Lord Kelvin: was responsible for our present-day absolute temperature scale. Hence we get the unit of kelvins. He used experiments to determine absolute zero (252). Torricelli, Evangelista: lived from 1608 to 1647. Torricelli was an Italian scientist who assisted Galileo with many of his experiments. Torricelli is best known for creating the first barometer. He covered a tube filled with mercury and placed it upside down with the open end underneath the surface of a pool of mercury. The unit torr is named in honor of Torricelli (101-102). van der Waals, Johannes: Dutch physicist who made one of the earliest and most important corrections to the ideal gas theory. He determined what we today call the van der Waals equation of state: P = nRT/(V − nb) − an²/V². He is also associated with van der Waals radii. He is associated with these intermolecular forces because of the extensive work he did concerning nonbonded interactions between molecules and their influence on the properties of elements (123-124, 145). van't Hoff, Jacobus: Dutchman who in 1887 discovered an important relationship concerning osmotic pressure. He determined that the osmotic pressure of a dilute solution is the product of the concentration of molecules, the universal gas constant, and the absolute temperature. He is also associated with what is known as van't Hoff's equation: ln(K1/K2) = −(ΔH°/R)(1/T1 − 1/T2). Volta, Alessandro: discovered the electrochemical cell in 1800. This was done by stacking discs of zinc and silver with paper saturated in salt solution between each disc. For this contribution, the unit of electrical potential, the volt, is named after him (409, 424-425). von Laue, Max: suggested that crystals might serve as three-dimensional gratings for the diffraction of electromagnetic radiation with wavelength comparable to the distance between planes of atoms (698). von Weizsacker, Carl: was one of the scientists to propose the process of hydrogen burning that goes on in the stars. In this reaction two hydrogen nuclei are fused together to form a deuteron, a positron, and a neutrino. The deuteron and another hydrogen nucleus fuse together to form helium-3 (³He) and also emit a gamma ray. Then two of these helium-3 nuclei fuse together to make a stable helium-4 nucleus, and two hydrogen nuclei are emitted as byproducts to complete the cycle (512). Waage, P.: Norwegian chemist who, along with his brother-in-law, C.M. Guldberg, first stated the law of mass action in 1864. The law of mass action is the relationship between the concentrations or partial pressures of reactants and products of a chemical reaction at equilibrium, which is expressed by the empirical equilibrium constant (284). Walton, Ernest: Irish physicist who (along with Sir John Douglas Cockcroft) devised an accelerator that generated large numbers of particles at lower energies; they went on to disintegrate lithium nuclei with protons in 1932. It was for this that both Walton and Cockcroft received the 1951 Nobel Prize in Physics for the development of the first nuclear particle accelerator, now known as the Cockcroft-Walton accelerator (518p). Ward, Joshua: is credited with improving the lead-chamber process, in 1736, by replacing the earthenware vessels in which the sulfur was burned with glass bottles arranged in series to speed up the process (755). Watson, James: American geneticist and biophysicist who, along with Francis Crick, proposed the famous double-helix structure of deoxyribonucleic acid (DNA) in 1953. They proposed that DNA consisted of two helical strands of nucleic acid polymer bound together by hydrogen bonding, which also led to new theories in the way of polymer synthesis. For this accomplishment they received the 1962 Nobel Prize in Physiology or Medicine, along with Maurice Wilkins (873). Werner, Alfred: Alsatian-Swiss chemist credited with pioneering the field of coordination chemistry. In 1891 he presented the coordination theory, a simple classification of inorganic compounds, which extended the concept of isomerism. In 1913 he received the Nobel Prize in Chemistry for his research into the structure of coordination compounds (670-672). Wilkins, Maurice: British biophysicist credited with significant findings in the x-ray diffraction of deoxyribonucleic acid (DNA), which proved crucial to the determination of DNA's molecular structure by James Watson and Sir Francis Crick. For this the trio was awarded the 1962 Nobel Prize in Physiology or Medicine (873). Wilson, T.L.: credited with the discovery of calcium carbide. In 1892 Wilson was trying to make elemental calcium from lime and tar in an electric-arc furnace, where he obtained a product that was not calcium. He proceeded to dispose of it into a stream, whereupon it reacted violently, giving off large amounts of combustible gas; thus came about the discovery of calcium carbide (768). Wohler, Friedrich: credited with first synthesizing urea in 1828, from ammonia and cyanic acid. His work demonstrated that organic compounds could be synthesized from strictly inorganic starting materials (770). Yalow, Rosalyn: an American medical physicist credited with inventing the radioimmunoassay technique. By combining techniques from radioisotope tracing and immunology, she developed RIA, which simplified the means for measuring minute concentrations of biological and pharmacological substances in bodily fluids. It was for this that she received the Nobel Prize in Physiology or Medicine in 1977 jointly with Andrew V. Schally and Roger Guillemin (507). Ziegler, Karl: German chemist credited with showing that ethylene could be polymerized with a catalyst consisting of TiCl4 and an organoaluminum compound (such as Al(C2H5)3). He received the Nobel Prize in Chemistry jointly with Giulio Natta for his research in improving the quality of plastics (860-861).
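As a quick worked illustration of the van't Hoff equation quoted in the entry above, here is a sketch with invented numbers (they are not taken from the text): suppose ΔH° = +50 kJ mol⁻¹, K1 = 1.0 × 10⁻³ at T1 = 298 K, and we want K2 at T2 = 350 K.

\[
\ln\frac{K_1}{K_2} = -\frac{\Delta H^{\circ}}{R}\left(\frac{1}{T_1}-\frac{1}{T_2}\right)
= -\frac{5.0\times 10^{4}\ \mathrm{J\,mol^{-1}}}{8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}}
\left(\frac{1}{298\ \mathrm{K}}-\frac{1}{350\ \mathrm{K}}\right) \approx -3.0
\]

So K2 ≈ K1·e^(3.0) ≈ 2 × 10⁻²: for an endothermic reaction the equilibrium constant grows with temperature, as Le Chatelier's principle (see the entry above) would predict.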
<urn:uuid:2467b432-21e9-4739-b58f-a810cc5e9f9b>
CC-MAIN-2013-20
http://ed.augie.edu/~mlgrandb/ScientistPage.html
2013-05-22T07:54:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937313
5,541
What is Degenerative Arthritis?
Degenerative arthritis is a medical condition involving the wearing away, or degeneration, of the cartilage in the joints, leading to contact between the two bones. Degenerative arthritis is also termed osteoarthritis. The cartilage in the joints is responsible for smooth motion of the joint as well as for limiting friction and tension between the bones in the joint. In degenerative arthritis, the cartilage is damaged, leading to increased friction between the bones. Degenerative arthritis is most commonly seen in weight-bearing joints in the body such as the neck, knees and hips, but it can also occur in other joints. The damage in the joint also causes certain changes in the joint capsule as a result of the inflammatory response and healing. The initial response of the joint to damage is the formation of osteophytes in the damaged cartilage. Because of this, there is further narrowing of the joint space. The osteophytes result in the growth of subchondral cysts in the cartilage.
Degenerative Arthritis Image (image source: health-reply.com)
Symptoms of Degenerative Arthritis
Symptoms of degenerative arthritis include:
- Joint pain - The constant friction between the bones in the joint causes an inflammatory response in the area, causing pain. Pain is characterized as burning and sharp in the joint area, including the muscles and the joints. Cold temperatures usually aggravate the pain. The pain is relieved by gentle motion, but becomes severe with high-impact use.
- Stiffness of joints - The other structures such as the muscles, ligaments and tendons in the area are also affected, leading to limited motion of the joint. The cartilage that normally allows for smooth motion of the joint is damaged, which causes stiffness in the area.
- Swelling of the joint - The joint also appears swollen because of the underlying inflammatory process in the area.
- Crepitus - When the joint is moved, a characteristic crackling sound is heard because of contact between the bones in the joint.
- Heberden’s nodes - Heberden’s nodes are bony enlargements on the distal interphalangeal joints. These appear because of the growth of subchondral cysts in the joints. They are often not painful.
- Bouchard’s nodes - These are similar to Heberden’s nodes, only they occur on the proximal interphalangeal joints.
- Joint effusion - The joints may also develop a collection of fluid as a result of accumulation of synovial fluid in the area.
Degenerative Arthritis at Various Places
Degenerative arthritis is most commonly located at weight-bearing joints, but it can also occur in any joint in the body. The most common locations include:
Degenerative Arthritis of the Spine
The spine is a common site of degeneration because it bears the weight of the back and the head. The spine can also be affected by trauma and various inflammatory conditions such as ankylosing spondylitis, leading to degenerative arthritis.
Degenerative Arthritis of the Knee
The knee is also a common location of the condition, especially among obese patients, because of increased tension in the area as the knees bear the weight of the body.
Degenerative Arthritis of the Neck
The neck or cervical spine carries the weight of the cranium and its contents. It also has a wide range of motion, which increases the stress on the cervical joints.
Types of Degenerative Arthritis Primary degenerative arthritis is a joint disorder caused by destruction of the joint resulting from genetic factors. In primary osteoarthritis the water content of the cartilage decreases; this loss of fluid is associated with aging. The absence of protective fluid, or proteoglycan, makes the cartilage susceptible to damage. Primary osteoarthritis is classified into nodal osteoarthritis and erosive osteoarthritis. Nodal osteoarthritis involves the formation of nodes in the joints, while the erosive type involves progressive destruction of the joint. The erosive form tends to be more severe, but less common, than nodal osteoarthritis. Secondary osteoarthritis is a type of arthritis that results from degeneration of the joints due to underlying conditions that hasten the degeneration of the weight-bearing joints. Causes of Degenerative Arthritis Primary and secondary osteoarthritis have different causative factors. The main cause of primary osteoarthritis is genetics, with aging as an aggravating factor because of the normal degeneration of the joint. Secondary osteoarthritis is caused by various factors such as: - Obesity - Obesity is one of the most common causes of secondary arthritis because of the increased stress on the cartilage of the weight-bearing joints. - Congenital disorders - Congenital disorders of the joints involve defects present since birth, leading to an increased risk of developing degenerative arthritis. - Inflammatory disorders - Inflammatory disorders such as Lyme disease and Perthes disease increase the tendency to develop a similar inflammatory condition in the joint. - Metabolic disorders - Diseases such as diabetes and Wilson’s disease also lead to increased degradation of the joints. - Trauma - Trauma to the joints or ligaments also leads to tearing and injury in the area. Treatment of Degenerative Arthritis Various treatments for degenerative arthritis include: - Weight Loss - Losing weight is an important part of managing osteoarthritis, to prevent further stress and tension on the joints. - Exercise - Exercise is also essential for patients with osteoarthritis. Stretching exercises are especially beneficial because they strengthen the muscles and structures around the joints, thereby providing more support. - Administration of anti-inflammatory medications - Medications such as ibuprofen and naproxen relieve inflammation and pain. Aspirin may also be used, but it can cause adverse reactions such as gastric irritation and bleeding. Newer medications such as celecoxib can reduce these effects. Cymbalta, an antidepressant drug, is also used to reduce pain. - Apply warm compress - A warm compress may be placed on the joint for 15 minutes three times a day to relax the muscles in the area and relieve pain. - Alternative medications - Chondroitin and glucosamine can also be given to patients as food supplements. These substances are natural components of synovial fluid, and taking them may help in the synthesis of collagen in the cartilage, thereby increasing the integrity of the joints. - Surgery - Surgery is the last resort for degenerative arthritis. It may involve osteotomy, removal of subchondral cysts, or chondroplasty (repair of the cartilage). Severe cases may require joint replacement with artificial prostheses.
<urn:uuid:339297f4-d90a-43da-b20e-91b041d54633>
CC-MAIN-2013-20
http://ehealthwall.com/degenerative-arthritis-treatment-symptoms-causes-types/
2013-05-22T08:00:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.924223
1,458
Saylor.org's Ancient Civilizations of the World/Roman Literature "If the Romans had been obliged to learn Latin, they would never have found time to conquer the world."---Heinrich Heine, 19th-century German poet The language the Romans spoke and wrote was known as Latin. The importance of the Latin language in the modern world is immense. Mainly because of the territorial size of the Roman Empire, the Latin alphabet as well as vocabulary and grammar spread throughout the entirety of Western Europe. Although not all Western European languages are "Romance" or Latin-based (Italian, French, Spanish, Portuguese), even modern Germanic languages (German, English, Dutch, Danish, Swedish, Norwegian) use the Latin alphabet. Because Latin remained the base for so many European languages, and because Latin continued to thrive during the Middle Ages as the language of the Roman Catholic Church, the ability to read and understand pieces of Roman literature was not lost during the millennium between the fall of Rome and the rekindled interest in classical texts during the Renaissance. The Renaissance of Western Civilization owes much to the preservation of both Latin and many pieces of ancient Roman literature. Characteristics of Latin Literature Latin literature typically reflects the Roman interest in rhetoric, the art of speaking and persuading. The art of rhetoric was a vital skill during the Republican era, when votes often hinged on the case made for or against a candidate or proposition. In the Empire, the need for rhetorical skill diminished in the civic arena; however, the art form retained a place in literature. Due to Latin's highly inflected nature, with multiple forms of the same word, Latin sentences can be remarkably brief and pithy in comparison to, say, English. On the other hand, long and elaborate sentences in Latin can still make sense, thanks to the tight grammatical and syntactical rules of the language. Following the expansion of the Roman Republic (509–27 BCE) into several Greek territories between 270–240 BCE, Rome encountered Greek drama. From the later years of the republic and by means of the Roman Empire (27 BCE-476 CE), theatre spread west across Europe, around the Mediterranean and reached England; Roman theatre was more varied, extensive and sophisticated than that of any culture before it. While Greek drama continued to be performed throughout the Roman period, the year 240 BCE marks the beginning of regular Roman drama. From the beginning of the empire, however, interest in full-length drama declined in favour of a broader variety of theatrical entertainments. The first important works of Roman literature were the tragedies and comedies that Livius Andronicus wrote from 240 BCE. Five years later, Gnaeus Naevius also began to write drama. No plays from either writer have survived. While both dramatists composed in both genres, Andronicus was most appreciated for his tragedies and Naevius for his comedies; their successors tended to specialise in one or the other, which led to a separation of the subsequent development of each type of drama. By the beginning of the 2nd century BCE, drama was firmly established in Rome and a guild of writers (collegium poetarum) had been formed. The Roman comedies that have survived are all fabula palliata (comedies based on Greek subjects) and come from two dramatists: Titus Maccius Plautus (Plautus) and Publius Terentius Afer (Terence). 
In re-working the Greek originals, the Roman comic dramatists abolished the role of the chorus in dividing the drama into episodes and introduced musical accompaniment to its dialogue (between one-third of the dialogue in the comedies of Plautus and two-thirds in those of Terence). The action of all scenes is set in the exterior location of a street, and its complications often follow from eavesdropping. Plautus, the more popular of the two, wrote between 205 and 184 BCE and twenty of his comedies survive, of which his farces are best known; he was admired for the wit of his dialogue and his use of a variety of poetic meters. All of the six comedies that Terence wrote between 166 and 160 BCE have survived; the complexity of his plots, in which he often combined several Greek originals, was sometimes denounced, but his double-plots enabled a sophisticated presentation of contrasting human behaviour. No early Roman tragedy survives, though it was highly regarded in its day; historians know of three early tragedians—Quintus Ennius, Marcus Pacuvius and Lucius Accius. From the time of the empire, the work of two tragedians survives—one is an unknown author, while the other is the Stoic philosopher Seneca. Nine of Seneca's tragedies survive, all of which are fabula crepidata (tragedies adapted from Greek originals); his Phaedra, for example, was based on Euripides' Hippolytus. Historians do not know who wrote the only extant example of the fabula praetexta (tragedies based on Roman subjects), Octavia, but in former times it was mistakenly attributed to Seneca due to his appearance as a character in the tragedy. The sole ancient Roman novel to survive in its entirety is The Metamorphoses or The Golden Ass (Asinus aureus) by Apuleius. Written sometime in the late second century CE, The Golden Ass tells the story of Lucius, who is driven by an insatiable desire to see and practice magic. When Lucius attempts to use a spell to turn into a bird, he is actually turned into a jackass. The novel shares many traits with the Picaresque genre which would emerge in the 16th century. Hallmarks of the Picaresque include an episodic structure with the protagonist moving from amusing scenario to amusing scenario. The writing is satirical, imaginative and irreverent and features many side stories within the main narrative drawn from Roman folklore and mythology, most notably that of Cupid and Psyche. The myth tells of how the goddess Venus grew jealous of the beauty of the mortal woman Psyche and sent her son, Cupid, to destroy Psyche. However, when Cupid sees her, he immediately falls in love, and the rest of the story concerns the drama of their romance. The tale of Cupid and Psyche is the best known of the stories in The Golden Ass and became a popular subject for post-Renaissance art and literature. Not to be confused with Apuleius' Metamorphoses is The Metamorphoses by Ovid. Ovid's work is a narrative poem in fifteen books that describes the history of the world from its creation to the deification of Julius Caesar within a loose mythico-historical framework. Completed in 8 CE, it is recognized as a masterpiece of Golden Age Latin literature. One of the most-read of all classical works during the Middle Ages, the Metamorphoses continues to exert a profound influence on Western culture. 
Although historical records from the Roman Kingdom and early Republic years are scarce, several Roman historians from the late Republic and Imperial eras wrote historical works which have provided modern historians with the foundation of much of what we now know about the Roman world. Roman historians, although indebted to the Greek form of historiography, are richer in detail and reveal much more Roman concerns than their Greek predecessors. Famous Roman historians and their works include: Livy: Perhaps the most famous of the Roman historians, Titus Livius (Livy) wrote Ab Urbe Condita ("From the Founding of the City") remaining one of the best sources about early Roman history. This massive work, written between 27 and 25 BCE, tells the narrative of Roman history in its entirety from 753 BCE to the era in which Livy lived. Unfortunately, of the 142 books which made up Livy's history, only books 1-10 and 21-45 survived. Thus what remains in Livy's work pertains to the earliest years of the Roman Kingdom and Republic, the era most prone to myth and legend (Livy turns to both the Aeneas and Romulus and Remus versions of the city's founding). Livy crafted his his history with many rhetorical and factual embellishments. While these embellishments were not always inaccurate, Livy had an agenda in writing Ab Urbe Condita. Living in the opening years of the Roman Empire, Livy saw a decline in moral values as a decline in the greatness of Rome. He intended that his history would rekindle the morality of his countrymen, thus making Rome great again. Julius Caesar: The De Bello Gallico is Caesar’s account of the Gallic Wars. As the Wars were raging on, Caesar fell victim to a great deal of criticisms from Rome. De Bello Gallico is a response to these criticisms, and a way for Caesar to justify these Wars. His argument is that the Wars were both just and pious, and that he and his army attacked Gaul in self-defense. The Helvetians were forming a massive migration straight through the provinces. When a group of neighboring allies came to Caesar himself asking for help against these invading Helvetians, that was all the justification Caesar needed to gather his army. By creating an account that portrays himself as a superb military hero, Caesar was able to clear all doubts in Rome about his abilities as a leader. While it is obvious that Caesar used this account for his own gain, it is not to say that the De Bello Gallico is at all unreliable. Many of the victories that Caesar has written about did, in fact, occur. Smaller details, however, may have been altered, and the word choice makes the reader more sympathetic to Caesar’s cause. De Bello Gallico is an excellent example of the ways in which retellings of actual events can be spun to a person’s advantage. For this reason, De Bello Gallico is often looked at as a commentary, rather than a piece of actual historiography. Tacitus: Tacitus was born c. 56 AD in, most likely, either Cisalpine or Narbonese Gaul. Upon arriving in Rome, which would have happened by 75, he quickly began to lay down the tracks for his political career. By 88, he was made praetor under Domitian, and he was also a member of the quindecimviri sacris faciundis. From 89 to 93, Tacitus was away from Rome with his newly married wife, the daughter of the general Agricola. 97 saw Tacitus being named the consul suffectus under Nerva. It is likely that Tacitus held a proconsulship in Asia. His death is datable to c. 118. 
There is much scholarly debate concerning the order of publication of Tacitus’ works; traditional dates are given here. - 98 – Agricola (De vita Iulii Agricolae). This was a laudation of the author’s father-in-law, the aforementioned general Cn. Iulius Agricola. More than a biography, however, can be garnered from the Agricola: Tacitus includes sharp words and poignant phrases aimed at the emperor Domitian. - 98 – Germania (De origine et situ Germanorum). "belongs to a literary genre, describing the country, peoples and customs of a race". - c. 101/102– Dialogus (Dialogus de oratoribus). This is a commentary on the state of oratory as Tacitus sees it. - c. 109 – Histories. This work spanned the end of the reign of Nero to the death of Domitian. Unfortunately, the only extant books of this 12-14 volume work are 1-4 and a quarter of book 5. - Unknown – Annales (Ab excessu divi Augusti). This is Tacitus’ largest and final work. Some scholars also regard this as his most impressive work. The date of publication and whether it was completed at all are unknown. The Annales covered the reigns of Tiberius, Caligula, Claudius, and Nero. Like the Histories, parts of the Annales are lost: most of book 5, books 7-10, part of book 11, and everything after the middle of 16. Tacitus’ familiar invective is also present in this work. Tacitus’ style is very much like that of Sallust. Short, sharp phrases cut right to the point, and Tacitus makes no bones about conveying his point. His claim that he writes history "sine ira et studio" (“without anger and partiality”) (Annales I.1) is not exactly one that is true. Many of his passages ooze with hatred towards the emperors. Despite this seemingly obvious partisan style of writing, much of what is said can go under the radar, which is as Tacitus wanted things to be. His skill as an orator, which was praised by his good friend Pliny, no doubt contributes to his supreme mastery of the Latin language. Not one to mince words, Tacitus does not waste time with a history of Rome ab urbe condita. Rather, he gives a brief synopsis of the key points before he begins a lengthier summary of the reign of Augustus. From there, he launches into his scathing account of history from where Livy would have left off. Sallust:C. Sallustius Crispus, more commonly known as Sallust, was a Roman historian of the 1st century BCE, born c. 86 BCE in the Sabine community of Amiternum. There is some evidence that Sallust’s family belonged to a local aristocracy, but we do know that he did not belong to Rome’s ruling class. Thus he embarked on a political career as a “novus homo,” serving as a military tribune in the 60s BCE, quaestor from 55 to 54 BCE, and tribune of the plebs in 52 BCE. Sallust was expelled from the senate in 50 BCE on moral grounds, but quickly revived his career by attaching himself to Julius Caesar. He served as quaestor again in 48 BCE, as praetor in 46 BCE, and governed the new province in the former Numidian territory until 44 BCE. Sallust’s political career ended upon his return to Rome and Caesar’s assassination in 44 BCE. We possess in full two of the historical works that have been convincingly ascribed to Sallust, the monographs, Bellum Catilinae and Bellum Jugurthinum. We have only fragments of the third work, the Historiae. There is less agreement about the authorship of some other works that have, at times, been attributed to him. 
In Bellum Catilinae, Sallust outlines the conspiracy of Catiline, a brash and ambitious patrician who tried to seize power in Rome in 63 BCE. In his other monograph, Sallust used the Jugurthine War as a backdrop for his examination of the development of party struggles in Rome in the 1st century BCE. The Historiae describe in general the history of the years 78-67 BCE. Although Sallust’s purposes in writing have been debated over the years, it seems logical to classify him as a senatorial historian who adopted the attitude of a censor. The historical details outlined in his monographs serve as paradigms for Sallust. In Bellum Catilinae, Sallust uses the figure of Catiline as a symbol of the corrupt Roman nobility. Indeed, much of what Sallust writes in this work does not even concern Catiline. The content of Bellum Jugurthinum also suggests that Sallust was more interested in character studies (e.g. Marius) than the details of the war itself. With respect to writing style, the main influences on Sallust’s work were Thucydides and Cato the Elder. Evidence of the former’s influence includes emphasis on politics, use of archaisms, character analysis, and selective omission of details. The use of such devices as asyndeton, anaphora, and chiasmus reflect preference for the old-fashioned Latin style of Cato to the Ciceronian periodic structure of his own era. Whether Sallust is considered a reliable source or not, he is largely responsible for our current image of Rome in the late republic. He doubtless incorporates elements of exaggeration in his works and has at times been described as more of an artist or politician than historian. But our understanding of the moral and ethical realities of Rome in the 1st century BCE would be much weaker if Sallust’s works did not survive. Suetonius:Gaius Suetonius Tranquillus (Suetonius) is most famous for his biographies of the Julio-Claudian and Flavian emperors and other notable historical figures. He was born around 70 to an equestrian family. Living during the times of the Emperor Trajan and having a connection to Pliny the Younger, Suetonius was able to begin a rise in rank in the imperial administration. In c. 102, he was appointed to a military tribune position in Britain, which he did not actually accept. He was, though, among the staff for Pliny’s command in Bithynia. During the late period of Trajan’s rule and under Hadrian, he held various positions, until he was discharged. He had a close proximity to the government as well as access to the imperial archives, which can be seen in his historical biographies. Suetonius wrote a large number of biographies on important literary figures of the past (De Viris Illustribus). Included in the collection were notable poets, grammarians, orators, historians, and philosophers. This collection, like his other works, was not organized chronologically. Not all of it has survived to the present day, but there are a number of references in other sources to attribute fragments to this collection. His most famous work, though, is the De Vita Caesarum. This collection of twelve biographies tells the lives of the Julio-Claudian and Flavian Emperors, spanning from Julius Caesar to Domitian. Other than an introduction genealogy and a short summary of the subject’s youth and death, the biographies do not follow a chronological pattern. Rather than chronicling events as they happened in time, Suetonius presents them thematically. 
This style allowed him to compare the achievements and downfalls of each emperor using various examples of imperial responsibilities, such as building projects and public entertainment. However, it makes dating aspects of each emperor’s life and the events of the early Roman Empire difficult. It also completely removes the ability to extrapolate a causal sequence from the works. Suetonius’s purpose was not a historical recount of events, though, but rather an evaluation of the emperors themselves. Suetonius’s style is simple; he often quotes directly from sources that were used, and artistic organization and language does not seem to exist. He addresses points directly, without flowery or misleading language, and quotes from his sources often. However, he is often criticized that he was more interested in the interesting stories about the emperors and not about the actual occurrences of their reigns. The style, with which he writes, primarily stems from his overarching purpose, to catalogue the lives of his subjects. He was not writing an annalistic history, nor was he even trying to create a narrative. His goal was the evaluation of the emperors, portraying the events and actions of the person while they were in office. He focuses on the fulfillment of duties, criticizing those that did not live up to expectations, and praising bad emperors for times when they did fulfill their duties. There are a variety of other lost or incomplete works by Suetonius, many of which describe areas of culture and society, like the Roman Year or the names of seas. However, what we know about these is only through references outside the works themselves. "Roman Historiography" (Wikipedia) https://en.wikipedia.org/wiki/Roman_historiography
<urn:uuid:3d6fcf0e-8a64-448c-9e2a-d34ffd819d6e>
CC-MAIN-2013-20
http://en.m.wikibooks.org/wiki/Saylor.org's_Ancient_Civilizations_of_the_World/Roman_Literature
2013-05-22T08:32:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.970535
4,225
An evoked potential or evoked response is an electrical potential recorded from the nervous system of a human or other animal following presentation of a stimulus, as distinct from spontaneous potentials as detected by electroencephalography (EEG), electromyography (EMG), or other electrophysiological recording method. Evoked potential amplitudes tend to be low, ranging from less than a microvolt to several microvolts, compared to tens of microvolts for EEG, millivolts for EMG, and often close to a volt for ECG. To resolve these low-amplitude potentials against the background of ongoing EEG, ECG, EMG, and other biological signals and ambient noise, signal averaging is usually required. The signal is time-locked to the stimulus and most of the noise occurs randomly, allowing the noise to be averaged out with averaging of repeated responses. Signals can be recorded from cerebral cortex, brain stem, spinal cord and peripheral nerves. Usually the term "evoked potential" is reserved for responses involving either recording from, or stimulation of, central nervous system structures. Thus evoked compound motor action potentials (CMAP) or sensory nerve action potentials (SNAP) as used in nerve conduction studies (NCS) are generally not thought of as evoked potentials, though they do meet the above definition. Sensory evoked potentials Sensory evoked potentials (SEP) are recorded from the central nervous system following stimulation of sense organs (for example, visual evoked potentials elicited by a flashing light or changing pattern on a monitor; auditory evoked potentials by a click or tone stimulus presented through earphones) or by tactile or somatosensory evoked potential (SSEP) elicited by tactile or electrical stimulation of a sensory or mixed nerve in the periphery. They have been widely used in clinical diagnostic medicine since the 1970s, and also in intraoperative neurophysiology monitoring (IONM), also known as surgical neurophysiology. There are three kinds of evoked potentials in widespread clinical use: auditory evoked potentials, usually recorded from the scalp but originating at brainstem level; visual evoked potentials, and somatosensory evoked potentials, which are elicited by electrical stimulation of peripheral nerve. See below. Long and Allen reported the abnormal brainstem auditory evoked potentials (BAEP) in an alcoholic woman who recovered from Ondine's curse. These investigators hypothesized that their patient's brainstem was poisoned, but not destroyed, by her chronic alcoholism. Steady-state evoked potential An evoked potential is the electrical response of the brain to a sensory stimulus. Regan constructed an analogue Fourier series analyzer to record harmonics of the evoked potential to flickering (sinusoidally modulated) light but, rather than integrating the sine and cosine products, fed them to a two-pen recorder via lowpass filters. This allowed him to demonstrate that the brain attained a steady-state regime in which the amplitude and phase of the harmonics (frequency components) of the response were approximately constant over time. By analogy with the steady-state response of a resonant circuit that follows the initial transient response he defined an idealized steady-state evoked potential (SSEP) as a form of response to repetitive sensory stimulation in which the constituent frequency components of the response remain constant with time in both amplitude and phase. 
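The averaging and frequency-domain ideas described above are easy to illustrate numerically. The following sketch is not from the article or from any of the cited studies; it uses NumPy and entirely synthetic data (all signal amplitudes, latencies and frequencies are invented for illustration) to show, first, how time-locked averaging of repeated epochs pulls a microvolt-scale evoked response out of much larger background noise, and second, how the amplitude and phase of a single frequency component of a steady-state response can be read off a Fourier transform, in the spirit of the Fourier-series analysis described above.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                        # sampling rate in Hz (arbitrary choice)
t = np.arange(0, 0.5, 1 / fs)      # one 500 ms epoch

# (a) Transient evoked potential: a small time-locked response buried in noise.
true_ep = 5e-6 * np.exp(-((t - 0.1) / 0.02) ** 2)     # ~5 uV peak near 100 ms (synthetic)
n_trials = 200
noise = 50e-6 * rng.standard_normal((n_trials, t.size))  # ~50 uV background activity
trials = true_ep + noise
average = trials.mean(axis=0)      # residual noise shrinks roughly as 1/sqrt(n_trials)

snr_single = np.abs(true_ep).max() / noise[0].std()
snr_avg = np.abs(true_ep).max() / noise.mean(axis=0).std()
print(f"single-trial SNR ~ {snr_single:.2f}, after averaging {n_trials} trials ~ {snr_avg:.2f}")

# (b) Steady-state response: estimate amplitude and phase at the stimulation
# frequency from the spectrum of a longer recording.
f_stim = 8.0                       # flicker frequency in Hz (synthetic)
tt = np.arange(0, 10.0, 1 / fs)    # 10 s recording -> 0.1 Hz spectral resolution
ssep = 2e-6 * np.sin(2 * np.pi * f_stim * tt + 0.7) + 20e-6 * rng.standard_normal(tt.size)

spectrum = np.fft.rfft(ssep) / tt.size
freqs = np.fft.rfftfreq(tt.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_stim))   # frequency bin closest to the stimulus frequency
amplitude = 2 * np.abs(spectrum[k])     # factor 2 because the spectrum is one-sided
phase = np.angle(spectrum[k])
print(f"estimated SSEP amplitude {amplitude * 1e6:.2f} uV, phase {phase:.2f} rad")
```

The 1/sqrt(N) improvement from averaging is why the intraoperative recordings described later may need hundreds of repetitions, and the single-bin amplitude and phase readout is essentially what an analogue Fourier series analyzer provides in hardware.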
Although this definition implies a series of identical temporal waveforms, it is more helpful to define the SSEP in terms of the frequency components that are an alternative description of the time-domain waveform, because different frequency components can have quite different properties For example, the properties of the high-frequency flicker SSEP (whose peak amplitude is near 40–50 Hz) correspond to the properties of the subsequently discovered magnocellular neurons in the retina of the macaque monkey, while the properties of the medium-frequency flicker SSEP ( whose amplitude peak is near 15–20 Hz) correspond to the properties of parvocellular neurons. Since a SSEP can be completely described in terms of the amplitude and phase of each frequency component it can be quantified more unequivocally than an averaged transient evoked potential. It is sometimes said that SSEPs are elicited only by stimuli of high repetition frequency, but this is not generally correct. In principle, a sinusoidally modulated stimulus can elicit a SSEP even when its repetition frequency is low. Because of the high-frequency rolloff of the SSEP, high frequency stimulation can produce a near-sinusoidal SSEP waveform, but this is not germane to the definition of a SSEP. By using zoom-FFT to record SSEPs at the theoretical limit of spectral resolution ΔF (where ΔF in Hz is the reciprocal of the recording duration in seconds) Regan and Regan discovered that the amplitude and phase variability of the SSEP can be sufficiently small that the bandwidth of the SSEP’s constituent frequency components can be at the theoretical limit of spectral resolution up to at least a 500-second recording duration (0.002 Hz in this case). Repetitive sensory stimulation elicits a steady-state magnetic brain response that can be analysed in the same way as the SSEP. The "simultaneous stimulation" technique This technique allows several (e.g., four) SSEPs to be recorded simultaneously from any given location on the scalp. Different sites of stimulation or different stimuli can be tagged with slightly different frequencies that are virtually identical to the brain, but easily separated by Fourier series analyzers. For example, when two unpatterned lights are modulated at slightly different frequencies (F1 and F2) and superimposed, multiple nonlinear cross-modulation components of frequency (mF1 ± nF2) are created in the SSEP, where m and n are integers. These components allow nonlinear processing in the brain to be investigated. By frequency-tagging two superimposed gratings, spatial frequency and orientation tuning properties of the brain mechanisms that process spatial form can be isolated and studied. Stimuli of different sensory modalities can also be tagged. For example, a visual stimulus was flickered at Fv Hz and a simultaneously presented auditory tone was amplitude modulated at Fa Hz. The existence of a (2Fv + 2Fa) component in the evoked magnetic brain response demonstrated an audio-visual convergence area in the human brain, and the distribution of this response over the head allowed this brain area to be localized. More recently, frequency tagging has been extended from studies of sensory processing to studies of selective attention and of consciousness. The “sweep” technique The sweep technique is a hybrid frequency domain/time domain technique. 
A plot of, for example, response amplitude versus the check size of a stimulus checkerboard pattern plot can be obtained in 10 seconds, far faster than when time-domain averaging is used to record an evoked potential for each of several check sizes. In the original demonstration of the technique the sine and cosine products were fed through lowpass filters (as when recording a SSEP ) while viewing a pattern of fine checks whose black and white squares exchanged place six times per second. Then the size of the squares was progressively increased so as to give a plot of evoked potential amplitude versus check size (hence “sweep”). Subsequent authors have implemented the sweep technique by using computer software to increment the spatial frequency of a grating in a series of small steps and to compute a time-domain average for each discrete spatial frequency. A single sweep may be adequate or it may be necessary to average the graphs obtained in several sweeps with the averager triggered by the sweep cycle. Averaging 16 sweeps can improve the signal-to-noise ratio of the graph by a factor of four. The sweep technique has proved useful in measuring rapidly adapting visual processes and also for recording from babies, where recording duration is necessarily short. Norcia and Tyler have used the technique to document the development of visual acuity and contrast sensitivity through the first years of life. They have emphasized that, in diagnosing abnormal visual development, the more precise the developmental norms, the more sharply can the abnormal be distinguished from the normal, and to that end have documented normal visual development in a large group of infants. For many years the sweep technique has been used in paediatric ophthalmology (electrodiagnosis) clinics Worldwide. Evoked potential feedback This technique allows the SSEP to directly control the stimulus that elicits the SSEP without the conscious intervention of the experimental subject. For example, the running average of the SSEP can be arranged to increase the luminance of a checkerboard stimulus if the amplitude of the SSEP falls below some predetermined value, and to decrease luminance if it rises above this value. The amplitude of the SSEP then hovers about this predetermined value. Now the wavelength (colour) of the stimulus is progressively changed. The resulting plot of stimulus luminance versus wavelength is a plot of the spectral sensitivity of the visual system. Visual evoked potential In 1934, Adrian and Matthew noticed potential changes of the occipital EEG can be observed under stimulation of light. Ciganek developed the first nomenclature for occipital EEG components in 1961. During that same year, Hirsch and colleagues recorded a visual evoked potential (VEP) on the occipital lobe (externally and internally), and they discovered amplitudes recorded along the calcarine fissure were the largest. In 1965, Spehlmann used a checkerboard stimulation to describe human VEPs. An attempt to localize structures in the primary visual pathway was completed by Szikla and colleagues. Halliday and colleagues completed the first clinical investigations using VEP by recording delayed VEPs in a patient with retrobulbar neuritis in 1972. A wide variety of extensive research to improve procedures and theories has been conducted from the 1970s to today. VEP Stimuli The diffuse light flash stimulus is rarely used due to the high variability within and across subjects. 
However, it is beneficial to use this type of stimulus when testing infants or individuals with poor visual acuity. The checkerboard and grating patterns use light and dark squares and stripes, respectively. These squares and stripes are equal in size and are presented to one at a time via a television or computer screen. VEP Electrode Placement Electrode placement is extremely important to elicit a good VEP response free of artifact. One electrode is placed 2.5 cm above the inion and a reference electrode is placed at Fz. For a more detailed response, two additional electrodes can be placed 5 cm to the right and left of Oz. VEP Waves The VEP nomenclature is determined by using capital letters stating whether the peak is positive (P) or negative (N) followed by a number which indicates the average peak latency for that particular wave. For example, P50 is a wave with a positive peak at approximately 50 ms following stimulus onset. The average amplitude for VEP waves usually falls between 5 and 10 microvolts. Types of VEP Some specific VEPs are: - Sweep visual evoked potential - Binocular visual evoked potential - Chromatic visual evoked potential - Hemi-field visual evoked potential - Flash visual evoked potential - LED Goggle visual evoked potential - Motion visual evoked potential - Multifocal visual evoked potential - Multi-channel visual evoked potential - Multi-frequency visual evoked potential - Stereo-elicited visual evoked potential - Steady state visually evoked potential Auditory evoked potential Auditory evoked potential can be used to trace the signal generated by a sound through the ascending auditory pathway. The evoked potential is generated in the cochlea, goes through the cochlear nerve, through the cochlear nucleus, superior olivary complex, lateral lemniscus, to the inferior colliculus in the midbrain, on to the medial geniculate body, and finally to the cortex. Auditory evoked potentials (AEPs) are a subclass of event-related potentials (ERP)s. ERPs are brain responses that are time-locked to some “event”, such as a sensory stimulus, a mental event (such as recognition of a target stimulus), or the omission of a stimulus. For AEPs, the “event” is a sound. AEPs (and ERPs) are very small electrical voltage potentials originating from the brain recorded from the scalp in response to an auditory stimulus, such as different tones, speech sounds, etc. Somatosensory evoked potential Somatosensory Evoked Potentials (SSEPs) are used in neuromonitoring to assess the function of a patient's spinal cord during surgery. They are recorded by stimulating peripheral nerves, most commonly the tibial nerve, median nerve or ulnar nerve, typically with an electrical stimulus. The response is then recorded from the patient's scalp. Because of the low amplitude of the signal once it reaches the patient's scalp and the relatively high amount of electrical noise caused by background EEG, scalp muscle EMG or electrical devices in the room, the signal must be averaged. The use of averaging improves the signal-to-noise ratio. Typically, in the operating room, over 100 and up to 1,000 averages must be used to adequately resolve the evoked potential. The two most looked at aspects of an SSEP are the amplitude and latency of the peaks. The most predominant peaks have been studied and named in labs. Each peak is given a letter and a number in its name. For example, N20 refers to a negative peak (N) at 20ms. This peak is recorded from the cortex when the median nerve is stimulated. 
It most likely corresponds to the signal reaching the somatosensory cortex. When used in intraoperative monitoring, the latency and amplitude of the peak relative to the patient's post-intubation baseline is a crucial piece of information. Dramatic increases in latency or decreases in amplitude are indicators of neurological dysfunction. During surgery, the large amounts of anesthetic gases used can affect the amplitude and latencies of SSEPs. Any of the halogenated agents or nitrous oxide will increase latencies and decrease amplitudes of responses, sometimes to the point where a response can no longer be detected. For this reason, an anesthetic utilizing less halogenated agent and more intravenous hypnotic and narcotic is typically used. Laser evoked potential Conventional SSEPs monitor the functioning of the part of the somatosensory system involved in sensations such as touch and vibration. The part of the somatosensory system that transmits pain and temperature signals is monitored using laser evoked potentials (LEP). LEPs are evoked by applying finely focused, rapidly rising heat to bare skin using a laser. In the central nervous system they can detect damage to the spinothalamic tract, lateral brain stem, and fibers carrying pain and temperature signals from the thalamus to the cortex. In the peripheral nervous system pain and heat signals are carried along thin (C and A delta) fibers to the spinal cord, and LEPs can be used to determine whether a neuropathy is located in these small fibers as opposed to larger (touch, vibration) fibers. Intraoperative monitoring Somatosensory evoked potentials provide monitoring for the dorsal columns of the spinal cord. Sensory evoked potentials may also be used during surgeries which place brain structures at risk. They are effectively used to determine cortical ischemia during carotid endarterectomy surgeries and for mapping the sensory areas of the brain during brain surgery. Electrical stimulation of the scalp can produce an electrical current within the brain that activates the motor pathways of the pyramidal tracts. This technique is known as transcranial electrical motor potential (TcMEP) monitoring. This technique effectively evaluates the motor pathways in the central nervous system during surgeries which place these structures at risk. These motor pathways, including the lateral corticospinal tract, are located in the lateral and ventral funiculi of the spinal cord. Since the ventral and dorsal spinal cord have separate blood supply with very limited collateral flow, an anterior cord syndrome (paralysis or paresis with some preserved sensory function) is a possible surgical sequela, so it is important to have monitoring specific to the motor tracts as well as dorsal column monitoring. Transcranial magnetic stimulation versus electrical stimulation is generally regarded as unsuitable for intraoperative monitoring because it is more sensitive to anesthesia. Electrical stimulation is too painful for clinical use in awake patients. The two modalities are thus complementary, electrical stimulation being the choice for intraoperative monitoring, and magnetic for clinical applications. Motor evoked potentials Motor evoked potentials (MEP) are recorded from muscles following direct stimulation of exposed motor cortex, or transcranial stimulation of motor cortex, either magnetic or electrical. Transcranial magnetic MEP (TCmMEP) potentially offer clinical diagnostic applications. 
Transcranial electrical MEP (TCeMEP) has been in widespread use for several years for intraoperative monitoring of pyramidal tract functional integrity. During the 1990s there were attempts to monitor "motor evoked potentials", including "neurogenic motor evoked potentials" recorded from peripheral nerves, following direct electrical stimulation of the spinal cord. It has become clear that these "motor" potentials were almost entirely elicited by antidromic stimulation of sensory tracts—even when the recording was from muscles (antidromic sensory tract stimulation triggers myogenic responses through synapses at the root entry level). TCMEP, whether electrical or magnetic, is the most practical way to ensure pure motor responses, since stimulation of sensory cortex cannot result in descending impulses beyond the first synapse (synapses cannot be backfired). TMS-induced MEPs have been used in many experiments in cognitive neuroscience. Because MEP amplitude is correlated with motor excitability, they offer a quantitative way to test the role of various types of intervention on the motor system (pharmacological, behavioral, lesion...). TMS-induced MEPs may thus serve as an index of covert motor preparation or facilitation, e.g., induced by the mirror neuron system when seeing someone else's actions. In addition, MEPs are used as a reference to adjust the intensity of stimulation that needs to be delivered by TMS when targeting cortical regions whose response might not be as easily measurable, e.g., in the context of TMS-based therapy. See also - Karl E. Misulis, Toufic Fakhoury (2001). Spehlmann's Evoked Potential Primer. Butterworth-Heinemann. ISBN 0-7506-7333-8. - O’Shea, R. P., Roeber, U., & Bach, M. (2010). Evoked potentials: Vision. In E. B. Goldstein (Ed.), Encyclopedia of Perception (Vol. 1, pp. 399-400, xli). Los Angeles: Sage. ISBN 978-1-4129-4081-8 - Long KJ, Allen N (1984). "Abnormal Brainstem Auditory Evoked Potentials Following Ondine's Curse". Arch. Neurol 41 (10): 1109–1110. PMID 6477223. - Regan D (1966). "Some characteristics of average steady–state and transient responses evoked by modulated light". Electroencephalography and Clinical Neurophysiology 20 (3): 238–48. doi:10.1016/0013-4694(66)90088-5. PMID 4160391. - Regan D (1979). "Electrical responses evoked from the human brain". Scientific American 241 (6): 134–46. doi:10.1038/scientificamerican1279-134. PMID 504980. - Regan, D. (1989). Human brain electrophysiology: Evoked potentials and evoked magnetic fields in science and medicine. New York: Elsevier, 672 pp. - Regan D., Lee B.B. (1993). "A comparison of the human 40 Hz response with the properties of macaque ganglion cells". Visual Neuroscience 10 (3): 439–445. doi:10.1017/S0952523800004661. PMID 8494797. - Regan M.P., Regan D. (1988). "A frequency domain technique for characterizing nonlinearities in biological systems". Journal of Theoretical Biology 133 (3): 293–317. doi:10.1016/S0022-5193(88)80323-0. - Regan D., Heron J.R. (1969). "Clinical investigation of lesions of the visual pathway: a new objective technique". Journal of Neurology Neurosurgery and Psychiatry 32 (5): 479–83. doi:10.1136/jnnp.32.5.479. - Regan D., Regan M.P. (1988). "Objective evidence for phase–independent spatial frequency analysis in the human visual pathway". Vision Research 28 (1): 187–191. doi:10.1016/S0042-6989(88)80018-X. PMID 3413995. - Regan D., Regan M.P. (1987). "Nonlinearity in human visual responses to two–dimensional patterns and a limitation of Fourier methods". 
Vision Research 27 (12): 2181–3. doi:10.1016/0042-6989(87)90132-5. PMID 3447366. - Regan M.P., He P., Regan D. (1995). "An audio–visual convergence area in human brain". Experimental Brain Research 106 (3): 485–7. PMID 8983992. - Morgan S. T., Hansen J. C., Hillyard S. A. (1996). "Selective attention to stimulus location modulates the steady-state evoked potential". Proceedings of the National Academy of Science USA 93 (10): 4770–4774. doi:10.1073/pnas.93.10.4770. PMC 39354. PMID 8643478. - Srinivasan R, Russell DP, Edelman GM, Tononi G (1999). "Increased synchronization of neuromagnetic responses during conscious perception". Journal of Neuroscience 19 (13): 5435–48. PMID 10377353. - Regan D (1973). "Rapid objective refraction using evoked brain potentials". Investigative Ophthalmology 12 (9): 669–79. PMID 4742063. - Norcia A. M., Tyler C. W. (1985). "Infant VEP acuity measurements: Analysis of individual differences and measurement error". Electroencephalography and Clinical Neurophysiology 61 (5): 359–369. doi:10.1016/0013-4694(85)91026-0. PMID 2412787. - Regan D (1975). "Colour coding of pattern responses in man investigated by evoked potential feedback and direct plot techniques". Vision Research 15 (2): 175–183. doi:10.1016/0042-6989(75)90205-9. PMID 1129975. - Nelson J. I., Seiple W. H., Kupersmith M. J., Carr R. E. (1984). "A rapid evoked potential index of cortical adaptation". Investigative Ophthalmology and Vision Science 59 (6): 454–464. doi:10.1016/0168-5597(84)90004-2. PMID 6209112. - Norcia A. M., Tyler C. W. (1985). "Spatial frequency sweep VEP: Visual acuity during the first year of life". Vision Research 25 (10): 1399–1408. doi:10.1016/0042-6989(85)90217-2. PMID 4090273. - Norcia A. M., Tyler C. W., Allen D. (1986). "Electrophysiological assessment of contrast sensitivity in human infants". American Journal of Optometry and Physiological Optics 63 (1): 12–15. PMID 3942183. - Musiek, FE, & Baran, JA. (2007). The Auditory system. Boston, MA: Pearson Education, Inc. - Treede RD, Lorenz J, Baumgärtner U (December 2003). "Clinical usefulness of laser-evoked potentials". Neurophysiol Clin 33 (6): 303–14. PMID 14678844. - Catmur C., Walsh V., Heyes C. (2007). "Sensorimotor learning configures the human mirror system". Curr. Biol. 17 (17): 1527–1531. doi:10.1016/j.cub.2007.08.006. PMID 17716898.
<urn:uuid:f6d09480-6f3f-43f6-9a1e-8cbf574d3ecf>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Evoked_potential
2013-05-22T08:12:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.855217
5,312
Inattentional blindness, also known as perceptual blindness, is a psychological lack of attention that is not associated with any vision defects or deficits. Inattentional blindness defined Inattentional blindness is the failure to notice an unexpected stimulus that is in one's field of vision when other attention-demanding tasks are being performed. It is categorized as an attentional error and is not associated with any vision deficits. This typically happens because humans are overloaded with stimuli, and it is impossible to pay attention to all stimuli in one's environment; as a result, people remain unaware of the stimuli that go unattended. Inattentional blindness also has an effect on people’s perception. There have been multiple experiments performed that demonstrate this phenomenon. Cognitive capture Cognitive capture, or cognitive tunneling, is an inattentional blindness phenomenon in which the observer is too focused on instrumentation, the task at hand, internal thought, etc. and not on the present environment. For example, while driving, if the driver is focused on the speedometer and not on the road, they are suffering from cognitive capture. Experiments demonstrating inattentional blindness The term inattentional blindness was coined by Arien Mack and Irvin Rock in 1992. It was used as the title of Mack and Rock's book published by MIT Press in 1998. The book describes a series of experiments that demonstrated inattentional blindness. Invisible gorilla test To test for inattentional blindness, researchers ask participants to complete a primary task while an unexpected stimulus is presented. Afterwards, researchers ask participants if they saw anything unusual during the primary task. The best-known study demonstrating inattentional blindness is the Invisible gorilla test, conducted by Daniel Simons of the University of Illinois at Urbana-Champaign and Christopher Chabris of Harvard University. This study, a revised version of earlier studies conducted by Ulric Neisser (Neisser and Becklen, 1975), asked subjects to watch a short video of two groups of people (wearing black and white t-shirts) pass a basketball around. The subjects are told to either count the number of passes made by one of the teams or to keep count of bounce passes vs. aerial passes. In different versions of the video a woman walks through the scene carrying an umbrella, or wearing a full gorilla suit. After watching the video the subjects are asked if they saw anything out of the ordinary take place. In most groups, 50% of the subjects did not report seeing the gorilla. The failure to perceive the gorilla or the woman carrying an umbrella is attributed to the failure to attend to it while engaged in the difficult task of counting the number of passes of the ball. These results indicate that the relationship between what is in one's visual field and perception is based much more on attention than was previously thought. Although roughly 50% of the test subjects failed to notice the introduction of the gorilla or the umbrella, it is difficult to find published information on what percentage of study participants were able to accurately count the passes. 
Theoretical Background The perceptual cycle framework has been used as a theoretical basis for inattentional blindness. The perceptual cycle framework describes attention capture and awareness capture as occurring at two different stages of processing. Attention capture occurs when there is a shift in attention due to the salience of a stimulus, and awareness capture refers to the conscious acknowledgement of stimuli. Attentional sets are important because they are composed of the characteristics of the stimuli an individual is processing. Inattentional blindness occurs when there is an interaction between an individual's attentional set and the salience of the unexpected stimulus. Recognizing the unexpected stimulus can occur when the characteristics of the unexpected stimulus resemble the characteristics of the perceived stimuli. The attentional set theory of inattentional blindness has implications for false memories and eyewitness testimony. The perceptual cycle framework offers four major implications about inattentional blindness: 1) environmental cues aid in the detection of stimuli by providing orienting cues but are not enough to produce awareness, 2) perception requires effortful sustained attention, interpretation, and reinterpretation, 3) implicit memory may precede conscious perception, and 4) visual stimuli that are not expected, explored, or interpreted may not be perceived. Other proposed bases for inattentional blindness include top-down and bottom-up processing. The basic Simons and Chabris study was re-used on British television as a public safety advert designed to point out the potential dangers to cyclists caused by inattentional blindness in motorists. In the advert the gorilla is replaced by a moon-walking bear. Computer red cross experiment Another experiment was conducted by Steven Most, along with Daniel Simons, Christopher Chabris and Brian Scholl. Instead of a basketball game, they used stimuli presented by computer displays. In this experiment objects moved randomly on a computer screen. Participants were instructed to attend to the black objects and ignore the white, or vice versa. After several trials, a red cross unexpectedly appeared and traveled across the display, remaining on the computer screen for five seconds. The results of the experiment showed that even though the cross was distinctive from the black and white objects both in color and shape, about a third of participants missed it. They found that people may be attentionally tuned to certain perceptual dimensions, such as brightness or shape. Inattentional blindness is most likely to occur if the unexpected stimulus resembles the environment. Clown on a unicycle experiment One interesting experiment displayed how cell phones contributed to inattentional blindness in basic tasks such as walking. The stimulus for this experiment was a brightly colored clown on a unicycle. The individuals participating in this experiment were divided into four groups. They were either talking on the phone, listening to an MP3 player, walking by themselves or walking in pairs. The study showed that individuals engaged in cell phone conversations were least likely to notice the clown. This experiment was designed by Ira E. Hyman, S. Matthew Boss, Breanne M. Wise, Kira E. Mckenzie and Jenna M. Caggiano at Western Washington University. 
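Studies like the ones above ultimately compare the proportion of participants in each condition who noticed the unexpected object. As a purely illustrative sketch of how such a comparison could be analyzed, the snippet below uses SciPy's chi-square test on a 2x2 table of counts; the numbers and group labels are invented for illustration and are not the published data from the clown or red-cross studies.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are conditions, columns are [noticed, missed].
# These numbers are made up for illustration only.
counts = [
    [8, 16],   # e.g. walking while talking on a phone
    [17, 7],   # e.g. walking while listening to music
]

chi2, p, dof, expected = chi2_contingency(counts)
rate_phone = counts[0][0] / sum(counts[0])
rate_music = counts[1][0] / sum(counts[1])
print(f"noticing rate (phone condition): {rate_phone:.0%}")
print(f"noticing rate (music condition): {rate_music:.0%}")
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

Whether a difference of this size is convincing depends, as always, on the sample sizes; the published studies report their own counts and statistics.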
This experiment was based on the invisible gorilla experiment. The participants were children with an average age of 7.7 years. Participants watched a short video of a six player basketball game (three with white shirts, three with black shirts). The participants were instructed to watch only the players wearing black shirts and to count the number of times the team passed the ball. During the video a person in a gorilla suit walks through the scene. The film was projected onto a large screen (3.2 m X 2.4 m) and the participants sat in a chair 6 meters from the screen. The eye movement and fixations of the participants were recorded during the video and afterward the participants answered a series of questions. Only 40% of the participants reported seeing the gorilla, leaving 60% who did not report seeing the gorilla. There was no significant difference in accuracy of the counting between the two groups. Analyzing the eye movement and fixation data showed no significant difference in the time spent looking at the players (black or white) between the two groups. However, the 60% of participants who did not report seeing the gorilla spent an average of 25 frames (about one second) fixated on the gorilla, despite not perceiving it. A more common example of the above is illustrated in the game of Three-card Monte. Effects of expertise Another experiment conducted by Daniel Memmert tested the effects of different levels of expertise can have on inattentional blindness. The participants in this experiment included six different groups: Adult basketball experts with an average of twelve years of experience, junior basketball experts with an average of five years, children who had practiced the game for an average of two years, and novice counterparts for each age group. In this experiment the participants watched the invisible gorilla experiment video. The participants were instructed to watch only the players wearing white and to count the number of times the team passed the ball. The results of the experiment showed that experts did not count the number of passes more accurately than novices but did show that adult subjects were more accurate than the junior and children subjects. A much higher percentage of experts noticed the gorilla compared to novices and even the practiced children. 62% of the adult experts and 60% of the junior experts noticed the gorilla, suggesting that the difference between five and twelve years of experience has minimal effect on inattentional blindness. However, only 38% of the adult, 35% of the junior, and none of the children novices noticed the gorilla. Only 18% of the children with two years of practice noticed. This suggests that both age and experience can have a significant effect on inattentional blindness. Perception and inattentional blindness In 1995, Boston Police Officer Kenneth M. Conley was put on trial for claiming he did not see a violent assault incident between people while he was chasing a suspect. His alibi was accepted due to “inattentional blindness.” This case now leads to an experiment. In this experiment, Psychology professors Christopher Chabris of Union College and Daniel Simons of the University of Illinois demonstrated the same situations of the original incident of Officer Conley, with the help of their students. During the experiment, the students were asked to go on a three minute run around the campus. They were then asked to focus on keeping a steady distance and to count the number of times he touched his head to wipe off sweat. 
While focusing on running and keeping a steady distance, the students came across a staged fight along their running path. Most of the students missed the staged fight when running in the dark (the conditions in which Officer Conley had his experience), and even during the day 40 percent of the students missed it; they were so focused on the running task that they failed to notice the fight. Officer Conley was in a similar situation. Professor Simons stated that we cannot say with confidence that Conley did not see the fight during the suspect chase, but the results of the study show that it is possible to miss something as obvious as a fight simply because attention is concentrated on something else. Overall, perception can be impaired while attention is focused elsewhere; the effect depends on the person, the type of event that takes place, and the resemblance between the unexpected stimulus and the environment.
Possible causes
Research on inattentional blindness suggests four possible causes for the phenomenon: conspicuity, mental workload, expectations, and capacity.
Conspicuity refers to an object's ability to catch a person's attention: when something is conspicuous it is easily visible. Two factors determine conspicuity: sensory conspicuity and cognitive conspicuity. Sensory conspicuity factors are the physical properties of an object. If an item has bright colors, flashing lights, high contrast with its environment, or other attention-grabbing physical properties, it can attract a person's attention more easily; for example, people tend to notice brightly colored objects or striking patterns before they notice other objects. Cognitive conspicuity factors pertain to objects that are familiar or meaningful to the observer. People tend to notice objects faster if they have some meaning to their lives; for example, when a person hears his or her name, attention is drawn to the person who said it (the cocktail party effect illustrates this kind of cognitive conspicuity). When an object is not conspicuous, it is easier to be inattentionally blind to it. People tend to notice items that capture their attention in some way; if an object is not visually prominent or relevant, there is a higher chance that a person will miss it.
Mental workload and working memory
Mental workload refers to the demands placed on a person's cognitive resources, and a heavy workload can interfere with the processing of other stimuli. When a person focuses a great deal of attention on one stimulus, less attention is available for other stimuli; for example, when talking on the phone while driving, attention is mostly focused on the phone conversation, so less attention is devoted to driving. The mental workload could be anything from thinking about tasks that need to be done to tending to a baby in the back seat. When people have most of their attention focused on one thing, they are more vulnerable to inattentional blindness. However, the opposite is true as well: when a person has a very small mental workload, as when performing an everyday task, the task becomes automatic. Automatic processing can lessen one's mental workload, which can also lead to missing the unexpected stimulus. Working memory also has an effect on inattentional blindness: those who experience inattentional blindness are more likely to have a lower working memory capacity.
Cognitive psychologists have examined the relationship between working memory and inattention, but the evidence is inconclusive. Some researchers state that individuals with greater working memory capacity, and those with stronger working memory, are less susceptible to inattentional blindness. Other researchers hold that working memory does not influence inattentional blindness, because working memory does not influence all attentional processes. For example, in research conducted by Bredemeier and Simons, participants were given working memory tasks and a sustained-attention task. The first working memory task required participants to indicate whether a combination of letters matched a combination that had appeared earlier on a computer screen. The second working memory task required participants to determine whether a target letter was in the same position as previous letters. For the sustained-attention task, participants were asked to count how many times a white square touched the edges of a computer screen; during some of the trials, a grey cross moved around the screen. Once the tasks were completed, researchers asked participants whether they had noticed anything besides the white squares during the sustained-attention task. Results indicated that 70% of participants noticed the grey cross moving on the screen, which the researchers interpreted as suggesting that working memory does not influence susceptibility to inattentional blindness. On the other hand, a follow-up study to Bredemeier and Simons's experiment was conducted to further explore the impact of working memory using another working memory task. In this study, participants were asked to complete a math problem, and a letter was presented after each problem; after completing the math problems, participants were asked to recall the series of letters in sequential order. This task served as the working memory measure, and the same sustained-attention task was completed afterward. Using this method, only 27% of participants noticed the grey cross. The researchers concluded that working memory does influence one's experience of inattentional blindness, though not an individual's ability to handle the task demands. These two studies demonstrate the inconsistencies in the reported relationship between working memory and inattentional blindness.
When a person expects certain things to happen, he or she tends to block out other possibilities, and this can lead to inattentional blindness. For example, suppose person X is looking for a friend at a concert and knows that the friend (person Y) was wearing a yellow jacket. In order to find person Y, person X looks around for people wearing yellow, since it is easier to pick a color out of the crowd than a person. However, if person Y has taken off the jacket, person X could walk right past person Y and not notice, because he or she was looking for the yellow jacket. Because of expectations, experts can be more prone to this form of inattentional blindness than beginners: an expert knows what to expect when certain situations arise and therefore knows what to look for, which can cause that person to miss other important details that he or she was not looking for.
Attentional capacity, or neurological salience, is a measure of how much attention must be focused to complete a task. For example, an expert pianist can play a piano without thinking much, but a beginner has to consciously think about every note.
This capacity can be lessened by drugs, alcohol, fatigue, and age. With a small capacity, it is more likely that things will be missed; a person who is drunk, for instance, will probably miss more than a sober person would, while a person with a large attentional capacity is less likely to experience inattentional blindness.
Inattentional blindness is exploited by illusionists in the presentation of "magic shows": some tricks are performed by focusing the audience's attention on a distracting element, away from the elements of the scene being manipulated by the performer. Magicians call this misdirection.
References
- Most, Steven B. (2010). "What's "inattentional" about inattentional blindness?". Consciousness and Cognition 19 (4): 1102. doi:10.1016/j.concog.2010.01.011.
- Note: The term has also been applied to the "cognitive capture" of government regulatory agencies by the industries they are charged with regulating. The regulators may be seen as being so "captured" by the industry that they focus all their energy on the welfare of the industry and not on the welfare of the public. This concept may interact with "cognitive dissonance" to explain why people create local cultures that reflect some of the values in their local community, while completely ignoring others.
- Most, SB; Simons, DJ; Scholl, BJ; Jimenez, R; Clifford, E; Chabris, CF (January 2001). "How not to be seen: the contribution of similarity and selective ignoring to sustained inattentional blindness". Psychol Sci 12 (1): 9–17. doi:10.1111/1467-9280.00303. PMID 11294235.
- Change Blindness Study, The Invisible Gorilla (online).
- Most, Steven B; Scholl, Brian J; Simons, Daniel J; Clifford, Erin R (2005). "What you see is what you set: Sustained inattentional blindness and the capture of awareness". Psychological Review 112 (1): 217–242. doi:10.1037/0033-295X.112.1.217.
- Carpenter, Siri (2001). "Sights Unseen". Monitor on Psychology 32 (4): 54. Retrieved 10 October 2012.
- Hyman, Ira E.; Boss, S. Matthew; Wise, Breanne M.; McKenzie, Kira E.; Caggiano, Jenna M. (2009). "Did you see the unicycling clown? Inattentional blindness while walking and talking on a cell phone". Applied Cognitive Psychology 24 (5): 597–607. doi:10.1002/acp.1638.
- Memmert, D (September 2006). "The effects of eye movements, age, and expertise on inattentional blindness". Conscious Cogn 15 (3): 620–7. doi:10.1016/j.concog.2006.01.001. PMID 16487725.
- Chabris, Christopher F; Weinberger, Adam; Fontaine, Matthew; Simons, Daniel J (2011). "You Do Not Talk About Fight Club if You Do Not Notice Fight Club: Inattentional Blindness for a Simulated Real-World Assault". i-Perception 2 (2): 150–153. doi:10.1068/i0436.
- Mack, Arien (2003). "Inattentional blindness: Looking without seeing" (PDF). Current Directions in Psychological Science 12 (5): 179–184. doi:10.1111/1467-8721.01256. Retrieved 10 October 2012.
- Bredemeier, Keith; Simons, Daniel J (April 2012). "Working memory and inattentional blindness". Psychonomic Bulletin & Review 19 (2): 239–244. doi:10.3758/s13423-011-0204-8.
Further reading
- Mack, A., & Rock, I. (1998). Inattentional Blindness. MIT Press.
- Chun, Marvin M.; Marois, René (2002). "The dark side of visual attention". Current Opinion in Neurobiology 12 (2): 184–189. doi:10.1016/S0959-4388(02)00309-4. PMID 12015235.
- William Saletan (October 23, 2008). "The Mind-BlackBerry Problem: Hey, you! Cell-phone zombie! Get off the road!". slate.com. Retrieved 2009-03-26.
<urn:uuid:9c6875d8-7391-4bb5-bad5-14cff121f288>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Inattentional_blindness
2013-05-22T08:12:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.942186
4,394
The spolia opima ("rich spoils") were the armor, arms, and other effects that an ancient Roman general stripped from the body of an opposing commander slain in single combat. The spolia opima were regarded as the most honorable of the several kinds of war trophies a commander could obtain, which also included enemy military standards and the beaks of warships. The Romans recognized only three instances in which spolia opima were taken. The precedent was set in Rome's legendary history when in 752 BC Romulus defeated and stripped Acro, king of the Caeninenses, following the Rape of the Sabine Women. In the second instance, Aulus Cornelius Cossus obtained the spolia opima from Lar Tolumnius, king of the Veientes, during Rome's semi-legendary early history. The third and most historically grounded instance came in 222 BC, when Marcus Claudius Marcellus, consul that year, stripped the Celtic warrior Viridomarus, a king of the Gaesatae.
The ceremony of the spolia opima was a ritual of state religion that was supposed to emulate the archaic ceremonies carried out by the founder Romulus. The victor affixed the stripped armor to the trunk of an oak tree, carried it himself in a procession to the Capitoline, and dedicated it at the Temple of Jupiter Feretrius.
Imperial politics
During the earliest years of the rise of Augustus (still known as Octavian at the time), Marcus Licinius Crassus (consul 30 BC) defeated an enemy leader in single combat in Macedonia and was eligible to claim the honor of spolia opima. This Marcus Crassus was the grandson of the triumvir Marcus Crassus, who had died in the disastrous Battle of Carrhae in 53 BC. His illustrious political lineage made him a potential rival to Octavian, who blocked the honors. Crassus may also have been the last Roman outside the imperial family to be awarded the honor of a triumph.
References
- Livy, Ab urbe condita, 1:10
- J. W. Rich, "Drusus and the Spolia Opima," Classical Quarterly 49.2 (1999), p. 545.
- Rich, "Drusus and the Spolia Opima," p. 545.
- Ronald Syme, The Roman Revolution, p. 308.
- Ronald Syme, The Augustan Aristocracy (Oxford University Press, 1989), pp. 273–274. The sources are not entirely clear as to whether Crassus was actually allowed to celebrate his triumph, virtually the only honor his grandfather never gained.
<urn:uuid:90fa5adb-8a43-4a0c-9cda-90c52677c718>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Spolia_opima
2013-05-22T08:34:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93936
650
The Stockholm Observatory (Swedish: Stockholms observatorium) is an astronomical institution in Stockholm, Sweden, founded in the 18th century and today part of Stockholm University. Its history is connected to two actual historical observatory complexes in the Stockholm area. The first observatory was established by the Royal Swedish Academy of Sciences on the initiative of its secretary Per Elvius. Construction, according to designs by the architect Carl Hårleman, was begun in 1748, and the building was completed in 1753. It is situated on a hill in a park nowadays named Observatorielunden. The first head of the observatory was Pehr Wilhelm Wargentin. Later heads of the observatory include Hugo Gyldén and Bertil Lindblad. This 18th-century observatory today functions as a museum. A newer observatory was built in Saltsjöbaden outside Stockholm and completed in 1931 (the architect this time being Axel Anderberg). More recent astronomical observations, however, are almost exclusively being done in observatories outside Sweden and closer to the equator. The research institute was transferred from the Academy to the university in 1973 and is since 2001 housed in the AlbaNova University Centre. - Observatory Museum (The Old Stockholm Observatory) - Stockholm Observatory, official website of new observatory
<urn:uuid:b8a993cf-7233-4dda-897a-f65a4bc65a0a>
CC-MAIN-2013-20
http://en.wikipedia.org/wiki/Stockholm_Observatory
2013-05-22T08:04:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937312
274
Some of the greatest art shows itself in the form of architecture. I believe it is important to discuss the influence art nouveau and art deco had on architecture and design because they are commonly overlooked. The artists of these time periods expressed themselves through building designs and the furniture that adorned them. Although the eras of these two periods of design are side by side, the influences, styles, and artists of art nouveau and art deco are very different. Despite this, art nouveau and art deco both had a great influence on architecture and design at the beginning of the twentieth century.
Art nouveau is the first of these two periods. It lasted from the 1880s until the 1910s. The period was a protest against industrialization and mass production. Art nouveau means "new art," a name taken from the Paris art gallery Maison de l'Art Nouveau. Art nouveau is influenced by nature's curvy, flowing lines. The pieces hold a sense of fluidness and fantasy, evident in the natural elements seen in the works, such as flowers and vines. Most of the works created were used for ornamentation in buildings and design. As well as in decoration, the works of art nouveau artists may also be seen in books as illustrations and advertisements. One of the most influential artists of this time is Alphonse Mucha, known for his advertisements for Sarah Bernhardt and various mass-produced products. Another well-known artist is Louis Comfort Tiffany; Tiffany lamps are both popular and prized, even today. Both of these artists took lines and forms straight from nature to accent their works. This is more evident in Mucha's works, since he uses actual flowers and stars to accent his pieces, whereas Tiffany only uses the inspiration of nature's fluid lines to accentuate his.
On the timeline, art deco follows art nouveau, its period running from the 1920s until the 1930s. Art deco was a rebellion against art nouveau. Although it adopted art nouveau's curved lines, art deco also used geometric shapes. Art deco artists designed their works so that they could be mass-produced, unlike the works of art nouveau artists. The greatest influence on art deco came from the cubist movement, and the style received its name from the Exposition Internationale des Arts Décoratifs et Industriels Modernes, a Parisian design exhibition held in 1925. Art deco pieces were mostly furniture, pottery, jewelry, and fabrics. The pieces were elegant and made both of industrial materials, such as plastic and chrome, and of expensive, lush materials, like ivory and silver. One of the greatest architectural achievements of the art deco period is the Empire State Building in New York City, designed by the architectural firm Shreve, Lamb, and Harmon. Even today, the Empire State Building survives as a classic example of art deco design.
Art deco and art nouveau were both very important design movements during the early twentieth century. The architectural and design movement started with art nouveau. Since art nouveau was a rebellion against mass production, most of the pieces of furniture were hand-made. This caused many of the pieces to be expensive, making the works available only to the very rich. It was because of this that art nouveau lost the public's interest. Art deco followed shortly after to pick up where art nouveau left off. Art deco artists borrowed the smooth and flowing lines from art nouveau, but also incorporated geometric designs.
Their pieces were designed specifically to be mass-produced, whereas art nouveau pieces were hand-made. The works of art deco crafters often gave a sense of speed and industrialization, mirroring the attitude of the time period. Comparatively, art nouveau artists were successful in shying away from the Industrial Age, whereas art deco artists were successful in using the Industrial Age to their advantage.
The influence that art nouveau and art deco had on the twentieth century is very evident in architecture and design. Their marks can be seen on buildings, furniture, and decorations. Art nouveau and art deco are commonly overlooked in this respect, which is why I found it necessary to write about them. The two periods have so many differences, and yet they borrow so much from each other that they are also alike. The expressions the artists create through these styles of art leave the viewer with a sense of awe and beauty.
<urn:uuid:719b64e8-a090-4b36-8382-0feb15fadc8e>
CC-MAIN-2013-20
http://everything2.com/title/Art+Nouveau+Vs.+Art+Deco
2013-05-22T08:21:56Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.968165
986
Unlike the middle colonies, New England did not receive a stream of unstable migrants; it received more like a trickle. The rocky soil of the northern climate did not encourage widespread farming. Weather ranged from one extreme to the other, with sweltering heat in the summer and bitter cold in the winter. This was a fortunate boon to Yankee ingenuity, however, because it encouraged the diversification of agriculture beyond staple products, unlike in the South. New Englanders were skilled merchants, lumberers, fishers, shipsmen, and industrial masters of the fledgling colonial enterprise. Religion was another factor that kept New Englanders isolated and homogenous, with the sulfurous Puritan sermon dominating the spiritual landscape straight through to the Revolutionary War.
These traits had a major influence on the establishment of New England villages. Unlike in the Chesapeake, where establishment was random and somewhat haphazard, Yankee villages were required to go through rigorous processes for their establishment. A new town was to be legally chartered by colonial authorities, and town fathers, elderly gentlemen, took charge of the distribution of land. The colonial legislature would grant a large parcel to them, and they would then divvy it out by merit as they saw fit. Everything was done in an orderly fashion, with each family receiving several parcels of land, including a woodlot for fuel, a tract suitable for growing crops, and another for pasturing animals. Also constructed promptly were a meetinghouse, serving the needs of worship for the fiercely religious Puritan settlers, and a town hall. The village green was the final central addition, suitable for drilling the local militia.
An interesting note is that all towns of more than fifty families were required by colonial law to provide elementary education. Because of this, literacy was widespread throughout New England in the pre-Revolutionary days, unlike in other areas of the country. The oldest corporation in America, Harvard College, was established in New England in 1636 as a place of training for local ministers, an early testament to the importance of education for New Englanders. Primers had heavy religious content to supplement their educational value, intended to promote the continued propagation of "God's people."
<urn:uuid:79b04539-872e-43a5-8291-589614a03b70>
CC-MAIN-2013-20
http://everything2.com/title/The+colonial+New+England+village
2013-05-22T08:03:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.971705
453
There are some advantages to having a warmer-than-normal winter season, but this isn’t one of them: tick season. It started early this year due to the mild winter, and experts are warning that tick-borne diseases such as Lyme disease, Rocky Mountain spotted fever, tularemia and others may be on the increase as a result. It’s a serious warning, too, because ticks are the leading disease carriers in the United States, while mosquitos rank number one on the global scale. Take Lyme disease, for example, which sometimes, but not always, manifests a characteristic “bull’s eye” rash. Lyme disease is on the rise so much that during a two-year period the number of cases increased by 77 percent. It’s no respecter of persons, either. Even famous folks have had Lyme disease, including former President George W. Bush, Michael J. Fox, Christie Brinkley and Richard Gere. Diagnosis of Lyme disease can be elusive, although symptoms include fatigue, headaches, fever, chills, muscle or joint pain, mental confusion, swollen lymph nodes as well as neurological problems. It is often mistaken for chronic fatigue, fibromyalgia, lupus, multiple sclerosis, Parkinson's, Alzheimer's, arthritis or psychiatric disorders. The truth is that ticks may be tiny—freckle-sized—but they can cause big health problems. You might say that ticks make up a large force to be reckoned with, too, since there are more than 800 species of ticks worldwide. Ticks aren’t classified as insects, either. They’re arthropods, as are spiders, and their bite/saliva can contain toxins, organisms (such as protozoa, bacteria or viruses) or other secretions that can make you sick. There are two families of ticks to be aware of—hard ticks and soft ticks. Hard ticks have a tough back and can attach themselves to the host and feed for hours or days. Hard ticks usually transfer any diseases at the end of their meal, when the tick is full of blood. Soft ticks, on the other hand, have more rounded bodies and do not have the hardened back plate like hard ticks. Soft ticks usually feed for less than one hour and can transmit disease in less than a minute! That’s why we’re having this tick talk. Interestingly, ticks like to hang out in low brush, which allows them to come into contact with a potential host. If you lean against a tree or sit on an old log, then you might speedily pick up a tick—in about 30 seconds. You’ll want to be especially cautious if you bike or hike in or near wooded areas, fields or trails. Avoid sitting in the grass or weeds as well. You’ll also want to wear white socks, long pants tucked into your socks or boots, long-sleeved shirts and a cap—and no sandals or open-toed shoes—when you’re in tick territory. After getting back home, check your clothing right away and shower within two hours, which can reduce your risk of being bitten. Additionally, putting your clothes in the dryer for at least an hour can kill ticks. Be sure to check yourself and others, including pets, for ticks, paying close attention to the scalp. And remember that ticks aren’t picky eaters. Any human, pet or warm-blooded animal will do because they need only the blood to survive. Be smart. Protect yourself and your loved ones from ticks this season.
<urn:uuid:dabb5106-053a-429d-8991-2691b34d921f>
CC-MAIN-2013-20
http://gardenoflife.com/Our-Company/Articles/Daily-Health-Article-Display/ContentPubID/872/settmid/4862.aspx
2013-05-22T08:11:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.949596
745
No matter how many signal relay towers blanket your city, the quality of your wireless reception can change drastically—sometimes within a few steps. But MIT researchers think they've figured out how to improve reception using error correcting codes that work regardless of signal noise.
Error correcting codes (ECC) are a means of encoding data so that it can be transmitted without losing fidelity, regardless of the communication channel's level of noise. A typical approach sends an encoded message, or codeword, sized to the expected level of noise on a channel—the more noise in the channel, the longer the codeword needs to be. The problem is that on noisy channels the codeword becomes prohibitively long, and on channels with fluctuating noise levels the codeword may be too short to ensure proper transmission of the data.
MIT's solution to this problem is simple—use a single long codeword broken into chunks. As Gregory Wornell, a professor in the Department of Electrical Engineering and Computer Science at MIT, explains:
"The transmission strategy is that we send the first part of the codeword. If it doesn't succeed, we send the second part, and so on. We don't repeat transmissions: We always send the next part rather than resending the same part again. Because when you marry the first part, which was too noisy to decode, with the second and any subsequent parts, they together constitute a new, good encoding of the message for a higher level of noise."
Basically, the system creates a codeword, breaks it into sections, and sends each section sequentially until the receiving device has enough of the codeword to decode the message. It works whether the channel has just a little noise or a lot.
Although the system is still in the research phase, there are few obstacles standing between it and commercial deployment, said H. Vincent Poor, dean of the School of Engineering and Applied Science at Princeton University. "The codes are inherently practical," Poor told MIT News. "In fact, the paper not only develops the theory and analysis of such codes but also provides specific examples of practical constructions." [MITNews via TNW] Image: Tatiana Popova / Shutterstock
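The article stops short of showing how incremental transmission works in practice. The Python sketch below is only an illustrative toy, not the MIT construction Wornell describes: it stands in a plain repetition code, a majority-vote decoder, and a SHA-256 hash check for the real rateless code and its decoder, and every name in it (MESSAGE, MAX_COPIES, NOISE, and the helper functions) is an assumption made for the example. What it does preserve is the transmission strategy quoted above: keep sending further pieces of one long codeword, combine everything received so far, and stop as soon as decoding succeeds.

import hashlib
import random

# Toy sketch of incremental redundancy (not the MIT construction described
# above): the sender prepares one long "codeword" -- here just repeated
# copies of the payload bits -- and transmits it chunk by chunk until the
# receiver can decode. All names and parameters are illustrative assumptions.

MESSAGE = b"hello"
MAX_COPIES = 15   # total redundancy budget (the full-length codeword)
NOISE = 0.1       # bit-flip probability of the binary symmetric channel

def to_bits(data):
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def from_bits(bits):
    out = bytearray()
    for i in range(0, len(bits), 8):
        out.append(sum(b << j for j, b in enumerate(bits[i:i + 8])))
    return bytes(out)

def transmit(bits, flip_prob):
    # Pass one chunk of the codeword through a noisy channel.
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(chunks):
    # Combine every chunk received so far by per-bit majority vote.
    return from_bits([1 if sum(col) * 2 > len(chunks) else 0
                      for col in zip(*chunks)])

checksum = hashlib.sha256(MESSAGE).digest()  # lets the receiver detect success
payload = to_bits(MESSAGE)
received = []

for sent in range(1, MAX_COPIES + 1):
    received.append(transmit(payload, NOISE))  # send only the next chunk
    guess = decode(received)                   # re-decode using all chunks so far
    if hashlib.sha256(guess).digest() == checksum:
        print(f"decoded {guess!r} after {sent} chunk(s)")
        break
else:
    print("failed to decode within the redundancy budget")

In a real system the chunks would come from a proper rateless or punctured channel code rather than literal repetition, and the success test would be the decoder's own error check (for example a CRC) rather than a hash comparison, but the stop-as-soon-as-decodable loop is the same idea.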
<urn:uuid:544793ec-0923-4c70-a766-bf1d35712433>
CC-MAIN-2013-20
http://gizmodo.com/5884808/mits-error-correcting-codes-will-fix-your-crappy-wireless-reception
2013-05-22T08:12:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.94173
456
DATE or DATE PALM (Phoenix dactylifera), a tall tree of the natural order Palmaceae. It is most notable for its fruit, which is an important part of the daily food of the natives of western Asia and northern Africa, where the tree is indigenous and from whence large quantities of dried dates (the fruit) are exported to other countries. The tree is also cultivated in some other warm countries, including China, Italy, France, Spain and parts of the United States — Florida, New Mexico, Arizona and California, in the last three of which a promising industry seems to be getting started. The tree, which attains a height of 100 feet, and bears fruit for one or two centuries, is, like other palms, useful in many ways; nearly all its parts are used for something. Date seeds are roasted and used as a substitute for coffee, or ground and pressed for oil, and the pomace used for stock food. The leaves are used for matting, baskets, thatch, etc.; the terminal bud as a vegetable; the wood for fence making and other purposes where great strain is not expected; the fibre of the bark for making rope; but the fruit, which contains proteids, gum and pectin, and is particularly rich in sugar, is the most important part. It is one of the principal sources of wealth in the countries where the date is indigenous. It is believed that the leaves of this palm are the ones referred to in biblical writings, and at the present time the leaves of this palm are largely used upon Palm Sunday among Christians living where the trees abound. The leaves were also symbolical of victory, beauty, etc., among the ancient Greeks and Jews. Since the male and female flowers are borne on separate trees, enough specimens of staminate flowering trees must be planted to fertilize the blossoms on the others, which alone produce fruit. Since the plants obtained from seeds are of unknown sex until they flower, and since the proportion of inferior seedlings to seedlings which bear superior fruit is very large, the date is propagated by means of suckers, since these retain the characteristics of the parent. The young plants are set in sunny situations, in almost any kind of soil where water is within reach of the roots or can be supplied by irrigation. The sandy, alkaline soils of deserts seem more satisfactory than the richer soils necessary for the growth of general crops. The trees are very difficult to make grow after transplanting, because they demand much attention, especially as to watering. A loss of 50 per cent is not uncommon even with the best of attention. The surviving trees should commence to bear when about eight years old. The fruit is borne in clusters which hang from the thick crown of large pinnate leaves. Individual trees produce from 300 to 500 pounds or more of fruit in a season. The fruit is eaten both fresh and dried and is divided into soft and dry dates. They vary in color, quality and size.
<urn:uuid:7b2e51dd-0a31-40f3-9b26-d766165f4bc1>
CC-MAIN-2013-20
http://gluedideas.com/content-collection/encyclopedia-americana-8/Date-or-Date-Palm.html
2013-05-22T07:54:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966312
620
I want to get the message out about bullying and cyber-bullying. Bullying can hurt and also kill people. Bullying can also hurt friends or families. By creating a message to stop bullying, we can help people get better self-esteem. There are different types of bullying, like face to face, cyber, blackmailing, etc. These are the types of bullying that would cause people to feel depressed and stressed out. Suicide continues to be one of the leading causes of death among children under the age of 14. Over half, about 56 percent, of all students have witnessed a bullying crime take place while at school and don't do anything about it. A reported 15 percent of all students who don't show up for school do so out of fear of being bullied at school.
Nowadays, when people go to school, it's all about status. People have got to have name-brand clothes. If not, you get judged in school, and you start to get teased. In school, there is always that one kid who gets picked on. In some cases, it leads to suicide. It's the same way with cyber-bullying, except it's through technology such as cellphones, messaging or Facebook. They can also get picked on while on the computer. They do this by sending messages like they don't like you, or they want to fight, or even say you're a loser. Stuff like that could hurt another person, even if they don't show it. School is a place to learn, not to tease people or make fun of them. … By stopping bullying, we could prevent suicide, depression and stress.
My solution to this issue is to care for each other. Even if they don't show that it hurts, on the inside it hurts a lot. If you see someone being bullied, stand up for that person. See someone all by themselves? Go over there, make new friends, pull them into your friend group. And if you get bullied a lot, prove everyone wrong that you're not a loser. Just shut them down with respect. Just humble yourself, because you know you're better than them. If you know you're not a loser, then you should not care about what people say about you.
Student, Honokaa High & Intermediate
The cost of electricity is a never-ending complaint. However, I have not heard anyone questioning the accuracy and honesty of the HELCO meters. If my experience is typical, HELCO has had their hand in my pocket, unknown to me, for years. My supposed kilowatt-hours per day has been creeping up, slowly at first, for quite a while. I assumed it was because my appliances were older. I was careless; maybe I cooked more, or wasted hot water, etc. But, over the past year, the daily average has gone from about 16 kilowatt-hours to over 18 — even after I replaced my aging refrigerator/freezer. I requested a check on my meter accuracy, and a nice technician installed a new "smart meter" that is digital and easy for anyone to read. It's been over two weeks, and the average is now 11.85 kilowatt-hours per day, and that includes the extreme use of my stove top and oven for Thanksgiving.
The tech from HELCO called about five days after he took the old meter for testing. He reported that it was working fine, and the problem must be something else. I assume the testing equipment is programmed to reflect the same error that the mechanical meters have built into them. Why else would there be a discrepancy of over 33 percent? I believe there is a surcharge for higher usage also, which makes it even more attractive for HELCO to jack up the readings on the meters.
The meters are provided by the power companies, and just like computerized voting machines with no paper trail, how can the public know they are accurate? We cannot! According to my December 2011 bill, I used 12.77 kilowatt-hours per day, even though I was in the hospital for three of those weeks. The Nov. 7, 2012, bill says I used 18.82 kilowatt-hours per day. Now, with a new meter, it is 11.85 kilowatt-hours per day. That is a difference of nearly 7 kilowatt-hours every day. I will continue daily readings of my new meter. If it suddenly changes, I will call a lawyer, not HELCO. I am also curious if others have had similar experiences, and have kept records. If your kilowatt-hours per day have been creeping up, insist on a meter check-up — and track the kilowatt-hours. If this is a regular practice, it would help all HELCO customers to know it. A class-action suit would not be out of the question.
Carol R. Campbell
<urn:uuid:5f93b7ed-f10f-4523-a44f-e460177808e7>
CC-MAIN-2013-20
http://hawaiitribune-herald.com/sections/commentary/your-views/your-views-december-2.html
2013-05-22T08:34:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.967354
1,040
Common name: Black Currant
Latin name: Ribes nigrum
Other names: European Black Currant, Quinsy Berries
Habitat: Black Currant is native to central and northern Europe and northern Asia.
Description: Black Currant is a perennial shrub growing up to 2 meters high. It has alternate, deeply lobed, simple leaves with five lobes and toothed margins. Flowers are small, five-petaled, growing in short clusters; they are usually red-green to brown in color. The fruit is a small shiny berry, about 1 centimeter in diameter. Berries are dark purple, almost black in color, and highly nutritive.
Parts used: Berries, leaves
Useful components: Potassium, phosphorus, iron, pantothenic acid, vitamins C and E, calcium, magnesium, zinc, unsaturated fatty acids (alpha-linolenic acid and gamma-linolenic acid), anthocyanins
Medicinal use: Black Currant has anti-inflammatory, antimicrobial, anti-cancer, anti-oxidant and diuretic properties. Tea made from the leaves is used in the treatment of arthritis, urinary dysfunctions, coughs and diarrhea. Berries are said to be effective in cases of rheumatism, gout and arthritis. Due to its high vitamin C content, Black Currant is very often used in the treatment of diseases related to the cardiovascular system. It can help prevent cardiac insufficiency and vascular problems, and it can also reduce arterial hypertension. Several studies suggest that phytochemicals in the fruit have the ability to inhibit inflammation mechanisms suspected to be at the origin of heart disease, cancer, microbial infections and neurological disorders.
Safety: Some herbs can react with certain medications. Therefore, it is advisable to consult your doctor before consuming any herb.
<urn:uuid:938f42a7-d289-40a8-b67b-b950fe23d9ad>
CC-MAIN-2013-20
http://health-from-nature.net/Currant,_Black.html
2013-05-22T08:01:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.900362
383
Going for your usual run or bike ride in hot temperatures can bring scary health hazards if you aren't adequately prepared. Heat can place strain on the cardiovascular system and cause serious illnesses such as dehydration, heat cramps, heat exhaustion, or heat stroke. Heat exhaustion can develop from enduring many days of extreme temperatures and failing to properly rehydrate. If untreated, heat exhaustion can lead to heat stroke, a life-threatening condition in which body temperature skyrockets to 104 degrees or higher and the body stops sweating and is unable to cool itself, according to the Centers for Disease Control and Prevention.
Sweating is a main way that the body cools itself; it's the result of water being brought to the surface of the skin through sweat glands, says Michael Bergeron, a professor in the department of pediatrics at the University of South Dakota's Sanford School of Medicine. We only begin to cool down once sweat evaporates; when it has trouble evaporating, such as in humid weather, our risk for overheating goes up, he explains. Signs of heat illness include nausea, cramps, headache, dizziness, lack of appetite, fatigue, and dark or amber urine, which signals dehydration. Exercising indoors to stay cool may not always be an option. But you can still get in a good workout on warm-weather days by taking these precautions:
Acclimatize at low intensity
Adjust your workout when the heat wave hits. "The biggest mistake people make is that they don't work up to the heat and acclimate. It takes about one to two weeks to acclimatize to perform the best in the heat," says Samantha Clayton, a personal trainer and track coach based in Malibu, Calif., who competed as a sprinter in the 2000 Olympic Games. She says that Olympic athletes arrive early at their events to acclimate to local temperatures. To prepare for and adapt to hot weather, begin with shorter distances at an easy effort, says Robbie Ventura, a former professional cyclist on the U.S. Postal Service cycling team. Next, lengthen the easy workouts and, after about four or five rides (or runs), you can increase intensity. As you adapt, your body will hold more moisture and become more effective at cooling itself.
Don't wait until you're thirsty to hydrate. Two hours before heading out the door, down 16 ounces of an electrolyte-fueled drink such as Gatorade, Clayton says. Drink another 6 to 8 ounces of water 10 to 15 minutes into your workout (yes, you should always run with water), she says. Even if you exercise briefly, replace fluids early in your workout. Drinking electrolytes, found in sports drinks, will replace sodium and potassium that are lost when sweating. If you are low on electrolytes, you may experience heart palpitations, nausea, and headaches. Evidence shows that even a 1 to 2 percent loss in body weight from fluid loss can boost your core temperature, putting you in the danger zone for dehydration. Weigh yourself before and after exercising to determine any amount of weight loss, and then drink 16 to 24 ounces of a sports drink to replace fluids for every pound lost. (Take note: This is not a safe way to lose weight!)
Keep cool even before you hit the pavement. Olympic athletes will take a cold shower, cool off in an air-conditioned room, or even sit in an ice bath before exercising in the heat, Bergeron says. Eating crushed ice before a long workout will help delay a rise in body temperature as well, he adds.
To stay cool during long rides and hill climbs, Tour de France cyclists put ice cubes in a sock, which they place on their backs, says Ventura, who spoke to U.S. News from France, where he was attending the race. To wick away moisture, the cyclists also wear light-color jerseys and unzip them to allow for ideal circulation.
Rise and shine early
Be like an Olympian, and get an early start. Pre-dawn runs or bike rides can help you beat the heat, and you'll benefit from better air quality before the mercury rises. "You need quality oxygen to help cool you down, so take a nice deep breath before you exercise," Clayton says. "Then your blood can do its work and get to the surface of your skin to cool you down."
Slather on sunscreen
Prevent sunburns, which can raise your body temperature. Apply sunscreen, but be sure it's not so thick that it blocks your pores and prevents you from sweating.
<urn:uuid:73027dc6-13ff-43a8-96c0-32a523c8ee45>
CC-MAIN-2013-20
http://health.usnews.com/health-news/articles/2012/07/25/hot-weather-workout-tips
2013-05-22T08:34:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944561
945
Abstract.---Accuracy of phylogenetic methods may be assessed in terms of consistency, efficiency, and robustness. Four principal methods have been used for assessing phylogenetic accuracy: simulation, known phylogenies, statistical analyses, and congruence studies. Simulation studies are useful for studying accuracy of methods under idealized conditions, and can be used to make general predictions about the behavior of methods if the limitations of the models are taken into account. Studies of known phylogenies can be used to test predictions from simulation studies, which provides a check on the robustness of the models (and may suggest refinements for future simulations). Statistical analyses allow general predictions to be applied to specific results, facilitate assessments as to whether or not sufficient data have been collected to formulate a robust conclusion, and ask whether a given data set is any more structured than random noise. Finally, congruence studies of multiple data sets assess the degree to which independent results agree, and thus the minimum proportion of the findings that can be attributed to an underlying phylogeny. These different methods of assessing phylogenetic accuracy are largely complementary, and the results are consistent in identifying a large class of problems that are amenable to phylogenetic reconstruction. [Phylogeny; accuracy; simulations; experimental evolution; statistics; congruence; consistency; efficiency; robustness]
"The major problem in studying the relative efficiencies [of phylogenetic methods] is that the true tree is usually unknown for any set of real organisms or any set of real DNA sequences, so that it is difficult to judge which tree is the correct one. However, this problem can be avoided if we use computer simulation" (Nei, 1991:90).
"The evolutionary models used in many simulation studies are exceedingly simple, and even though they will surely become more sophisticated (e.g., more 'realistic') in the future, such studies will still face a credibility gap" (Miyamoto and Cracraft, 1991:11).
"[T]here are some fundamental philosophical and empirical differences between simulations of fictitious taxa and their DNA sequences, on the one hand; and real-world taxa and their sequence characteristics, on the other" (Miyamoto and Cracraft, 1991:11).
"Although I am skeptical that the results of [experimental phylogenies] 'directly support the legitimacy of methods for phylogenetic estimation,' it remains to be seen what experimental phylogenetics can teach us about the problem of phylogenetic inference" (Sober, 1993:89).
"As DNA sequences accumulate, there will be an increasing demand for statistical methods to estimate evolutionary trees from them, and to test hypotheses about the evolutionary process" (Felsenstein, 1981:368).
"It is remarkable that, in a century which has seen such a large growth in the application of statistics to the natural sciences, the fundamental issues of statistical inference have not been resolved. There are not many more statisticians than opinions as to how to assess rival hypotheses in the light of data" (Edwards, 1969:1233).
"[E]xtensive congruence among branching patterns derived from independent data sets and by different methods of analysis is unlikely to occur for any reason other than phylogeny" (Sheldon and Bledsoe, 1993:256-257).
"[T]here may indeed be substantial congruence between the two data sets, but that 'congruence' is not quite what we had hoped it would be" (Swofford, 1991:326).
"On that happy day when molecular systematists achieve the goal of adequate sampling in terms of both taxa and sequence length..., and when the computer and the program capable of analysing the alignment of life exist, there are two possible extremes: 'one tree,' or '10999 equally parsimonious trees'" (Patterson et al., 1993:180). "'I checked it quite thoroughly,' said the computer, "and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you═ve never actually known what the question is'" (Adams, 1979:181). Jim Bull, Mike Charleston, Paul Chippindale, Keith Crandall, Tim Crowe, Cliff Cunningham, A. W. F. Edwards, Jotun Hein, John Huelsenbeck, Mike Miyamoto, Barbara Mable and an anonymous reviewer read this manuscript and offered useful suggestions. I thank Mike Miyamoto for inviting me to prepare an introduction to this series of papers on phylogenetic accuracy. My studies on phylogenetic accuracy have been supported by grants from the National Science Foundation. Adams, D. 1979. The hitchhiker's guide to the galaxy. Crown Publishers, New York. Adams, E. N., III. 1972. Consensus techniques and the comparison of taxonomic trees. Syst. Zool. 21:390-397. Allard, M. W., and M. M. Miyamoto. 1992. Testing phylogenetic approaches with empirical data, as illustrated with the parsimony method. Mol. Biol. Evol. 9:778-786. Archie, J. W. 1989. Phylogenies of plant families: A demonstration of phylogenetic randomness in DNA sequence data derived from proteins. Evolution 43:1796-1800. Atchley, W. R., and W. M. Fitch. 1991. Gene trees and the origins of inbred strains of mice. Science 254:554-558. Bandelt, H.-J., and A. W. M. Dress. 1992. Split decomposition: A new and useful approach to phylogenetic analysis of distance data. Mol. Phylogenet. Evol. 1:242-252. Barrett, M., M. J. Donoghue, and E. Sober. 1991. Against consensus. Syst. Zool. 40:486-493. Baum, B. R. 1984. Application of compatibility and parsimony methods at the infraspecific, specific, and generic levels in Poaceae. Pages 192-220 in Cladistics: Perspectives on the reconstruction of evolutionary history. Columbia Univ. Press, New York. Birnbaum, A. 1962. On the foundations of statistical inference. J. Am. Stat. Assoc. 57:269-326. Blanken, R. L., L. C. Klotz, and A. G. Hinnebusch. 1982. Computer comparison of new and existing criteria for constructing evolutionary trees from sequence data. J. Mol. Evol. 19:9-19. Bremer, K. 1988. The limits of amino-acid sequence data in angiosperm phylogenetic reconstruction. Evolution 42:795-803. Bremer, K. 1990. Combinable component consensus. Cladistics 6:369-372. Bull, J. J., C. W. Cunningham, I. J. Molineux, M. R. Badgett, and D. M. Hillis. 1993a. Experimental molecular evolution of bacteriophage T7. Evolution 47:993-1007. Bull, J. J., J. P. Huelsenbeck, C. W. Cunningham, D. L. Swofford, and P. J. Waddell. 1993b. Partitioning and combining data in phylogenetic analysis. Syst. Biol. 42:384-397. Charleston, M. A., M. D. Hendy, and D. Penny. 1994. The effects of sequence length, tree topology, and number of taxa on the performance of phylogenetic methods. J. Computation. Biol. 1:133-151. Chippindale, P. T., and J. J. Wiens. 1994. Weighting, partitioning, and combining characters in phylogenetic analysis. Syst. Biol. 43:278-287. Crandall, K. A. 1994. Intraspecific cladogram estimation: Accuracy at higher levels of divergence. Syst. Biol. 43:222-235. Crandall, K. A., A. R. templeton, and C. F. Sing. 1994. Intraspecific phylogenetics: Problems and solutions. 
<urn:uuid:2c71a916-2a89-4490-a758-c7d19c852728>
CC-MAIN-2013-20
http://hydrodictyon.eeb.uconn.edu/systbiol/issues/44_1/44_1_hillis.html
2013-05-22T08:34:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.701511
7,316
From Intelligent Perception What it means to puncture a hole in a surface is clear. But the meaning of a tunnel isn't as obvious. For example, the hole in a doughnut is visible, but the fact that there are two in a tire might escape some people. How many tunnels are there in a porous material? Suppose we drill 3 perpendicular holes through a cube: How many tunnels here? There are so many ways to enter and exit these structures! We can even try to list the seemingly reasonable answers: - 1, because they are all connected; - 3, by the number of times it was drilled; - 6, by the number of entrances; - 5 + 4 + 3 + 2 + 1 = 15, by the number of ways one can enter and exit the structure. As a related question, in the next image, how many tunnels do these wireframes have? To answer the question topologically, we "flatten" them: Then we see that there are only 3 holes in the pyramid and 5 in the cube. The issue becomes even more complicated if you want to describe a tunnel in a surface without the benefit of a bird's-eye view, from inside the surface. The issue becomes real when we need to understand the topology of our universe.
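A quick way to check the wireframe counts above is the cycle rank (first Betti number) of a graph: a connected wireframe with V vertices and E edges has E - V + 1 independent loops. The short sketch below is illustrative only; it assumes the pyramid in the example is a triangular pyramid (tetrahedron), and the function name is ours, not from the original page.

```python
def independent_loops(vertices: int, edges: int, components: int = 1) -> int:
    """Cycle rank (first Betti number) of a graph: E - V + C."""
    return edges - vertices + components

# Wireframes from the example above
print(independent_loops(vertices=4, edges=6))   # tetrahedral "pyramid" -> 3 holes
print(independent_loops(vertices=8, edges=12))  # cube wireframe        -> 5 holes
```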
<urn:uuid:4c866296-769d-4280-9133-598a82db8d58>
CC-MAIN-2013-20
http://inperc.com/wiki/index.php?title=Tunnels
2013-05-22T08:33:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961547
273
TerraPower is making great progress on its nuclear reactor design by using supercomputing clusters for computational modeling work. A calculation that takes all day to run on a desktop computer can run in one minute on our cluster. This past year, the TerraPower team has been heavily involved in engineering work and design with a confidence and speed that would not be possible without the use of a computing cluster. Rigorous modeling techniques provide intricate insight into the physics of the online cultivation of fuel that enables the unique fuel cycle of the Traveling Wave Reactor. Extensive computer simulations and engineering studies produced new evidence that a wave of fission moving slowly through a fuel core could generate a billion watts of electricity continuously for well over 50 to 100 years without enrichment or reprocessing. The high-fidelity results made possible by the advanced computational abilities of modern supercomputer clusters are the driving force behind one of the most active nuclear reactor design teams in the country. Our cluster contains 1,024 Xeon cores assembled on 128 blade servers. To break it down, each core is equivalent to the power of a desktop computer, and each of the 128 blades within the cluster has eight cores. So this cluster has over 1,000 times the computational ability of a desktop computer. Installed at the Lab Annex, the cluster features several built-in systems to protect the equipment and keep it running smoothly. Overheating is a major concern for the team because the temperature in the computer room can rise by 40 degrees Fahrenheit per minute without proper cooling. The cluster uses a cutting-edge cooling system: powerful air conditioners line the servers and blow chilled air to maintain operations. As a precaution, the cluster contains 20 minutes' worth of battery back-up power, which is enough to shut the cluster down safely during a power outage. These emergency batteries are the size of a small closet in order to serve the 40 kW of electricity the cluster uses during peak operation and the 25 to 65 kW consumed by the cooling system. Supercomputing clusters allow rapid and high-confidence studies of new, highly sustainable nuclear reactors, which will play an important role in our nation's current and future energy needs.
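As a rough sanity check on the figures above, the sketch below works through the implied arithmetic: the core count, the ideal speed-up over a single desktop-class core, and the energy the back-up batteries would have to supply over the 20-minute window. It is a simplified illustration; perfect parallel scaling and the assumption that the batteries carry both the cluster and the peak cooling load are ours, not claims from the original text.

```python
cores = 128 * 8                  # 128 blade servers x 8 Xeon cores each = 1,024 cores
desktop_runtime_min = 24 * 60    # "all day" on a single desktop-class core
ideal_cluster_runtime = desktop_runtime_min / cores
print(f"Ideal parallel runtime: {ideal_cluster_runtime:.1f} min")   # roughly one minute

# Battery back-up sizing: 20 minutes at peak cluster load plus peak cooling load
cluster_kw, cooling_kw = 40, 65
energy_kwh = (cluster_kw + cooling_kw) * (20 / 60)
print(f"Back-up energy required: {energy_kwh:.0f} kWh")             # about 35 kWh
```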
<urn:uuid:c9c467ae-ccd0-448e-9fd2-7769029f212e>
CC-MAIN-2013-20
http://intellectualventureslab.com/?p=536
2013-05-22T08:34:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.925226
441
Meaning of Frustration ↓ Frustration is one of the causes of stress. It arises when one's motivation to achieve a desired goal is blocked. For example, an employee wants to finish a report before the end of the day but finds that something or the other keeps interrupting him at work. This can lead to his frustration. Image Credits © Sybren A. Stüvel Types of Reactions to Frustration ↓ The reactions to frustration are also known as Defense Mechanisms. These defense mechanisms are so called because they try to defend individuals from the psychological effects of a blocked goal. When some employees get frustrated, they become tense and irritable. They experience an uneasy feeling in their stomach and also show various other reactions of frustration. Following are the various types of reactions to frustration :- - Withdrawal : Behaviours such as asking for a transfer or quitting a job. - Fixation : An employee blames others and superiors for his problems, without knowing the complete facts. - Aggression : Acting in a threatening manner. - Regression : Behaving in an immature and childish manner, possibly with self-pity (feeling sorry for oneself). - Physical Disorder : Physical ailments such as fever, upset stomach, vomiting, etc. - Apathy : Becoming unresponsive and uninterested in the job and in co-workers. Sources or Causes of Frustration ↓ Following are the main sources or causes of frustration :- - Environment : The workplace environment and the natural environment may both frustrate employees. For example, a breakdown in machinery or a lack of canteen facilities, or a wet rainy day or a hot sunny day, may prevent employees from performing their duties efficiently. - Co-workers : Co-workers may be a major source of frustration. They may place barriers in the way of goal attainment by delaying work, withholding work inputs, or presenting work poorly, affecting its quality. - Employee Himself : The employee himself is rarely recognised as a source of frustration. The employee may set goals higher than his abilities allow. - Management : Management may act as a source of frustration; for example, it may block the promotion of an employee due to changes in the organisation's promotional policies.
<urn:uuid:17e50e57-c621-44be-83e1-b3cacee60c48>
CC-MAIN-2013-20
http://kalyan-city.blogspot.com/2011/03/frustration-types-of-reaction-and.html
2013-05-22T08:02:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95651
460
Researchers at Harvard University and MIT wanted to see if a mathematical model developed to track and predict the spread of infectious diseases such as SARS and foot-and-mouth disease could also apply to the spread of happiness -- and found that it worked. They used data collected from 1,880 subjects in the Framingham Heart Study, a long-term research effort that has followed subjects since 1948 (and added some new ones along the way), giving them physical and emotional exams every two years. At each visit, subjects were classified as content, discontent or neutral. The researchers monitored how these emotional states changed over time and how these changes depended on the emotions of the people with whom the participants came into contact. When the information was put into a traditional infectious-disease simulation, slightly modified to reflect the unique qualities of emotional spread rather than actual disease, the researchers found a correlation between an individual's emotional state and those of the person's contacts. In other words, it appears that you can catch happiness. Or sadness. Moreover, the "recovery time" doesn't depend on your contacts at all, which is a hallmark of diseases but surprising in an emotional context, since continuing contact with happy or sad people could be expected to affect one's emotional state even after the initial "infection." People were found to "recover" (return to neutral) more quickly from discontent than from content; on average, a contentedness "infection" sticks around for 10 years, but it takes only five years to recover from discontent. While this may still seem like a long time, the work focused on long-term emotional states because they are more accurate measures of general life satisfaction than fleeting moods, which are already known to be contagious (think laughter). On the other hand, sadness is more contagious than happiness: A single discontent contact doubles one's chances of becoming unhappy, while a happy contact increases the probability of becoming content by only 11%. Researchers also found one way that emotions act differently than diseases -- they can arise due to events in your own life, such as a promotion or a disease diagnosis, rather than solely being "contagious." In another win for the good guys, it appears that happiness is more likely to come about spontaneously than is sadness. A report of the emotions-as-diseases research has been published in the Proceedings of the Royal Society B. -- Rachel Bernstein Photo: We may recover from sadness more quickly than we do from happiness, but it appears to be more infectious. Credit: Reuters
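To make the disease analogy concrete, here is a minimal sketch of how an SIS-style yearly update could encode the figures quoted above: contentment persisting about 10 years, discontent about 5, one discontent contact doubling the chance of becoming unhappy, and a content contact raising the chance of becoming content by 11%. The baseline "catch" rates, the one-year time step, and the function itself are illustrative assumptions, not the actual model used in the study.

```python
import random

RECOVER_FROM_CONTENT = 1 / 10      # contentment persists ~10 years on average
RECOVER_FROM_DISCONTENT = 1 / 5    # discontent persists ~5 years on average
BASE_CONTENT = 0.05                # assumed yearly baseline "catch" rates
BASE_DISCONTENT = 0.05

def yearly_update(state, contacts):
    """Advance one person's emotional state by one year in a toy SIS-style model."""
    if state == "content":
        return "neutral" if random.random() < RECOVER_FROM_CONTENT else state
    if state == "discontent":
        return "neutral" if random.random() < RECOVER_FROM_DISCONTENT else state
    # Neutral person: may "catch" a state from contacts or spontaneously
    p_discontent = BASE_DISCONTENT * (2.0 if "discontent" in contacts else 1.0)
    p_content = BASE_CONTENT * (1.11 if "content" in contacts else 1.0)
    roll = random.random()
    if roll < p_discontent:
        return "discontent"
    if roll < p_discontent + p_content:
        return "content"
    return "neutral"

print(yearly_update("neutral", ["content", "neutral"]))
```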
<urn:uuid:901d0c5a-1e57-4129-828a-f1db76985303>
CC-MAIN-2013-20
http://latimesblogs.latimes.com/booster_shots/depression/
2013-05-22T07:53:58Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.966313
518
National Aeronautics and Space Administration, more commonly known as NASA, is a U.S. government agency that manages the United States' nonmilitary space program. It also conducts research in aircraft. NASA employs thousands of workers and engages even more through contracts with private companies. NASA's headquarters are located in Washington D.C., and it has smaller field offices in states across the country. NASA's programs drive the design of new technology that could improve space travel. NASA also promotes international cooperation through projects like the International Space Station (ISS) and provides programs to educate the public. NASA, Public domain. NASA's history started with the National Advisory Committee for Aeronautics (NACA), a federal aviation research group started in 1915. Soon after the Soviet Union launched the first artificial satellite in 1957, United States President Eisenhower placed NACA in charge of U.S. space exploration. The U.S. Congress reorganized the group as NASA in 1958. NASA gained importance due to the competition with the Soviet Union during the Cold War. In 1961, United States President John F. Kennedy told Americans that he wanted to have a man land on the moon by 1970. NASA took on this challenge. During this time period, people called the competition between the U.S. and the Soviet Union for space exploration the Space Race. The Soviet Union and the U.S., through NASA, traded the lead in the race many times, but NASA won by putting the first man on the moon. In 1969 astronaut Neil Armstrong was the first man to walk on the moon. After this, NASA continued its space explorations, focusing mainly on the moon and moon landings. Since the 1980s NASA has been working on reusable spacecraft, including space shuttles, and on places in space where people can live and work. Currently NASA is working with other nations to build the International Space Station. NASA has also launched Mars rovers and other planetary exploration devices.
<urn:uuid:9c3a70ff-6835-4b38-939f-a60cc0784178>
CC-MAIN-2013-20
http://library.thinkquest.org/07aug/00861/nasa.htm
2013-05-22T08:19:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956062
412
Chapter 5: Circles and Loci First, we are going to talk about the circle. A circle is the set of all points in a plane that are a given distance from a fixed point in that plane. The fixed point is the center of the circle. A segment from the center to any point on the circle is a radius. A circle is named by its center; the circle with center point O is called circle O. When two circles have the same radius, they are called congruent. The set of all points whose distance from the center is less than the radius is called the interior of the circle; the set of all points whose distance from the center is greater than the radius is called the exterior of the circle. Two collinear radii form a diameter, which is the longest distance across the circle. A diameter is also a kind of chord, a chord being a segment whose endpoints are on the circle. When you extend a chord in both directions, you get a secant: a line that intersects the circle in two points. A tangent, by contrast, is a line that intersects the circle at exactly one point; the point of intersection is called the point of tangency. Following are some additional facts about chords: If a line or segment contains the center of a circle and is perpendicular to a chord, then it bisects the chord. In the same circle or in congruent circles, congruent chords are equidistant from the centers. In the same circle or in congruent circles, chords that are equidistant from the centers are congruent.
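A short worked example of the chord facts above: if a chord of a circle with radius r lies at perpendicular distance d from the center, the perpendicular from the center bisects it, so each half has length sqrt(r^2 - d^2) and the whole chord has length 2*sqrt(r^2 - d^2). The snippet below is an illustrative sketch; the function name is ours, not from the chapter.

```python
import math

def chord_length(radius: float, dist_from_center: float) -> float:
    """Length of a chord lying at a given perpendicular distance from the center."""
    if dist_from_center > radius:
        raise ValueError("distance from the center cannot exceed the radius")
    return 2 * math.sqrt(radius**2 - dist_from_center**2)

r = 5.0
print(chord_length(r, 0.0))   # 10.0 -> a diameter, the longest chord
print(chord_length(r, 3.0))   # 8.0
# Chords at equal distances from the center have equal lengths (they are congruent)
print(chord_length(r, 3.0) == chord_length(r, 3.0))  # True
```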
<urn:uuid:5c6fe4f3-abb5-408e-b60e-cfe11505eec9>
CC-MAIN-2013-20
http://library.thinkquest.org/16284/g_circle_1.htm
2013-05-22T08:27:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946852
376
The American Revolution and Its Era: Maps and Charts of North America and the West Indies, 1750-1789 The American Revolution and Its Era: Maps and Charts of North America and the West Indies, 1750-1789 represents an important historical record of the mapping of North America and the Caribbean. The maps and charts in this online collection number well over two thousand different items, with easily as many or more unnumbered duplicates, many with distinct colorations and annotations. Almost six hundred maps are original manuscript drawings from famous mapmakers of the period, three of the best eighteenth-century map publishers in London, and other personal collections. In a hurry? Save or print these Collection Connections as a single file. These online exhibits provide context and additional information about this collection. These historical era(s) are best represented in the collection although they may not be all-encompassing. - Colonial Settlement, 1492-1763 - The American Revolution, 1763-1783 Related Collections and Exhibits These collections and exhibits contain thematically-related primary and secondary sources. Browse the Collection Finder for more related material on the American Memory Web site. - An American Time Capsule - A Century of Lawmaking for a New Nation, 1774-1873 - Continental Congress and Constitutional Conventions, 1774-1789 - George Washington Papers, 1741-1799 - Map Collections, 1500-2003 - Thomas Jefferson Papers, 1606-1827 - Words & Deeds in American History Recommended additional sources of information. There are currently no other resources for this collection. Specific guidance for searching this collection. For help with general search strategies, see Finding Items in American Memory.
<urn:uuid:3571adef-1602-457e-99a6-359111d67f80>
CC-MAIN-2013-20
http://loc.gov/teachers/classroommaterials/connections/amrev-maps/
2013-05-22T08:19:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.834502
361
Amelia Island is one of the southernmost of the Sea Islands, a chain of barrier islands that stretches along the east coast of the United States from South Carolina to Florida. It is 13 miles (21 km) long and approximately 4 miles (6 km) wide at its widest point. Amelia Island is situated off the coast in Nassau County, Florida, south of Cumberland Island, Georgia. Fernandina Beach and Amelia City are both located on the island. Named for Princess Amelia, daughter of King George II, Amelia Island is also rich in history and tradition. Native American bands associated with the Timucuan mound-building culture settled on the island, which they called Napoyca, circa 1000. They would remain on Napoyca until the early 18th century. Since then, the island has frequently changed possession and been under eight different flags – the only United States location to have done so. In 1562 French Huguenot explorer Jean Ribault becomes the first (recorded) European visitor to Napoyca and names it Isle de Mar. In 1565, Spanish forces led by Pedro Menendez de Aviles drive the French from northeastern Florida, slaughtering Ribault and approximately 350 other French colonists. In 1573, Spanish Franciscans establish the Santa Maria mission on the island, which is named Isla de Santa Maria. The mission was abandoned in 1680 after the inhabitants refuse a Spanish order to relocate. British raids force the relocation of the Santa Catalina de Guale mission on St. Catherine's Island, Georgia, to the abandoned Santa Maria mission on the island in 1685. In 1702, this mission was again abandoned when South Carolina's colonial governor, James Moore, leads a joint British-Indian invasion of Florida. Georgia's founder and colonial governor, James Oglethorpe, renames the island "Amelia Island" in honor of Princess Amelia (1710-1786), King George II's daughter, although the island was still a Spanish possession. After establishing a small settlement on the northwestern edge of the island, Oglethorpe negotiates with Spanish colonial officials for a transfer of the island to British sovereignty. Colonial officials agree to the transfer, but the King of Spain nullifies the agreement. The Treaty of Paris in 1763 ratifies Britain's victory in the Seven Years War, ceding Florida to Britain in exchange for Havana and nullifying all Spanish land grants in Florida. The Proclamation of 1763 established the St. Marys River as East Florida's northeastern boundary. In 1783, the Second Treaty of Paris ends the Revolutionary War and returns Florida to Spain. British inhabitants of Florida had to leave the province within 18 months unless they swore allegiance to Spain. Amelia Island was the final location of Santa Catalina de Guale, the main mission of Spanish Florida to the Guale chiefdom. In 1811, surveyor George J. F. Clarke plats the town of Fernandina, named in honor of King Ferdinand VII of Spain. With the approval of President James Madison and Georgia Governor George Mathews in 1812-1813, insurgents known as the "Patriots of Amelia Island" seize the island. After raising a Patriot flag, they replace it with the United States Flag. American gunboats under the command of Commodore Hugh Campbell maintain control of the island until Spanish pressure forces their evacuation in 1813. The Spanish erect Fort San Carlos on the island in 1816.
Green Cross of Florida Flag Led by Gregor MacGregor, a Scottish-born soldier of fortune, 55 musketeers seize Fort San Carlos in June 1817, claiming the island on behalf of the "Green Cross." In colonial history, this episode became known as the Amelia Island Affair. Mexican Rebel Flag Spanish soldiers force MacGregor's withdrawal, but their attempt to regain complete control is foiled by American irregulars organized by Ruggles Hubbard and former Pennsylvania congressman Jared Irwin. Hubbard and Irwin later join forces with the French-born pirate Louis Aury, who lays claim to the island on behalf of the Republic of Mexico. U.S. Navy forces drive Aury from the island, and President James Monroe vows to hold Amelia Island "in trust for Spain." On January 8, 1861, two days before Florida's secession, Confederate sympathizers (the Third Regiment of Florida Volunteers) take control of Fort Clinch, already abandoned by Federal workers who had been constructing the fort. General Robert E. Lee visits Fort Clinch in November 1861 and again in January 1862, during a survey of coastal fortifications. United States Flag Union forces, consisting of 28 gunboats commanded by Commodore Samuel Dupont, restore Federal control of the island on March 3, 1862 and raise the American Flag. The island is host to the Isle of Eight Flags Shrimp Festival, the Amelia Island Jazz Festival, the Amelia Island Chamber Music Festival, the Amelia Island Film Festival, the Amelia Island Concours d'Elegance and the Amelia Island Blues Festival. Amelia Island was the main filming location for the 2002 John Sayles-directed film Sunshine State. The New Adventures of Pippi Longstocking was also filmed there in 1988. Amelia Island hosted a Women's Tennis Association tournament for 28 years (1980 to 2008). From 1987 to 2008 it was known as the Bausch & Lomb Championships. In 2009-2011 Amelia Island also hosted the Pétanque America Open.
<urn:uuid:c6cc1c4d-51c6-4d78-91a1-bad5de2bf910>
CC-MAIN-2013-20
http://lowcountryevents.com/amelia-island/
2013-05-22T07:54:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.927347
1,142
Daniel Greysolon, Sieur du Lhut, was a French aristocrat of the late 17th century. As a maritime captain for Louis XIV, he migrated to Montreal and in 1678 led a party into the wilderness hoping to discover the Northwest Passage, a rumored route to the Pacific and the Orient. While failing at this, he did convince the Chippewa, Sioux and other tribes to recognize the authority of Louis XIV and cease their battles. This agreement occurred in the place that is now the city that bears his name. Du Lhut continued his military career, which included mapping a trade route between the Mississippi and Lake Superior, until 1707. He is considered the first white man to reach the western extremity of the Great Lakes. American Fur Company – Beaver Pelts The fashionable beaver hats worn by European gentlemen were the main force pushing exploration and settlement in northern Minnesota. In 1792 a small fort was built at the Head of the Lakes to carry on the fur trade between the northern wilderness and the beaver pelt markets of Europe. The American Fur Company had a substantial station in Fond du Lac. In 1839, fashionable Europeans began to prefer silk hats to beaver ones, and the loss of pelt markets devastated the economy and well-being of the Lake Superior settlements. In 1847 the Fond du Lac fur station closed. Copper – A "New" Industry Copper mining was an activity that dated back thousands of years in the region. Indian tales of copper were shared with the first Frenchman who entered the region, and copper signs and traces were commented on by many early white explorers. On Isle Royale and the Keweenaw Peninsula, major copper mines were built in the 1840s. In 1849 Minnesota became an official United States Territory. Minnesota Alliance for Geographic Education 1600 Grand Avenue Saint Paul, Minnesota 55105-1899 [email protected] 651-696-6731 or 651-696-6249
<urn:uuid:4a393005-8dab-447b-b514-4e027ec17542>
CC-MAIN-2013-20
http://lt.umn.edu/mage/curriculum/go-minnesota/duluths-namesake-and-early-beginnings/
2013-05-22T08:12:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959208
413
Working with Parents of Suicidal Youth Restricting access to lethal means is a very important intervention. It is as sensible as taking the car keys away from an intoxicated individual. It can mean the difference between life and death for a young person. Restricting access to lethal means has been cited as one of the most important ways to reduce youth suicide(1). A study conducted by the Centers for Disease Control and Prevention (CDC) asked individuals ages 15-34 how much time elapsed between the time they decided to attempt suicide and the time they took action. Nearly one quarter stated that less than five minutes passed. Other studies have followed victims of nearly lethal attempts and found that 10 to 20 years later, 90% or more had not died by suicide(2). Imagine a 14-year-old running out of the kitchen after an argument with a parent. The youth reaches into a closet to discover a loaded firearm, and pulls the trigger. A life is suddenly and sadly lost. Now imagine there is no gun, and in the 15-20 minutes it takes to find a rope, gather pills or fill the garage with fumes, the anger felt may have passed and/or a family member may have intervened to help. For some youth, the best form of suicide prevention is putting time and/or distance between the impulse to die and a form of lethal means. - Inform the parents that you believe their adolescent is at risk for suicide and why you think so. For example, if you are working with an adolescent who has made one attempt, it is important to inform the parent or caretaker that, "Adolescents who have made a suicide attempt are at risk for another attempt. One suicide attempt is a very strong risk factor for another." - Tell parents or caretakers that they can reduce the risk of suicide by removing firearms from the house and restricting access to other lethal means. Research shows that the risk of suicide doubles if a firearm is in the house, even if the firearm is locked up. It is extremely important to help parents or caretakers understand the importance of removing access to firearms and other lethal means. Half of Maine’s youth suicides are committed with a firearm. This is important information for all parents, even if they do not own a firearm. Access to lethal means may be readily available at the home of other family members, friends, or neighbors. Every effort must be made to remove all access to lethal means. - Educate parents about different ways to dispose of, or at the very least, limit access to a firearm. Officers from local police departments, sheriff’s offices, or state police barracks are willing to help to temporarily remove firearms from the environment of a suicidal individual. The information on the following pages will inform you on what to expect in terms of procedures for removing, storing, or disposing of firearms. Questions about removing firearms. In Five Minutes or Less You Can Tell a Parent These Three Things: - That their adolescent is at risk for suicide and why you think so. - They can reduce the risk of suicide by getting firearms and other lethal means out of the house. - There are several ways to dispose of, or at the very least, limit access to a firearm. (1)Hemenway, David. Private Guns Public Health. The University of Michigan Press. Ann Harbor, 2004. (2)Barber, Catherine, MPA. Fatal Connection The link between guns and suicide. Advancing Suicide Prevention. July/August 2005, Vol I, Issue 2.
<urn:uuid:42054bd0-ca4d-4e20-b35b-389087027ad3>
CC-MAIN-2013-20
http://maine.gov/suicide/professionals/program/five.htm
2013-05-22T08:01:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.956483
722
This is the summary of a presentation given at the 74th Annual NCTM Meeting, 25-28 April 1996, San Diego, CA. This interactive session will help with planning and incorporating writing assignments in all content areas. The Standards indicate that students need to be encouraged to write in a variety of ways and to write often. Participants will learn and discuss ways of grading these assignments that allow immediate feedback to the student while also allowing the instructor to assess learning. Participants will be exposed to a variety of activities which encourage writing in mathematics. These include analytical questions, summaries of daily concepts, projects and papers, group work (both in-class and out), labs, and classroom assessment instruments. Participants are encouraged to share ideas they have used in the classroom and to bring samples to share with other participants. Discussion will center around these as well as methods for evaluating students' progress and performance, and evaluative tools for the instructor to test effectiveness. Julane B. Crabtree (Johnson County Community College, Overland Park, KS)
<urn:uuid:7ce49424-726f-4953-808e-3a47c5f70f9d>
CC-MAIN-2013-20
http://mathforum.org/mathed/nctm96/humanistic/crabtree.html
2013-05-22T08:25:30Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.925681
228
A slender, long-bodied mammal with short legs and well-furred tail about half the length of the head and body. The head is small, flattened and only slightly larger in diameter than the long neck. The ears are short and rounded, the whiskers prominent and the small eyes beady. In summer, adults are usually dark brown above and yellowish white below with a white chin and black tail tip. In winter, the coat is paler but sometimes in northern Missouri they have an all-white coat except for the black tail tip.
<urn:uuid:9686500a-1c28-4851-bf56-0180c6bf0250>
CC-MAIN-2013-20
http://mdc.mo.gov/discover-nature/field-guide/long-tailed-weasel
2013-05-22T07:56:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.941069
111
Unraveling the mysteries of the natural killer within us Scientists have discovered more about the intricacies of the immune system in a breakthrough that may help combat viral infections such as HIV. Co-led by Professor Jamie Rossjohn of Monash University and Associate Professor Andrew Brooks from the University of Melbourne, an international team of scientists has discovered more about the critical role Natural Killer cells play in the body's innate immune response. The findings were published today in Nature. Natural Killer cells are a unique type of white blood cell important in early immune responses to tumours and viruses. Unlike most cells of the immune system, which are activated by molecules found on the pathogen or tumour, Natural Killer cells are shut down by a group of proteins found on healthy cells. These de-activating proteins, known as Human Leukocyte Antigens or HLA molecules, are absent in many tumours and cells infected with viruses, leaving them open to attack by the Natural Killer cells. Natural Killer cells recognise the HLA molecules using an inbuilt surveillance system called "Killer cell immunoglobulin-like receptors" (KIR). Using the Australian Synchrotron, the team determined the three-dimensional shape of one of these key KIR proteins, termed KIR3DL1, which binds to a particular HLA molecule. This pairing is known to play a role in limiting viral replication in people with HIV, slowing the progression of the disease to AIDS. Professor Rossjohn said that better understanding the structure of KIR proteins may help to develop approaches to better utilise Natural Killer cells to combat viral infection. "It is only possible to detect proteins, such as KIRs, using extremely high-end equipment. The use of the platform technologies at Monash and the Australian Synchrotron was absolutely essential to this project's success," Professor Rossjohn said. Professor Brooks said the researchers would use these findings to investigate other KIR molecules. "Since KIR3DL1 is only a single member of a much larger family of receptors, the study provides key insight into how Natural Killer cells utilise other members of this important family of receptors to recognise virus-infected cells and tumours," Professor Brooks said. Provided by Monash University
<urn:uuid:37230d46-7d56-4a6e-bc69-f9c8b44fc469>
CC-MAIN-2013-20
http://medicalxpress.com/news/2011-10-unraveling-mysteries-natural-killer.html
2013-05-22T08:19:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.922497
1,702
BAR 38:05, Sep/Oct 2012 How many men named Herod are mentioned in the New Testament? It’s easy to get confused. The Gospel of Matthew tells of a cruel king named Herod in Jerusalem who seeks to kill the baby Jesus and soon dies himself. Yet this same gospel, as well as Mark’s and Luke’s gospels, later mentions a ruler named Herod who has John the Baptist beheaded and is considered a threat to Jesus’ preaching. Luke’s gospel adds that Pontius Pilate sent Jesus to Herod to be tried in his native Galilee. Later, in the Acts of the Apostles a man called King Herod persecutes the followers of Jesus in Judea. The name Herod appears in the New Testament 44 times but refers to three different men. The first Herod is the ruler in the gospel infancy narratives, Herod the Great, the ruthless client-king of Judea supported by the Romans. He sponsored massive building projects during his reign, including a complete renovation of Jerusalem’s Temple and Temple Mount.
<urn:uuid:b44ecd8e-a112-4a66-816d-43c61ca25890>
CC-MAIN-2013-20
http://members.bib-arch.org/publication.asp?PubID=BSBA&Volume=38&Issue=5&ArticleID=13
2013-05-22T08:19:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.961402
221
Miami's getaway resorts and pristine beaches may be one of the top choices for tourists today, but in years past, segregation kept both Black visitors and Black Miamians locked out. In fact, racial segregation, formerly the law of the land, extended to every facet of life. As the Black population surged in the 1940s, religious and community leaders demanded police presence and protection in Miami's two overcrowded Black communities of Liberty City and Overtown. In 1944, the first Black patrolmen were sworn in as "emergency policemen" and assigned to what was then referred to as the Central Negro District. In 1950, the Negro Police Precinct and Courthouse was established in Overtown, operating as a separate station house and municipal court for Blacks. It remained in operation until 1963 but still stands — now as an historical monument and education museum where all can learn about the struggles and triumphs of Miami's "first Negro policemen." Last week, the City of Miami Negro/Black Police Officers Precinct and Museum [480 NW 11th Street], under the direction of newly-elected Museum president, Dr. Thomas K. Pinder, Ph.D., held an open house for the community. He says this is "just the beginning of programs and special events that will share this important piece of history with the youth of today and tomorrow." "After our members decided that we needed to operate separately from the Black Officers Retired Association, I was chosen to oversee the Museum," he said. "What's most important is that our children know about this place and the significant role that Black officers played when segregation was still operating in Miami and throughout the U.S. The more we understand our history and our struggles, the better we can prepare and empower our children for the future." Clarence Dickson, 78, was the first Black police chief for the City of Miami and was first hired in 1944. He says prior to the precinct being built for Blacks, they were forced to meet in places ranging from an apartment room to a dentist's office. "We weren't allowed to meet with the rest of the police in their downtown headquarters," he said. "But thanks to continued pressure from the Black community, we finally got a place of our own." Dickson says there were about 40 policemen when the doors of the Black precinct first opened. He was also one of the first men allowed to attend the Police Academy and to sit with white officers to take examinations for promotion. He would rise up the ranks and retire after 30 years of service. He adds that 40 were in his class when it began in 1960 — only 13 finished. "I failed the test twice and saw others who were much smarter than me bomb out," he said. "You could only take the test three times. I figured I wouldn't pass. But after one of my fellow officers told me that everyone was depending on me to succeed, I buckled down — failure was no longer an option." Other officers that shared their compelling stories during the program included: Jesse Hill, 84, retired sergeant, who was sent to the precinct in the early 1950s and was one of the first Black detectives; James Stubbs, 78, who served in the armed forces before joining the police department; Archie McKay, 87, retired lieutenant, who has the distinction of being the oldest living Black police officer and recalls "being called nigger as frequently as one runs water for drinking"; Pinder, Museum president and retired sergeant; Otis Davis, retired lieutenant; Davie Madison, retired captain; and Officer Leroy Smith.
For more information or for a tour call 305-329-2513. By D. Kevin McNeir
<urn:uuid:9ebb579b-9000-40af-b893-a9572eb01b0b>
CC-MAIN-2013-20
http://miamitimesonline.com/miami%E2%80%99s-first-black-cops-remember-%E2%80%9Cjoy-and-pain%E2%80%9D/
2013-05-22T08:32:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.976926
801
Financial education for kids Check out our parents' guide to kids and money. Help put your children on the road to handling money responsibly. 1. When it comes to teaching kids about money, the sooner the better. Up until they start earning a living, and sometimes well beyond that, kids are apt to spend money like it grows on trees. This lesson will help you put your children on the road to handling money responsibly. Long before most children can add or subtract, they become aware of the concept of money. Any 4-year-old knows where their parents get money - the ATM, of course. Understanding that parents must work for their money requires a more mature mind, and even then, the learning process has its wrinkles. For example, once a 5-year-old came to understand that his father worked for a living, he asked, "How was work today?" "Fine," the father replied. The child then asked, "Did you get the money?" 2. Once they learn how money works, children often display an instinctive conservatism. Instant gratification aside, once they learn they can buy things they want with money - e.g., candy, toys - many children will begin hoarding every nickel they can get their hands on. How this urge is channeled can determine what kind of financial manager your child will be as an adult. 3. Seeds planted early bear fruit later. It's important to work on your child's financial awareness early on, for once they're teenagers, they are less likely to heed your sage advice. Besides, they're busy doing other things - like spending money. 4. An allowance can be an effective teaching tool. When your kids are young, giving them small amounts of money helps them prepare for the day when the numbers will get bigger. 5. Teenagers and college-age kids have bigger responsibilities. Checking accounts, credit cards and debt are as elemental to the college experience as books and keg parties. Teaching high-schoolers about banking and credit will make them more savvy when they leave the nest. 6. Even investing should be learned early. High schoolers can and should be taught about the market - using real money.
<urn:uuid:b3d9d115-6647-4c12-a94a-e3e5bd115900>
CC-MAIN-2013-20
http://money.cnn.com/magazines/moneymag/money101/lesson12/index.htm
2013-05-22T08:32:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.968241
466
The dangers of GPS/GNSS GNSS (Global Navigation Satellite Systems) is a common acronym encompassing all existing and planned satellite-based navigation systems. So far, the US-built GPS dominates the scene completely, but the Russian GLONASS is approaching around-the-clock global operational status, and other systems are being developed (the European Galileo, the Chinese Compass/Beidou and the Indian IRNSS). There are also augmentation systems of more or less operational status (the US WAAS, the European EGNOS, the Japanese MSAS and the Indian GAGAN). Satellite navigation is becoming part of everyday life, user equipment is becoming cheaper, smaller, easier to handle and increasingly capable. This development is expected to continue for the foreseeable future, with receivers in mobile phones and cars as the dominating markets (Figs. 1-3). The following discussion for obvious reasons mostly refers to GPS, but the arguments are generally valid for all global navigation satellite-based systems. Today's average performance of GPS is used as a starting point for our discussion. In the not too distant future, even better numbers can be expected. PDOP Availability: Requirement – PDOP of 6 or less, 98% of the time or better; Actual – 99.98%. Horizontal Service Availability: Requirement – 95% threshold of 36 metres, 99% of the time or better; Actual – 3.7 metres. Vertical Service Availability: Requirement – 95% threshold of 77 metres, 99% of the time or better; Actual – 5.3 metres. User Range Error: Requirement – 6 metres or less; Constellation Average Actual – 1.2 metres. Fig. 1. GPS users in 2006 What's the problem? The problem is that nothing works 100%. GPS is very close, but for some users under some circumstances, "very close" is not good enough. The situation in general is as follows: • Most GPS users know nothing about GPS vulnerability. • Most users don't care. • Most GPS users can stand some interruptions or performance reduction. • Most politicians and representatives of authorities in the field of navigation don't know of GPS vulnerability. • Back-up systems are being closed down (e.g. LORAN-C), and there is little or no contact between different countries about these matters. GPS (and all satellite navigation systems, more or less) are vulnerable because of • Very low signal power received; • A few frequencies (in the GPS case today, only one for general use) and a known signal structure; • Spectrum competition; • Worldwide military applications drive a GPS disruption industry; • Jamming techniques are well known, devices are available, or can be built easily (fig. 4). In 2001 (just before the infamous 9/11), the U.S. Department of Transportation's Volpe National Transportation Systems Center published results from an investigation into the vulnerability of the transportation infrastructure relying on GPS. Conclusions to be drawn from that investigation are: • Awareness should be created in the navigation and timing communities of the need for back-up systems or operational procedures; • All transportation modes should be encouraged to pay attention to autonomous integrity monitoring of GPS/GNSS signals; • All GPS/GNSS receivers in critical applications must provide a timely warning when the signals are degraded or lost; • Development of certifiable, integrated (multipurpose) receivers should be encouraged; • A comprehensive analysis of GPS/GNSS back-up navigation and precise timing options (e.g. LORAN, VOR/DME, ILS, INS) and operating procedures should be conducted. Fig.
2. Experienced and expected use of GPS/GNSS in cars Fig. 3. Experienced and expected use of GPS/GNSS in mobile phones Fig. 4. This dice is a 10 mW GPS jammer. Fig. 5. Example of a car navigation problem. Causes of trouble There are many possible reasons for degraded performance or service interruption for users of GNSS: • Satellite or controlsegment malfunctions. • Unintentional interference: • Radio-frequency interference (RFI) from external sources (spectrum congestion, harmonics, high-power signals saturating receiver front • Testing at system level; • Ionospheric infl uence (solar maxima, magnetic storms, scintillations); • Intentional interference: • Spoofing (false signals into the receiver); • Meaconing (interception and re-broadcast of navigation signals). • Human factors: • User equipment and satellite design errors; • Lack of knowledge and/or training. The main technical explanation of GNSS receiver vulnerabilities to external interference can be summarised as very low power received from the satellites. The minimum power level is usually between -150 and -160 dBW. Looking at the details for GPS, we see that • acquisition requires 6 – 10 dB higher signal-to-noise ratio (SNR) than • loss-of-lock sometimes occurs for interference-to-signal ratios (I/S) below 30 dB, • receiver detection of loss-of-lock is delayed because of narrowband code-tracking loops, • some lines in the GPS C/A-code spectrum are more vulnerable than others because of higher power levels (Gold code spectra do not exactly follow a sinc shape, and spectral lines work as local oscillator frequencies for received interference signals), • modulated interference is generally worse than white noise, and narrowband interference is worse than wide-band. Pages: 1 2
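To make the low-received-power point concrete, the following rough link-budget sketch (not from the article) compares the power arriving from a small jammer with a nominal GPS signal level. The assumed values — free-space propagation, isotropic antennas, a received GPS L1 C/A power of about -158.5 dBW, a 10 mW jammer like the one in Fig. 4, and the roughly 30 dB interference-to-signal tolerance quoted above — are illustrative only; antenna gains, receiver processing and terrain shift the numbers substantially.

```python
# Illustrative link-budget sketch (not from the article): at what distance does
# a small jammer exceed the ~30 dB interference-to-signal ratio at which the
# article notes loss of lock can occur?  Assumes free-space propagation,
# isotropic antennas, and a nominal received GPS signal power.
import math

GPS_SIGNAL_DBW = -158.5          # assumed nominal received GPS L1 C/A power (dBW)
JAMMER_POWER_W = 0.010           # 10 mW jammer, as in Fig. 4
FREQ_HZ = 1575.42e6              # GPS L1 carrier frequency
C = 299_792_458.0                # speed of light (m/s)
IS_THRESHOLD_DB = 30.0           # I/S ratio at which loss of lock may occur

def free_space_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for an isotropic link."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def jammer_received_dbw(distance_m: float) -> float:
    """Received jammer power (dBW) at the given distance."""
    return 10 * math.log10(JAMMER_POWER_W) - free_space_loss_db(distance_m, FREQ_HZ)

if __name__ == "__main__":
    for d in (100, 1_000, 10_000, 50_000):
        i_over_s = jammer_received_dbw(d) - GPS_SIGNAL_DBW
        status = "loss of lock possible" if i_over_s > IS_THRESHOLD_DB else "likely tolerable"
        print(f"{d:>7,} m: I/S = {i_over_s:5.1f} dB -> {status}")
```

Even under these crude assumptions, the toy transmitter dominates the satellite signal out to kilometre scales, which is the essence of the vulnerability argument.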
<urn:uuid:796d8d36-5a58-42e2-bd20-3c88d1831667>
CC-MAIN-2013-20
http://mycoordinates.org/the-dangers-of-gpsgnss/
2013-05-22T08:19:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.878031
1,206
For more information about National Park Service air resources, please visit http://www.nature.nps.gov/air/. Air Pollution Impacts Glacier National Park Natural and scenic resources in Glacier National Park (NP) are susceptible to the harmful effects of air pollution. Mercury, nitrogen, sulfur, ozone, and fine particles impact natural resources such as wildlife, surface waters, and vegetation, and scenic resources such as visibility. Click on the tabs below to learn more about air pollutants and their impacts on natural and scenic resources at Glacier NP. - Toxics & Mercury - Nitrogen & Sulfur Toxics, including heavy metals like mercury, accumulate in the tissue of organisms. When mercury converts to methylmercury in the environment and enters the food chain, effects can include reduced reproductive success, impaired growth and development, and decreased survival. Other toxic air contaminants of concern include pesticides, industrial by-products, and emerging chemicals such as flame retardants for fabrics, some of which are also known or suspected to cause cancer or other serious health effects in humans and wildlife. Effects of mercury and toxics at Glacier NP include: - Presence of contaminants including current- and historic-use pesticides, mercury, and industrial by-products in snow, fish, water, and lake sediment (Downs and Stafford 2009; Hageman et al. 2006; Ingersoll et al. 2007; Krabbenhoft et al. 2002; Landers et al. 2010; Landers et al. 2008; Mast et al. 2006 [pdf, 4.4 MB]; Watras et al. 1995); - Concentrations of a combustion by-product, PAHs, in snow, lichens, and sediment 3.6 to 60,000 times greater in the park’s Snyder Lake watershed than in watersheds from other western and Alaskan national parks; levels attributable to emissions from a local aluminum smelter. Although the smelter is now closed, PAHs deposited from its emissions persist in the park’s ecosystems (Usenko et al. 2010); - Levels of historic-use pesticides dieldrin and DDT in fish that exceed safe consumption thresholds for human and wildlife health, and concentrations of current-use pesticides (e.g., endosulfans, dacthal) in fish higher than in other western U.S. national parks (Ackerman et al. 2008; Landers et al. 2010; Landers et al. 2008); - Concentrations of mercury in fish from numerous lakes in the park that exceed safe consumption thresholds for human and wildlife health (Downs and Stafford 2009; Downs et al. 2011), prompting guidelines for fish consumption (GNP 2009 [pdf, 330 KB]). Mercury levels are also associated with tissue damage in fish kidney and spleen (Schwindt et al. 2008); - Male “intersex” fish (the presence of both male and female reproductive structures in the same fish) found in the park, a response that often indicates exposure to contaminants (Schwindt et al. 2009). Nitrogen and sulfur compounds deposited from air pollution can harm surface waters, soils, and vegetation. While nitrogen and sulfur deposition is generally low at Glacier National Park, concentrations of ammonium in wet deposition, a contributor to nitrogen deposition that often indicates the influence of nearby agriculture, are elevated and increasing (Clow et al. 2003; Ingersoll et al. 2007; NPS 2010 [pdf, 2.8 MB]). High elevation ecosystems in the park are particularly sensitive to nitrogen deposition. 
Not only do these systems receive more nitrogen deposition than lower elevation areas because of greater amounts of snow and rain, but short growing seasons and shallow soils limit the capacity of soils and plants to absorb nitrogen. Dilute surface waters in some park watersheds are also very sensitive to acidification from sulfur and nitrogen deposition. Other watersheds in the park, especially those that receive glacial runoff, are less sensitive to acid deposition due to buffering minerals like calcium in the runoff (Ellis et al. 1992; Clow et al. 2002; Nanus et al. 2009; Peterson et al. 1998 [pdf, 1.1 MB]; Sullivan et al. 2011a; Sullivan et al. 2011b [pdf, 11.1 MB]). Beyond the possible effects of acidification, excess nitrogen loading can contribute to overenrichment, causing changes to the species composition of sensitive terrestrial and aquatic communities. Certain vegetation communities including alpine and wetland are at high risk from nitrogen enrichment (Bowman 2009; Sullivan et al. 2011a; Sullivan et al. 2011b [pdf, 11.1 MB]). Recent research in the park’s Snyder and Oldman Lakes indicates that aquatic communities appear undisturbed by nitrogen deposition to that area, suggesting that these lakes, like many in the area, are phosphorus-limited (Ellis et al. 1992; Saros 2009). Naturally-occurring ozone in the upper atmosphere absorbs the sun’s harmful ultraviolet rays and helps to protect all life on earth. However, in the lower atmosphere, ozone is an air pollutant, forming when nitrogen oxides from vehicles, power plants and other sources combine with volatile organic compounds from gasoline, solvents, and vegetation in the presence of sunlight. In addition to causing respiratory problems in people, ozone can injure plants. Ozone enters leaves through pores (stomata), where it can kill plant tissues, causing visible injury, or reduce photosynthesis, growth, and reproduction. There are a few ozone-sensitive plants in Glacier NP including Populus tremuloides (quaking aspen) and Salix scouleriana (Scouler’s willow). The low levels of ozone exposure at Glacier NP make the risk of foliar ozone injury to plants low (Kohut 2004 [pdf, 145 KB]). Search the list of ozone-sensitive plant species (pdf, 184 KB) found at each national park. Visitors come to Glacier NP to enjoy spectacular views of active glaciers and the work of colossal glaciers in the past, creating rugged topography and stunning lakes and streams. Unfortunately, park vistas of such spectacular scenery are sometimes obscured by haze caused by fine particles in the air. Many of the same pollutants that ultimately fall out as nitrogen and sulfur deposition contribute to this haze and visibility impairment. Additionally, organic compounds, soot, and dust reduce visibility; as does smoke from nearby forest fires. Visibility effects at Glacier NP include: - Reduced visibility sometimes due to human-caused haze and fine particles of air pollution; - Reduction of the average natural visual range from about 140 miles (without the effects of pollution) to about 50 miles because of pollution at the park; - Reduction of the visual range to below 25 miles on high pollution days. (Source: IMPROVE 2010) Explore scenic vistas through live webcams at Glacier National Park. Studies and monitoring help the NPS understand the environmental impacts of air pollution. Access air quality data and see what is happening with Studies and Monitoring at Glacier NP. Last Updated: August 18, 2011
<urn:uuid:d89bbe16-da52-4c7e-968f-39e3385fd6fa>
CC-MAIN-2013-20
http://nature.nps.gov/air/permits/aris/glac/impacts.cfm?tab=2
2013-05-22T08:00:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.9068
1,471
Weight trimming is an adjustment procedure that involves detecting and reducing extremely large weights. "Extremely large weights" generally refer to large sampling weights that were not anticipated in the design of the sample. Unusually large weights are likely to produce large sampling variances of statistics of interest, especially when the large weights are associated with sample cases reflective of rare or atypical characteristics. To reduce the impact of these large weights on variances, weight reduction methods are typically employed, with the goal of reducing the mean square error of survey estimates. While the trimming of large weights reduces variances, it also introduces some bias. However, it is presumed that the reduction in the variances more than compensates for the increase in the bias, thereby reducing the mean square error and thus improving the accuracy of survey estimates (Potter 1988). NAEP employs weight trimming at both the school and the student levels.
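As a concrete illustration of the idea (NAEP's own school- and student-level trimming rules are more elaborate and are not reproduced here), the sketch below caps weights at an assumed threshold — a multiple of the median weight — and spreads the trimmed-off weight over the remaining cases so the weighted total is preserved. Both the cap rule and the redistribution step are illustrative choices, not NAEP's actual algorithm.

```python
# Generic illustration of weight trimming (NOT NAEP's exact procedure):
# weights above a threshold are cut back to the threshold, and the weight that
# was trimmed off is spread proportionally over the untrimmed cases so that
# the sum of the weights (the estimated population total) is preserved.
from statistics import median

def trim_weights(weights, cap_multiple=3.5):
    cap = cap_multiple * median(weights)              # assumed trimming threshold
    trimmed = [min(w, cap) for w in weights]
    excess = sum(weights) - sum(trimmed)              # weight removed from large cases
    base = sum(w for w in trimmed if w < cap)
    # Redistribute the excess proportionally over the cases that were not capped.
    # (Production systems typically iterate until no weight exceeds the cap.)
    return [w + excess * w / base if w < cap else w for w in trimmed]

if __name__ == "__main__":
    raw = [1.0, 1.1, 0.9, 1.2, 1.0, 5.0]              # one unexpectedly large weight
    adj = trim_weights(raw)
    print("raw sum    :", sum(raw))
    print("trimmed sum:", round(sum(adj), 3))
    print("adjusted   :", [round(w, 3) for w in adj])
```

The extreme weight no longer dominates the estimator's variance, at the cost of a small bias — the mean-square-error trade-off described above.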
<urn:uuid:840fd347-deff-418b-9860-2f93d9113a88>
CC-MAIN-2013-20
http://nces.ed.gov/nationsreportcard/tdw/weighting/2008/ltt_weighting_2008_trimming_adjustments.asp
2013-05-22T08:02:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.962467
181
March 19, 2009—It's about as unlikely as capturing a "fossil sneeze," but researchers have found the second known set of octopus fossils, a new study says. The five well-preserved fossils were found in 95-million-year-old rocks in Lebanon. The specimens represent three new species of ancient octopus, study lead author Dirk Fuchs of the Freie Universität Berlin said in a statement. For each animal, all eight arms, traces of muscle, and rows of suckers are visible, and a few of the fossils even contain remnants of ink and internal gills. With boneless bodies made mostly of muscle and skin, octopuses usually disintegrate into "slimy blobs" after death—making preservation over time extremely rare, experts say. (Watch an octopus squeeze through a maze.) While none of the 200 to 300 modern octopus species have been found in fossil form, the ancient creatures look indistinguishable from living species, Fuchs and colleagues note. The fossils' unprecedented detail has shaken up the octopus family tree. That's because primitive octopus relatives had fleshy fins along their bodies, said Fuchs, whose study appeared in March in the journal Palaeontology. But the newfound fossils, like modern octopuses, lack these fins, a discovery that pushes back the origins of modern octopuses by tens of millions of years.
<urn:uuid:0cb5997d-b9d3-4f5f-a85e-8cb58e82af8d>
CC-MAIN-2013-20
http://news.nationalgeographic.com/news/2009/03/090319-octopus-fossil-picture.html
2013-05-22T08:14:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937319
297
By inserting a 43,000-year-old woolly mammoth gene into Escherichia coli bacteria, scientists have figured out how these ancient beasts adapted to the subzero temperatures of prehistoric Siberia and North America. The gene, which codes for the oxygen-transporting protein hemoglobin, allowed the animals to keep their tissues supplied with oxygen even at very low temperatures. "It's no different from going back 40,000 years and taking a blood sample from a living mammoth," says Kevin Campbell, a biologist at the University of Manitoba in Canada. Campbell's team obtained DNA from mammoth bone preserved in the Siberian permafrost. It was a long journey for Campbell, whose specialty is the physiology of mammals. A decade ago, he saw a Discovery Channel program on the recovery of a mammoth specimen encased in ice and wondered if such specimens might hold clues to the physiology of the mammoth. Campbell worked with DNA expert Alan Cooper of the University of Adelaide in Australia to isolate the mammoth gene responsible for hemoglobin. Then, he and colleagues plugged the gene into E. coli. The goal, says Campbell, was to make the bacteria produce mammoth hemoglobin in the lab. (A similar process is used to synthesize human proteins like insulin.) Although they were created in a bacterium, he says, the hemoglobin proteins are identical to what mammoth cells would have produced. "I figured if E. coli can make perfect human hemoglobin, why can't it make mammoth hemoglobin?" Once the researchers had their mammoth hemoglobin, they compared it with that from Asian and African elephants (also created in the lab using genes from living animals spliced into E. coli). The elephant hemoglobin functioned much like human hemoglobin, delivering oxygen more efficiently at warmer temperatures. That helps the hemoglobin transport oxygen to the hardest-working muscles. But the mammoth hemoglobin released oxygen at a steady rate regardless of the temperature, the team reports online today in Nature Genetics. Those differences help explain how the mammoth was able to adapt to frigid temperatures as it evolved from its elephantlike African ancestor over tens of millions of years. In addition to tiny ears and thick wool, Campbell suggests that mammoths may have developed ways to let their limbs and extremities cool dramatically to save energy and conserve their body's core temperature, a physiological trick used by cold-adapted modern animals such as reindeer and muskoxen. But cold feet present a problem when it comes to hemoglobin. "What the mammoth evolved is changes in hemoglobin that reduce the amount of heat needed to exchange oxygen," Campbell says, which allowed the animals to keep their extremities "breathing" even at very low temperatures. Mammoth expert Ralf-Dietrich Kahlke of the Senckenberg Research Institute in Weimar, Germany, says the result jibes with what is known about the timing and trends of mammoth evolution, and it represents an impressive step beyond what ancient DNA has been able to do so far. "It's good to get the ancestry and evolution of the animals with ancient DNA, but for me it's much more exciting that we get results to understand the stamina and power of the animals," Kahlke says. "This is sort of the first result in paleophysiology." Biologist Michael Berenbrink of the University of Liverpool in the United Kingdom says more research is needed to see how significant the adaptation was, however. There's a possibility, for example, that cold tissue needs less oxygen to begin with. 
"They take it for granted that they have this problem of delivering oxygen," Berenbrink says. Still, he says, the work is a "fantastic" combination of existing methods to answer a new question. "The extraordinary thing is to get the sequence out of an extinct animal, express the protein, and come up with an explanation of its adaptive function." If you liked this article, you might also enjoy:
<urn:uuid:9bcb4ff7-bb22-4ef7-b7fa-4a425b34cbdf>
CC-MAIN-2013-20
http://news.sciencemag.org/sciencenow/2010/05/scientists-resurrect-mammoth-hem.html
2013-05-22T08:11:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951508
805
In This Publication:
- Sara's story
- What are learning disabilities?
- How common are they?
- What are the signs?
- About the evaluation process
- What if the school declines to evaluate your child?
- Definition of "specific learning disabilities" under IDEA
- Additional evaluation procedures for LD
- What about school?
- Tips and resources for teachers
- Tips and resources for parents
When Sara was in the first grade, her teacher started teaching the students how to read. Sara's parents were really surprised when Sara had a lot of trouble. She was bright and eager, so they thought that reading would come easily to her. It didn't. She couldn't match the letters to their sounds or combine the letters to create words. Sara's problems continued into second grade. She still wasn't reading, and she was having trouble with writing, too. The school asked Sara's mom for permission to evaluate Sara to find out what was causing her problems. Sara's mom gave permission for the evaluation. The school conducted an evaluation and learned that Sara has a learning disability. She started getting special help in school right away. Sara's still getting that special help. She works with a reading specialist and a resource room teacher every day. She's in the fourth grade now, and she's made real progress! She is working hard to bring her reading and writing up to grade level. With help from the school, she'll keep learning and doing well.
What are Learning Disabilities?
Learning disability is a general term that describes specific kinds of learning problems. A learning disability can cause a person to have trouble learning and using certain skills. The skills most often affected are: reading, writing, listening, speaking, reasoning, and doing math. "Learning disabilities" is not the only term used to describe these difficulties. Others include:
- dyslexia—which refers to difficulties in reading;
- dysgraphia—which refers to difficulties in writing; and
- dyscalculia—which refers to difficulties in math.
All of these are considered learning disabilities. Learning disabilities (LD) vary from person to person. One person with LD may not have the same kind of learning problems as another person with LD. Sara, in our example above, has trouble with reading and writing. Another person with LD may have problems with understanding math. Still another person may have trouble in both of these areas, as well as with understanding what people are saying. Researchers think that learning disabilities are caused by differences in how a person's brain works and how it processes information. Children with learning disabilities are not "dumb" or "lazy." In fact, they usually have average or above average intelligence. Their brains just process information differently. There is no "cure" for learning disabilities. They are life-long. However, children with LD can be high achievers and can be taught ways to get around the learning disability. With the right help, children with LD can and do learn successfully.
How Common are Learning Disabilities?
Very common! As many as 1 out of every 5 people in the United States has a learning disability. Almost 1 million children (ages 6 through 21) have some form of a learning disability and receive special education in school. In fact, one-third of all children who receive special education have a learning disability (Twenty-Ninth Annual Report to Congress, U.S. Department of Education, 2010).
What Are the Signs of a Learning Disability?
While there is no one “sign” that a person has a learning disability, there are certain clues. We’ve listed a few below. Most relate to elementary school tasks, because learning disabilities tend to be identified in elementary school. This is because school focuses on the very things that may be difficult for the child—reading, writing, math, listening, speaking, reasoning.A child probably won’t show all of these signs, or even most of them. However, if a child shows a number of these problems, then parents and the teacher should consider the possibility that the child has a learning disability. When a child has a learning disability, he or she: - may have trouble learning the alphabet, rhyming words, or connecting letters to their sounds; - may make many mistakes when reading aloud, and repeat and pause often; - may not understand what he or she reads; - may have real trouble with spelling; - may have very messy handwriting or hold a pencil awkwardly; - may struggle to express ideas in writing; - may learn language late and have a limited vocabulary; - may have trouble remembering the sounds that letters make or hearing slight differences between words; - may have trouble understanding jokes, comic strips, and sarcasm; - may have trouble following directions; - may mispronounce words or use a wrong word that sounds similar; - may have trouble organizing what he or she wants to say or not be able to think of the word he or she needs for writing or conversation; - may not follow the social rules of conversation, such as taking turns, and may stand too close to the listener; - may confuse math symbols and misread numbers; - may not be able to retell a story in order (what happened first, second, third); or - may not know where to begin a task or how to go on from there. If a child has unexpected problems learning to read, write, listen, speak, or do math, then teachers and parents may want to investigate more. The same is true if the child is struggling to do any one of these skills. The child may need to be evaluated to see if he or she has a learning disability. About the Evaluation Process If you are concerned that your child may have a learning disability, contact his or her school and request that the school conduct an individualized evaluation under IDEA (the nation’s special education law) to see if, in fact, a learning disability is causing your child difficulties in school. Visit NICHCY’s website and read more about the evaluation process, beginning at: http://nichcy.org/schoolage/evaluation/ What if the School System Declines to Evaluate Your Child? If the school doesn’t think that your child’s learning problems are caused by a learning disability, it may decline to evaluate your child. If this happens, there are specific actions you can take. These include: Contact your state’s Parent Training and Information Center (PTI) for assistance. The PTI can offer you guidance and support in what to do next. Find your PTI by visiting: http://www.parentcenternetwork.org/parentcenterlisting.html Consider having your child evaluated by an independent evaluator. You may have to pay for this evaluation, or you can ask that the school pay for it. To learn more about independent evaluations, visit NICHCY at: http://nichcy.org/schoolage/parental-rights/iee Ask for mediation, or use one of IDEA’s other dispute resolution options. Parents have the right to disagree with the school’s decision not to evaluate their child and be heard. 
To find out more about dispute resolution options, visit NICHCY at: http://nichcy.org/schoolage/disputes/overview/ IDEA’s Definition of “Specific Learning Disability” Not surprisingly, the Individuals with Disabilities Education Act (IDEA) includes a definition of “specific learning disability” —as follows: (10) Specific learning disability —(i) General. Specific learning disability means a disorder in one or more of the basic psychological processes involved in understanding or in using language, spoken or written, that may manifest itself in the imperfect ability to listen, think, speak, read, write, spell, or to do mathematical calculations, including conditions such as perceptual disabilities, brain injury, minimal brain dysfunction, dyslexia, and developmental aphasia. (ii) Disorders not included. Specific learning disability does not include learning problems that are primarily the result of visual, hearing, or motor disabilities, of intellectual disability, of emotional disturbance, or of environmental, cultural, or economic disadvantage. [34 CFR §300.8(c)(10)] IDEA also lists evaluation procedures that must be used at a minimum to identify and document that a child has a specific learning disability. These will now be discussed in brief. Additional Evaluation Procedures for LD Now for the confusing part! The ways in which children are identified as having a learning disability have changed over the years. Until recently, the most common approach was to use a “severe discrepancy” formula. This referred to the gap, or discrepancy, between the child’s intelligence or aptitude and his or her actual performance. However, in the 2004 reauthorization of IDEA, how LD is determined has been expanded. IDEA now requires that states adopt criteria that: - must not require the use of a severe discrepancy between intellectual ability and achievement in determining whether a child has a specific learning disability; - must permit local educational agencies (LEAs) to use a process based on the child’s response to scientific, research-based intervention; and - may permit the use of other alternative research-based procedures for determining whether a child has a specific learning disability. Basically, what this means is that, instead of using a severe discrepancy approach to determining LD, school systems may provide the student with a research-based intervention and keep close track of the student’s performance. Analyzing the student’s response to that intervention (RTI) may then be considered by school districts in the process of identifying that a child has a learning disability. There are also other aspects required when evaluating children for LD. These include observing the student in his or her learning environment (including the regular education setting) to document academic performance and behavior in the areas of difficulty. This entire fact sheet could be devoted to what IDEA requires when children are evaluated for a learning disability. Instead, let us refer you to a training module on the subject. It’s quite detailed, but if you would like to know those details, read through Module 11 of NICHCY’s Building the Legacy curriculum on IDEA 2004. It’s available online, at: http://nichcy.org/laws/idea/legacy/module11/ Moving on, let us suppose that the student has been diagnosed with a specific learning disability. What next? What About School? 
Once a child is evaluated and found eligible for special education and related services, school staff and parents meet and develop what is known as an Individualized Education Program, or IEP. This document is very important in the educational life of a child with learning disabilities. It describes the child’s needs and the services that the public school system will provide free of charge to address those needs. Learn more about the IEP, what it includes, and how it is developed, at: Supports or changes in the classroom (called accommodations) help most students with LD. Common accommodations are listed in the “Tips for Teachers” section below. Accessible instructional materials (AIM) are among the most helpful to students whose LD affects their ability to read and process printed language. Thanks to IDEA 2004, there are numerous places to turn now for AIMs. We’ve listed one central source in the “Resources Especially for Teachers” section. Assistive technology can also help many students work around their learning disabilities. Assistive technology can range from “low-tech” equipment such as tape recorders to “high-tech” tools such as reading machines (which read books aloud) and voice recognition systems (which allow the student to “write” by talking to the computer). To learn more about AT for students who have learning disabilities, visit LD Online’s Technology section, at: http://www.ldonline.org/indepth/technology Tips and Resources for Teachers Learn as much as you can about the different types of LD. The resources and organizations listed below can help you identify specific techniques and strategies to support the student educationally. Seize the opportunity to make an enormous difference in this student’s life! Find out and emphasize what the student’s strengths and interests are. Give the student positive feedback and lots of opportunities for practice. Provide instruction and accommodations to address the student’s special needs. Examples: - breaking tasks into smaller steps, and giving directions verbally and in writing; - giving the student more time to finish schoolwork or take tests; - letting the student with reading problems use instructional materials that are accessible to those with print disabilities; - letting the student with listening difficulties borrow notes from a classmate or use a tape recorder; and - letting the student with writing difficulties use a computer with specialized software that spell checks, grammar checks, or recognizes speech. Learn about the different testing modifications that can really help a student with LD show what he or she has learned. Teach organizational skills, study skills, and learning strategies. These help all students but are particularly helpful to those with LD. Work with the student’s parents to create an IEP tailored to meet the student’s needs. Establish a positive working relationship with the student’s parents. Through regular communication, exchange information about the student’s progress at school. Resources Especially for Teachers LD Online | For Educators LD Online | Teaching and Instruction National Center for Learning Disabilities | Especially for Teachers TeachingLD | A service of the Division for Learning Disabilities (DLD) of the Council for Exceptional Children Learning Disabilities Association of America | For Teachers National Center for Accessible Instructional Materials | Find AIM in your state! 
Reading Rockets | For Teachers Tips and Resources for Parents A child with learning disabilities may need help at home as well as in school. Here are a number of suggestions and considerations for parents. Learn about LD. The more you know, the more you can help yourself and your child. Take advantage of the excellent resources out there for parents (see the next section, below). Praise your child when he or she does well. Children with LD are often very good at a variety of things. Find out what your child really enjoys doing, such as dancing, playing soccer, or working with computers. Give your child plenty of opportunities to pursue his or her strengths and talents. Find out the ways your child learns best. Does he or she learn by hands-on practice, looking, or listening? Help your child learn through his or her areas of strength. Let your son or daughter help with household chores. These can build self-confidence and concrete skills. Keep instructions simple, break down tasks into smaller steps, and reward your child’s efforts with praise. Make homework a priority. Read more about how to help your child be a success at homework in the resources listed below. Pay attention to your child’s mental health (and your own!). Be open to counseling, which can help your child deal with frustration, feel better about himself or herself, and learn more about social skills. Talk to other parents whose children have LD. Parents can share practical advice and emotional support. You can identify parent groups in your area via NICHCY’s State Resource Sheets. Go to the section entitled “Disability-Specific Agencies” and scroll down until you reach “learning disabilities.” Meet with school personnel and help develop an IEP to address your child’s needs. Plan what accommodations your child needs, and don’t forget to talk about AIM or assistive technology! Establish a positive working relationship with your child’s teacher. Through regular communication, exchange information about your child’s progress at home and at school. Resources Especially for Parents LD Online | For Parents LD Online | Parenting and Family National Center for Learning Disabilities | In the Home Learning Disabilities Association of America | For Parents Reading Rockets | For Parents Learning disabilities clearly affect some of the key skills in life—reading, writing, doing math. Because many people have learning disabilities, there is a great deal of expertise and support available. Take advantage of the many organizations focused on LD. Their materials and their work are intended solely to help families, students, educators, and others understand LD and address it in ways that have long-lasting impact.
<urn:uuid:3ef3eeb3-d1fd-42c1-a277-541f6107d64f>
CC-MAIN-2013-20
http://nichcy.org/disability/specific/ld
2013-05-22T08:33:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946848
3,508
The term 'aposematic colouration' describes the often vivid markings of animals which may act as a deterrent or warning signal to any potential predator. These signals are a secondary defence mechanism, advertising that the animal is toxic, noxious or otherwise able to defend itself in a manner which may result in injury to the predator. Such colours may also be the result of Batesian mimicry, whereby an otherwise harmless species closely resembles an unpalatable species with such a degree of accuracy that it too is avoided by experienced predators. Mutual mimicry between species sharing similar anti-predator colouration may occur (Müllerian mimicry). Colour, along with startling behaviours and/or sounds, may form an effective predator deterrent. The very definition of aposematic colouration demands that an animal's colour be bold or vivid enough to be visible against an often homogeneous background of neutral shades such as greens and browns – the more conspicuous an animal is, the more likely it will be seen by a predator. The most visible colours are most often those at the red end of the spectrum, and indeed reds, oranges and yellows are extremely common. While some individuals will perish as a result of their enhanced visibility, the predator will learn to associate such markings with unpalatability or danger and thus ignore similar prey in the future. To quote Spock [1]: "The needs of the many outweigh the needs of the few. Or the one."
Aposematism is most frequently associated with invertebrates and is indeed much more common in such taxa than in vertebrates. Ladybirds provide a classic example; their brightly coloured elytra warn of their toxicity. Many moth and butterfly caterpillars are similarly brightly coloured and may combine colouration with other defences such as eye-like markings. Warning colouration need not be immediately apparent or even visible. Some, like the caterpillar of the swallowtail butterfly, are cryptic from afar but quite alarmingly conspicuous close up, a dual defence which proves to be quite effective. Similarly, Poecilotheria regalis, a tarantula from India (above, left), is very well camouflaged from above, but when disturbed it rears on its hind legs, displaying a black ventral surface surrounded by startlingly white or yellow limbs. Of course, vertebrates may also – and do – express aposematic colouration. Poison dart frogs are often extremely colourful, vividly advertising their toxicity (though the degree of toxicity varies between species). Larger species such as skunks and porcupines are more boldly patterned, their greater size ensuring that they are readily visible. Though the colouration of these two species is somewhat similar, they advertise very different characters. The skunk (and some mustelids, though to a much lesser degree) famously excretes a foul-smelling liquid which can be sprayed a considerable distance. The porcupine, on the other hand, advertises its impressive armoury of sharp spines: modified hairs coated in keratin which can inflict a painful or even fatal wound on an attacker. The evolution of aposematic traits may appear somewhat paradoxical if taken from the commonly assumed starting point of successful crypsis (camouflage). If such vivid secondary defences evolved in an already successfully cryptic prey animal, it would seem logical to assume that the new rare and more conspicuous morphs would experience a greater degree of predation.
Such frequency-dependent selection would surely result in the early removal of these individuals from the population. It would therefore seem more reasonable to explain the evolution of aposematic colouration in terms of species which are already conspicuous by virtue of their behaviour. In such instances the gradual evolution of brighter colouration imposes fewer costs in comparison to those imposed on already cryptic species. Enhanced colouration may also confer an array of benefits, from deterring predators to increased mating success. Evolution in this regard may thus be focussed by any one, or a combination of, sexual selection, facultative aposematism or the enhancement of pre-existing traits.
[1] Alright, Spock and Kirk. Star Trek II: The Wrath of Khan.
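To make the frequency-dependence argument in the paragraph above concrete, here is a toy Monte Carlo sketch (not from the original post). It assumes that naive predators must each attack one conspicuous, defended individual before learning to avoid the warning pattern, and that an attacked individual always dies — arbitrary, worst-case choices made purely for illustration.

```python
# Toy Monte Carlo sketch of frequency-dependent predation on a conspicuous,
# defended morph arising in a cryptic population.  All parameters are arbitrary.
import random

def surviving_fraction(n_prey=1000, conspicuous_frac=0.05, n_predators=50,
                       p_detect_cryptic=0.2, p_detect_conspicuous=0.9, seed=1):
    random.seed(seed)
    prey = [{"conspicuous": random.random() < conspicuous_frac, "alive": True}
            for _ in range(n_prey)]
    educated = [False] * n_predators          # has this predator learned the signal?
    for p in range(n_predators):
        for item in random.sample(prey, 20):  # each predator inspects 20 prey
            if not item["alive"]:
                continue
            detect = p_detect_conspicuous if item["conspicuous"] else p_detect_cryptic
            if random.random() > detect:
                continue                       # prey not noticed
            if item["conspicuous"] and educated[p]:
                continue                       # predator remembers the warning colour
            item["alive"] = False              # prey is attacked (assumed fatal)
            if item["conspicuous"]:
                educated[p] = True             # unpleasant meal: predator learns
    consp = [x for x in prey if x["conspicuous"]]
    return sum(x["alive"] for x in consp) / max(len(consp), 1)

if __name__ == "__main__":
    for frac in (0.01, 0.05, 0.25, 0.75):
        print(f"starting frequency {frac:4.2f} -> "
              f"surviving conspicuous fraction {surviving_fraction(conspicuous_frac=frac):.2f}")
```

With the default parameters the surviving fraction of the conspicuous morph rises with its starting frequency: a rare, newly arisen conspicuous morph pays the full per-capita cost of educating predators, which is exactly the intuition behind the paradox discussed above.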
<urn:uuid:b2a4fb64-09cc-47a5-a95d-b8c87641f773>
CC-MAIN-2013-20
http://ninjameys.wordpress.com/tag/animal/
2013-05-22T08:26:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.946552
841
Part of our Chemicals of Concern series. Why the chemicals are considered of concern: Nonylphenol (NP) and Nonylphenol Ethoxylates (NPEs): NPEs are surfactants that are common ingredients in many formulated consumer products, and are also used in various industrial applications and pesticides. NP is primarily used to make NPEs. NP and some NPEs are persistent in the environment, moderately bioaccumulative and extremely toxic. Other NPEs are not as toxic and are less persistent, but are still highly toxic to aquatic organisms; moreover, these NPEs can degrade in the environment back into NP. The widespread and multiple uses of these chemicals mean they can enter our food chain and water supply, raising serious concerns about both human and environmental exposures. Human biomonitoring data reveals the presence of NP in human breast milk, umbilical cord blood, and urine. NP is a suspected endocrine disruptor (PDF) and has been shown to have estrogenic effects (PDF) in a number of aquatic organisms, human breast tumor cells, and laboratory rodents. A major concern for people is that bioaccumulation may occur from multiple sources of exposure. There’s also the potential for cumulative effects from exposure to NP and NPEs in combination with other endocrine disrupting chemicals. A specific health concern arising from NP’s estrogenic properties is the potential increased risk for breast cancer. Exposure to NP in high concentrations is extremely destructive (PDF) to the upper respiratory tract, eyes, and skin. Symptoms include coughing, wheezing, a hoarse voice, shortness of breath, headache, nausea, and vomiting. Prolonged skin contact with NP causes burns, irritation, and swelling. Where it is most commonly found: NPEs are used in a range of industrial and consumer applications, including detergents cleaners, degreasers, dry cleaning aids, wetting agents, paper and textile processing formulations, and prewash spot removers. They serve several functions: as wetting agents (which enable a liquid solution to spread evenly across surfaces), as emulsifiers (which allow normally immiscible liquids to mix together), as defoaming agents (which hinder the formation of foam in liquids) and as dispersants (which break up a liquid such as oil into small droplets or separate particles to prevent settling or clumping). NP and NPEs are produced in large volumes and have uses that can lead to widespread exposure. They’re found in water downstream from industrial facilities where they are used, and are present in both the sludge (solids) and effluents produced by sewage treatment plants. Such sludge is often used for agricultural purposes (PDF), consequently introducing the possibility of release into the food chain and water supply. NPEs have been detected in drinking water (PDF), and are among the toxic chemical used in natural gas hydro-fracking. Limited U.S. regulation: The European Union has already banned or severely restricted many uses of NP and NPEs. However, only last year did the U.S. EPA issue a chemical action plan to address the health risks associated with these chemicals. To date EPA has relied on voluntary cooperation from industry to help phase out the use of NP and NPEs in household laundry detergents, mainly through EPA’s Design for the Environment (DfE) Safer Detergents Stewardship Initiative. The DfE program incentivizes the production of safer products through a label system, which products for consumer or commercial purchase can earn if they meet certain safety criteria. 
Currently, the program’s impact is limited to the use of NPs and NPEs in household detergents, although industrial detergents remain a major source of NPEs to the environment. What should be done: It’s challenging to identify all of the products containing NPEs. For example, sometimes NPEs are identified in the ingredient list on the labels for certain personal care products and spermicides. However, they are rarely listed on household products like cleaners, detergents, and pesticides. These chemicals fall within the broader category of surfactants called alkyl phenol ethoxylates. They may be identified by a variety of names, including nonoxynol, nonylphenol polyethylene glycol ether, nonylphenoxypoly(ethylenoxy)ethanol, POE (n) nonyl phenol, POE (n) Nonyl Phenyl Ether, Antarox, Makon, and many others. In order for companies and the public to avoid NP and NPEs we need policies that require better public disclosure of chemical ingredients in products. As for all chemicals, EPA needs better information on NP and NPEs hazards, uses and exposures. It also desperately needs the authority to put in place mandatory restrictions on the use of such chemicals where appropriate. The Agency does not currently posses these powers, handicapping its ability to protect our health and our environment. Environmental and health organizations across the country have been calling for these types of reforms through the overhaul of the Toxic Substances Control Act (TSCA), the primary law meant to protect public health and the environment from toxic chemicals. You can help support TSCA reform by letting your legislators know that you care about this issue. Join I Am Not a Guinea Pig on Facebook and Twitter, and check back here (www.NotaGuineaPig.org) for updates. Will NP and NPEs keep you up worrying at night? What would you do about these chemicals?
<urn:uuid:22d93588-45b3-440e-b0c5-4126b27118e2>
CC-MAIN-2013-20
http://notaguineapig.org/2011/03/23/nonylphenols-and-nonylphenol-ethoxylates/
2013-05-22T07:55:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944262
1,174
Fallacies from Chapter 3 - 5469
False dilemma (either-or)
Description: Only 2 choices are offered when there are more — a false reduction of choices to just 2 (one of which is usually unacceptable) when there are other choices.
Forms: A or B; Either A or B; If not A then B; Unless A, B.
Indicator words: either, or, if not, unless, the only alternative, prefer.
How to explain: Name a 3rd choice or alternative belief or middle ground.
Example: You're either with us or you're with the terrorists.
1. The textbook defines a dilemma as an argument that presents 2 alternatives, both claimed to be bad. A false dilemma is a dilemma that can be shown to be false (going between the horns of the dilemma) by demonstrating there is at least one other viable possibility to the 2 choices offered, or by challenging one or both of its other 2 premises (grasping the horns of the dilemma). The either-or fallacy is sometimes called the black-or-white fallacy and is very similar to the false dilemma. The either-or fallacy occurs when there is an argument that assumes just 2 viable alternatives, one of which is bad, although there is at least one other viable alternative. NOTE: For this class, the either-or variation of the false dilemma fallacy will be named false dilemma.
2. Some people have a tendency to over-simplify an issue by reducing choices, since it's too hard for them to look for other alternatives or they have a vested interest in one of the choices.
3. False dilemmas are offered out of ignorance, laziness, as a diversion from the truth, or through a natural tendency to categorize.
4. Psychological explanation: The origin of black-or-white thinking is thought to come from childhood, where we had fairy tales, heroes and villains, and good guys and bad guys. This immaturity is then taken into adulthood because some people yearn for the security of childhood.
5. We can protect ourselves from someone who offers a false dilemma by always asking 'why not both' or 'why not neither'.
6. The Manichaeans were an early Christian sect whose views on good and evil have become interpreted in our time as the ultimate 'either-or' philosophy. Everyone and everything is either totally good or totally evil; no shades of gray, no mixed motives, no redeeming qualities in the wicked, no lapses of virtue in the saintly.
7. Either-or on the Internet
Examples:
1. You're either with us or you're with the terrorists.
Analysis: The 3rd choice is neutrality.
2. Either we ease up on environmental protection or we see our economy get worse.
Analysis: We could have both occur or neither could occur. Studies show that many environmental companies are profitable.
3. Unless you go to college and make something of yourself, you'll end up as an unhappy street person.
Analysis: This is the same as 'Either you go to college and make something of yourself or you'll end up as an unhappy street person.' Again, you could have both or neither occur. Many people who don't go to college do not end up on the street.
4. Either you are over 21 or you're not.
Analysis: Not a false dilemma; that is, no fallacy occurred, since there is no 3rd choice.
5. Either you believe abortion is murder or you don't. Now which is it?
Analysis: The 3rd choice is that the person doesn't know or hasn't made up their mind.
6. The differences in behavior between the sexes are either due to heredity or environment.
Analysis: The 3rd choice is that it is likely that both play a role.
7. If we don't keep the death penalty, then people will get off after a few years in prison and then parole; so, we shouldn't abolish the death penalty.
Analysis: The 3rd choice is life in prison without the possibility of parole.
8. Either we allow abortions or we force children to be raised by parents who don't want them.
Analysis: The 3rd choice is adoption.
9. The only alternative to a dictatorship is communism.
Analysis: A 3rd choice is democracy.
10. Either we cut spending or we'll increase the deficit.
Analysis: A 3rd choice is raising taxes.
11. Either we withdraw from Iraq completely or we stay the course.
Analysis: A 3rd choice would be removing most of our troops from Iraq at once, while reconfiguring the rest into small teams that would take the initiative in tracking the al Qaeda affiliates still trying to despoil the country.
<urn:uuid:bad845e7-3bf4-4416-8efc-05c6f3f22516>
CC-MAIN-2013-20
http://online.santarosa.edu/presentation/page/?36864
2013-05-22T08:19:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.942071
1,011
Publication: Research - peer-review › Journal article – Annual report year: 2012
Since Münch in the 1920s proposed that sugar transport in the phloem vascular system is driven by osmotic pressure gradients, his hypothesis has been strongly supported by evidence from herbaceous angiosperms. Experimental constraints made it difficult to test this proposal in large trees, where the distance between source and sink might prove incompatible with the hypothesis. Recently, the theoretical optimization of the Münch mechanism was shown to lead to surprisingly simple predictions for the dimensions of the phloem sieve elements in relation to that of fast-growing angiosperms. These results can be obtained in a very transparent way using a simple coupled resistor model. To test the universality of the Münch mechanism, we compiled anatomical data for 32 angiosperm and 38 gymnosperm trees with heights spanning 0.1–50 m. The species studied showed a remarkable correlation with the scaling predictions. The compiled data allowed calculating stem sieve element conductivity and predicting phloem sap flow velocity. The central finding of this work is that all vascular plants seem to have evolved efficient osmotic pumping units, despite their huge disparity in size and morphology. This contribution extends the physical understanding of phloem transport, and will facilitate detailed comparison between theory and field experiments.
Journal: Plant, Cell and Environment
Citations: Web of Science® Times Cited: 4
Keywords: Long-distance transport, Münch mechanism, Phloem, Scaling, Sieve elements, Sugar, Trees
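The abstract mentions calculating sieve element conductivity and predicting sap flow velocity. As a rough illustration of the kind of estimate involved — not the paper's actual method or numbers — the sketch below applies the Hagen–Poiseuille relation to a single open tube, with placeholder values for radius, sap viscosity, transport distance and driving pressure; real sieve tubes also contain sieve plates that add considerable extra resistance.

```python
# Rough Hagen-Poiseuille sketch of pressure-driven (Munch-type) phloem flow.
# The numbers below are illustrative placeholders, NOT values from the paper:
# the study derives conductivities from measured sieve-element anatomy.
radius = 10e-6        # assumed sieve-tube radius (m)
viscosity = 1.7e-3    # assumed sap viscosity (Pa*s), somewhat above that of water
length = 10.0         # assumed source-to-sink transport distance (m)
delta_p = 0.5e6       # assumed driving pressure difference (Pa)

# Poiseuille conductivity of a single open tube (ignores sieve plates).
conductivity = radius**2 / (8 * viscosity)      # m^2 / (Pa*s)
velocity = conductivity * delta_p / length      # mean flow speed, m/s

print(f"tube conductivity ~ {conductivity:.2e} m^2/(Pa*s)")
print(f"mean sap speed    ~ {velocity * 3600:.1f} m/h")
```

With these placeholder inputs the estimate comes out at around a metre per hour, the right general magnitude for measured phloem sap speeds, which is why a simple resistor-style model can be confronted directly with anatomical data.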
<urn:uuid:bf2bf4c2-673d-46e2-8fb4-bd3237dd633f>
CC-MAIN-2013-20
http://orbit.dtu.dk/en/publications/universality-of-phloem-transport-in-seed-plants(035114b7-1c09-4a09-9271-f4b93907c532).html
2013-05-22T08:36:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.866801
324
Pronation is a normal foot motion. The word refers to the action of the foot as you apply weight through the gait cycle. As the foot strikes the ground and weight rolls forward from the heel, through the arch, and onto the toes, a specific series of movements called pronation takes place. Essentially, the heel and ankle roll inwards after the heel strikes the ground, and as weight is transferred to the midfoot, the arch flattens out. Pronation is normal; a problem arises when there is overpronation. When a person overpronates, the arch remains flat and the ankle rolls too far inward as the toes begin to push off. This places increased stress on the muscles and ligaments of the foot. For more information: Overpronation
<urn:uuid:4244556f-63d7-45d8-9156-41c72c73c5bb>
CC-MAIN-2013-20
http://orthopedics.about.com/cs/sportsmedicine/g/pronation.htm
2013-05-22T08:01:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.90298
160
There are many explanations and definitions of scientific attitudes. Scientific attitudes are not just for scientists per se; a scientific attitude is a person's predisposition to solve problems or create solutions in a scientific way. Here are some scientific attitudes one should possess to be an effective researcher, scientist, teacher or inventor.
1. Curiosity - Shows interest in how things work, questions their validity, and seeks alternative answers and questions.
2. Open-mindedness - Accepts every idea, even those that contradict one's own, and tests their validity.
3. Objectivity - Shows no particular bias toward results, ideas or data, and no subjectivity in any test or experiment conducted. Whatever the result, it is recorded truthfully.
4. Honesty - Reports all results truthfully. No manipulation of data should be done just to please benefactors, stockholders, etc.
5. Humility - Being humble.
6. Creativity - Thinking outside of the box, creating new ideas and procedures.
7. Risk-taking - Being able to accept criticism and to be misjudged.
These are just a few examples of scientific attitudes.
<urn:uuid:6dd61544-64d4-40a3-9e99-5a4f65adc80d>
CC-MAIN-2013-20
http://philippinetambayan.com/2011/06/25/scientific-attitudes/
2013-05-22T08:27:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.895855
252
(AP) -- From pole to pole, surface to frigid depths, researchers have discovered thousands of new ocean creatures in a decade-long effort now nearing completion, and there may still be several times more strange creatures to be found, leaders of the Census of Marine Life reported Thursday at the annual meeting of the American Association for the Advancement of Science. The effort has "given us a much clearer window into marine life," said Shirley Pomponi, executive director of the Harbor Branch Oceanographic Institute at Florida Atlantic University in Fort Pierce. The research, which has involved thousands of scientists from around the world, got under way in 2000, and the final report is scheduled to be released in London on Oct. 4. Last fall the census reported having added 5,600 new ocean species to those already known. Ron O'Dor, a professor at Dalhousie University in Halifax, Canada, said there may be another 100,000 or more to be found. "Add microbes and it could be millions," he said. One benefit of learning more about ocean life is the chance of finding new medical treatments, Pomponi said. For example, a chemical discovered in deep water sponges is now a component of the cream used to treat herpes infections, Pomponi said. Other research is under way on pain killers and cancer treatments based on ocean life. Kristina Gjerde, of the International Union for the Conservation of Nature, in Konstancin-Chylice, Poland, said the research will help guide governments in setting up marine protected areas to preserve species both for food and of value for other reasons. O'Dor said the ocean is large and resilient, so that when a region is protected life there can rebound, "but we can't keep insulting the ocean." O'Dor noted that many people are concerned about the decline of tigers in the wild, and said the same may be true of great white sharks. Noting a marine census project that places sonar trackers on fish and marine mammals, O'Dor pointed to an Australian program that senses those trackers and warns people ashore when to close a beach because a shark is nearby. "See, we can coexist," he said. Huw Griffiths of the British Antarctic Survey told the gathering that Antarctic sea life is far more than penguins. There are 8,000 species there, most living on the bottom, he said, and they have found novel ways to survive the bitter cold. But global warming is changing conditions there, with a decline in ice that affects these species and others. Indeed, O'Dor noted that some squid formerly found only in tropical areas are now migrating to polar regions as climate changes. Jason Hall-Spencer of the Marine Institute at the University of Plymouth, England, warned that as the ocean absorbs more carbon dioxide from the air it becomes more acidic, which can kill some marine creatures, including corals.
Census of Marine Life: http://www.coml.org
Global Ocean Biodiversity Initiative: http://www.gobi.org
<urn:uuid:57a9c2ee-bd69-4cde-aa7e-2314b3b70d96>
CC-MAIN-2013-20
http://phys.org/news185779258.html
2013-05-22T08:12:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944333
653
The emerging science of gravitational wave astronomy is optimistically named. Astronomy depends ultimately on observations, yet the only output of gravitational wave detectors has so far been noise generated within the instruments. There is good reason, based on experimental and theoretical progress, to believe that things are about to change. As an example of progress on the theoretical side, Kenta Kiuchi of Waseda University, Yuichiro Sekiguchi of the National Astronomical Observatory, Masaru Shibata of Kyoto University (all in Japan), and Keisuke Taniguchi of the University of Wisconsin, US, report in Physical Review Letters simulations of neutron star mergers that reveal new details of the gravitational waves they are expected to emit . The effort to detect gravitational waves started humbly fifty years ago with Joe Weber’s bar detectors . Today the field is a thriving example of Big Science, including large facilities in the US (LIGO) and Italy (VIRGO), smaller installations in Germany (GEO ) and Japan (TAMA, LCGT), and potential future detectors in Australia (AIGO) and India (INDIGO). LIGO, the best funded and so far the most sensitive of these instruments, is preparing a major upgrade called Advanced LIGO. In parallel with the development of ground-based detectors, there has been substantial design progress for detectors in space. The principal example is LISA , which received effusive endorsement from the National Academy of Sciences: “LISA is an extraordinarily original and technically bold mission concept. The first direct detection of low-frequency gravitational waves will be a momentous discovery, of the kind that wins Nobel Prizes” . Space-based detectors will not likely be making that low-frequency () discovery for another ten years at least—not for lack of inherent sensitivity or progress in technology development, but rather because rapid deployment is not a characteristic of billion-dollar space research missions. Meantime, the effort to improve ground-based detectors, which operate at higher frequency ( to ), is proceeding apace. What is the motivation compelling some of us to spend entire careers building telescopes that have yet to see the gravitational equivalent of first light? The thrall of zero is one conceivable answer. That is, the absence of signals at predicted levels places real constraints on astrophysical phenomena, and ultimately could test Einstein’s theory of general relativity. But most of us would trade a thick stack of publications with “search for” in the title for a single thin “discovery of.” It’s not about zero. Rather, the case that LIGO and VIRGO are almost good enough to see signals stands up to scrutiny. This is not simple optimism: the detection prediction is derived from a synthesis of electromagnetic astronomy and astrophysical models of sources that seem inevitable. For many years the favored source of gravitational waves for ground-based detectors has been the inspiral of a compact binary system consisting of one neutron star plus a companion that is either another neutron star or a black hole (Fig. 1, top). The orbital motion generates gravitational radiation at a frequency that chirps as the orbit decays and speeds up. The chirp waveform can be calculated accurately from a handful of parameters such as the masses and spins of the two stars and the inclination angle of the orbital plane. 
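As an illustration of how few parameters control the inspiral part of the signal, the sketch below evaluates the standard leading-order ("Newtonian quadrupole") chirp relations for an assumed pair of 1.4-solar-mass neutron stars. These are only the textbook lowest-order formulas; the templates actually used in searches add post-Newtonian corrections, spins and other effects.

```python
# Leading-order inspiral chirp for a compact binary -- a standard textbook
# approximation, not the full post-Newtonian templates used by LIGO.  It shows
# how a handful of parameters (here just the two masses, via the chirp mass)
# fix the frequency evolution and the time remaining before merger.
import math

G = 6.674e-11          # gravitational constant (SI)
C = 2.998e8            # speed of light (m/s)
M_SUN = 1.989e30       # solar mass (kg)

def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def time_to_merger(f_gw, m_chirp):
    """Seconds of inspiral remaining once the GW frequency has reached f_gw."""
    k = G * m_chirp / C**3
    return (5.0 / 256.0) * k ** (-5.0 / 3.0) * (math.pi * f_gw) ** (-8.0 / 3.0)

def fdot(f_gw, m_chirp):
    """Rate of change of the GW frequency (Hz/s) at frequency f_gw."""
    k = G * m_chirp / C**3
    return (96.0 / 5.0) * math.pi ** (8.0 / 3.0) * k ** (5.0 / 3.0) * f_gw ** (11.0 / 3.0)

if __name__ == "__main__":
    mc = chirp_mass(1.4 * M_SUN, 1.4 * M_SUN)   # double-neutron-star example
    for f in (30.0, 100.0, 400.0):
        print(f"f = {f:5.0f} Hz: {time_to_merger(f, mc):8.2f} s to merger, "
              f"df/dt = {fdot(f, mc):10.3f} Hz/s")
```

At a gravitational-wave frequency of a few tens of hertz — roughly where ground-based detectors begin to be sensitive — this gives on the order of a minute of signal remaining before merger, consistent with the "last minute or so" of coherent integration described here. The full parameterization used in searches adds spins, orbital inclination and higher-order corrections.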
This waveform parameterization allows for coherent integration of the last minute or so of the life of the binary system, when the gravitational wave signal is strongest. In the 1987 proposal to the National Science Foundation that first described the LIGO concept, Kip Thorne had this to say about detection prospects in general: “The most certain of the sources is coalescence of neutron-star binaries: Estimates based on pulsar statistics in our own galaxy suggest that to see such events per year one should look out to distance. For supernovae the event rate is known to be roughly one each years in our own galaxy and several per year in Virgo, but the amount of radiation emitted is very uncertain. For black hole births, both the wave-emission efficiency and the distance to which one must look are highly uncertain.” Since then, refined calculations of supernova strength have moved that source lower on the list, and the rate of black hole births remains uncertain. But estimates of the instrument range required to see a neutron star binary inspiral [7, 8, 9, 10, 11] have held fairly stable with the discovery of a few more galactic radio pulsars in binary orbits. A current estimate of the merger rate within the galaxy, combined with the density of galaxies, translates into a required detector range that is somewhat more promising than the 1987 estimate. After the inspiral chirp comes the coalescence. The end stage of the neutron star binary is a complex explosion that takes as little as a few milliseconds: tidal disruption, core collapse, merger, and formation of the final-state black hole. The merger hypothesis holds that each short gamma-ray burst observed by satellites is generated at the instant of merger, perhaps from shock waves formed as the neutron star collapses. This hypothesis was supported by the identification of several short bursts in 2005 as originating beyond the galaxy. Longer gamma-ray bursts arise from different extragalactic events, including supernovae. A recent search for gravitational wave inspiral signals in the LIGO and VIRGO detectors used short gamma-ray bursts detected by gamma-ray and x-ray satellites as timing triggers. Analyzing data stretches preceding distinct gamma-ray bursts, the search found no signal, at high confidence, from a neutron star/black hole binary or from a double neutron star system out to the median distances probed. These numbers fall short of the range required for detection, but there is already data collection of higher sensitivity in progress, and further enhancements are planned after that run is finished. Beyond that, Advanced LIGO, with ten times the sensitivity and consequently ten times the range, should be in operation by 2015. The double neutron star inspiral range projected for Advanced LIGO should be adequate to see several events per year even at the pessimistic end of the source estimates. There is little doubt that the new window on the universe will finally be cracked open. Parameters derived from the waveform fit will constitute a rich vein of data: confirmation of the merger hypothesis, a survey of masses and spins of neutron stars, and, tantalizingly, a calibration-free measure of the source distance that can be used to measure dark energy. The results of Kiuchi and colleagues look beyond even Advanced LIGO. The authors present one of the first detailed simulations of the waves generated by the merger.
Figure 1 shows their simulation result for the precursor chirp and the merger itself. The chaotic-looking merger waveform contains information that is completely inaccessible to conventional astronomy. Information from even the gamma-ray burst is smoothed over by scattering within the matter that generated it, just as distance and dispersion transform the clap of lightning to the low roar of thunder. Gravitational waves, by contrast, are scatter-proof, and carry the merger signature with excellent time resolution. The authors model several different equations of state for the neutron star, and simulate the formation of the final black hole and associated disk for a wide range of parameters. They find, among other phenomena, that small spiral arms are formed around, and are eventually swallowed by, the black hole. Disappointingly, the merger signal occurs at high frequencies, where detector sensitivity is limited by photon shot noise. At the source distance considered, the peak strain amplitude for the equation of state used to generate Fig. 1 lies at a high frequency and is weaker by a substantial factor than Advanced LIGO could see. According to the authors, a subsequent generation of detectors now in the conceptual planning stage, such as the Einstein Telescope, will be needed to detect the merger. Then the theoretical calculations of the favorite source of ground-based detectors will finally be fully confronted with observational data. The author thanks Curt Cutler for helpful discussions. Note added by author (8 April 2010): Bruno Giacomazzo (University of Maryland) points out that the description of short gamma-ray bursts arising from core collapse is inaccurate, as core collapse is associated with a supernova and the accompanying long gamma-ray burst. Rather, the short gamma-ray burst is probably generated as a short-lived disk or torus surrounding the final state black hole accretes onto the black hole.
- K. Kiuchi, Y. Sekiguchi, M. Shibata, and K. Taniguchi, Phys. Rev. Lett. 104, 141101 (2010).
- J. Weber, Phys. Rev. 117, 306 (1960).
- B. C. Barish and R. Weiss, Phys. Today 52, No. 10, 44 (1999).
- P. L. Bender, K. Danzmann, and the LISA Study Team, “Laser Interferometer Space Antenna for the Detection of Gravitational Waves, Pre-Phase A Report,” doc. MPQ 233, Max-Planck-Institut für Quantenoptik, Garching, 1998.
- Beyond Einstein Program Assessment Committee and National Research Council, NASA's Beyond Einstein Program: An Architecture for Implementation (National Academy Press, Washington, DC, 2007).
- R. E. Vogt, R. W. P. Drever, K. S. Thorne, and R. Weiss, “Caltech/MIT project for a laser interferometer gravitational wave observatory,” Renewal proposal for NSF PHY-8504136, December, 1987.
- E. S. Phinney, Astrophys. J. Lett. 380, L17 (1991).
- C. Cutler and K. S. Thorne, in Proceedings of the 16th International Conference on General Relativity and Gravitation, Durban, South Africa, 2001, edited by N. Bishop and S. D. Maharaj (World Scientific, Singapore, 2002).
- V. Kalogera et al., Astrophys. J. Lett. 603, L41 (2004).
- D. Guetta and T. Piran, Astron. Astrophys. 435, 421 (2005).
- J. Abadie et al., arXiv:1003.2480.
- E. Nakar, Phys. Rep. 442, 166 (2007); arXiv:astro-ph/0701748v2.
- J. Abadie et al., arXiv:1001.0165.
<urn:uuid:7072e35f-89b7-49c2-bbed-3120dd53a6e6>
CC-MAIN-2013-20
http://physics.aps.org/articles/v3/29
2013-05-22T08:33:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.896592
2,231
Dever writes the following.
- ... those older than me who are complementarian generally want to downplay this issue, and those younger than me want to lead with it, or at least be very up front about it.
- The older group is among peers who see women's ordination as an extension of civil rights for people of different races.
Civil rights for people of different races was an extension of the recognition of equal spiritual authority for women. No matter what Dever's assessment is of the Quaker movement today, let him know that Quakers acknowledged the right of women to speak in the assembly before their anti-slavery thrust. In a post that I wrote in March, I mentioned Women Speaking, 1666, written by Margaret Fell, the wife of George Fox, founder of the Quakers. Here is the beginning of the Quaker Testimony to Equality.
- The Quaker testimony to equality stems from the conviction that all people are of equal spiritual worth. This was reflected in the early days of Quakerism by the equal spiritual authority of women, and by the refusal to use forms of address that recognised social distinctions. Equality is also a fundamental characteristic of Quaker organisation and worship, with the lack of clergy and any formal hierarchy.
- Before the eighteenth century, very few white men questioned the morality of slavery. The Quakers were among these few. The doctrines of their religion declared an issue such as slavery to be unjust. By 1775, the Quakers founded the first American anti-slavery group. Through the 1700s, Quakers led a strong-held prohibition against slavery.
- The Quakers' fight inspired growing numbers of abolitionists, and by the 1830s abolitionism was in full force and became a major political issue in the United States. The Quakers were radical Christians. They believed that all people were equal in the sight of God, and every human being was capable of receiving the "light" of God's spirit and wisdom. They also were against violence.
- Quakers were known for their simple living and work ethic. Therefore, to the Quakers, slavery was morally wrong. It was as early as the 1600s that Quakers began their fight against slavery, and thus the beginning of the abolitionist movement. They debated, made speeches, and preached to many people. By 1696, they made their first official declaration for abolitionism in Pennsylvania, in which they declared they were not going to encourage the importation of slaves.
How dare Christians of other denominations wrest the anti-slavery movement out of its rightful origin? Does no one remember how the Quakers were persecuted by other Christians for their anti-slavery actions? How long before those of us who are over 50 come to see the younger generation as revisionists and ideologues with no respect for fact? How many errors will it take before someone signs a few preachers up for History 101? Dever also writes this about the older group who wish to downplay complementarianism.
- Normal for the older group is evangelicals as upstanding members of the society. They are mayors and bankers and respected persons in the community. The tendency is natural to do what would be culturally acceptable, as much as is possible (parallel to John Rawls and his idea of publicly accessible reasons).
Update: My apologies to Dever. I have rethought this post and now admit that older complementarians are probably equally ignorant of the Quaker origins of the abolition movement.
<urn:uuid:89f956c0-528f-4b37-9688-c802dd883cc2>
CC-MAIN-2013-20
http://powerscourt.blogspot.com/2006_05_01_archive.html
2013-05-22T08:27:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.980417
725
- Main article: Hearing disorders
[Figure: cross section of the cochlea.]
Sensorineural hearing loss (SNHL) is a type of hearing loss in which the root cause lies in the vestibulocochlear nerve (cranial nerve VIII), the inner ear, or central processing centers of the brain. The Weber test, in which a tuning fork is touched to the midline of the forehead, localizes to the normal ear in people with this condition. The Rinne test, which compares air conduction to bone conduction, is positive (normal), though both bone and air conduction are reduced equally.
Sensorineural hearing loss can be mild, moderate, or severe, including total deafness. The great majority of human sensorineural hearing loss is caused by abnormalities in the hair cells of the organ of Corti in the cochlea. There are also very unusual sensorineural hearing impairments that involve the eighth cranial nerve (the vestibulocochlear nerve) or the auditory portions of the brain. In the rarest of these sorts of hearing loss, only the auditory centers of the brain are affected. In this situation, called central hearing loss, sounds may be heard at normal thresholds, but the quality of the sound perceived is so poor that speech cannot be understood.
Most sensory hearing loss is due to poor hair cell function. The hair cells may be abnormal at birth, or damaged during the lifetime of an individual. There are both external causes of damage, like noise trauma and infection, and intrinsic abnormalities, like deafness genes. Sensory hearing loss that results from abnormalities of the central auditory system in the brain is called central hearing impairment. Since the auditory pathways cross back and forth on both sides of the brain, deafness from a central cause is unusual. This type of hearing loss can also be caused by prolonged exposure to very loud noise, for example, being in a loud workplace without hearing protection, or having headphones set to high volumes for a long period.
Table 1. A comparison of sensorineural and conductive hearing loss
| Criteria | Sensorineural hearing loss | Conductive hearing loss |
| Anatomical site | Inner ear, cranial nerve VIII, or central processing centers | Middle ear (ossicular chain), tympanic membrane, or external ear |
| Weber test | Sound localizes to the normal ear | Sound localizes to the affected ear (ear with conductive loss) |
| Rinne test | Positive Rinne: air conduction > bone conduction (both are decreased equally, but the difference between them is unchanged) | Negative Rinne: bone conduction > air conduction (bone/air gap) |
Sensorineural hearing loss may be congenital or acquired.
- Lack of development (aplasia) of the cochlea
- Chromosomal syndromes (rare)
- Congenital cholesteatoma - squamous epithelium is normally present on either side of the tympanic membrane: externally, within the external auditory meatus (ear canal), and internally, within the middle ear. Within the middle ear the simple epithelium gradually transitions into the ciliated pseudostratified epithelium lining the Eustachian tube (now known as the pharyngotympanic tube), becoming continuous with the respiratory epithelium in the pharynx. Squamous epithelial hyperplasia within the middle ear behaves like an invasive tumour and destroys middle ear structures if not removed.
- Delayed familial progressive
- Ototoxic drugs
- Physical trauma - either due to a fracture of the temporal bone affecting the cochlea and middle ear, or a shearing injury affecting cranial nerve VIII.
- Noise-induced - prolonged exposure to loud noises (>90 dB) causes hearing loss which begins at 4000 Hz (high frequency). The normal hearing range is from 20 Hz to 20,000 Hz.
- Presbycusis - age-related hearing loss that occurs in the high frequency range (4000 Hz to 8000 Hz).
- Sudden hearing loss - Idiopathic (ISSHL: idiopathic sudden sensorineural hearing loss), H91.2
- Vascular ischemia of the inner ear or CN VIII
- Perilymph fistula, usually due to a rupture of the round or oval windows and the leakage of perilymph. The patient will most likely also experience vertigo or imbalance. A history of an event that increased intracranial pressure or caused trauma is usually present.
- Autoimmune - can be due to an IgE or IgG allergy (e.g., food)
- Autoimmune - a prompt injection of steroids into the ear is necessary.
- Cerebellopontine angle tumour (junction of the pons and cerebellum) - the cerebellopontine angle is the exit site of both the facial nerve (CN VII) and the vestibulocochlear nerve (CN VIII). Patients with these tumors often have signs and symptoms corresponding to compression of both nerves.
- Ménière's disease - causes sensorineural hearing loss in the low frequency range (125 Hz to 1000 Hz). Ménière's disease is characterized by sudden attacks of vertigo lasting minutes to hours, preceded by tinnitus, aural fullness, and fluctuating hearing loss.
Long-term exposure to environmental noise
Populations living near airports or freeways are exposed to levels of noise typically in the 65 to 75 dB(A) range. If lifestyles include significant outdoor or open-window conditions, these exposures over time can degrade hearing. The U.S. EPA and various states have set noise standards to protect people from these adverse health risks. The EPA has identified the level of 70 dB(A) for 24-hour exposure as the level necessary to protect the public from hearing loss (EPA, 1974).
- Noise-induced hearing loss (NIHL) typically is centered at 4000 Hz.
- The louder the noise is, the shorter the safe amount of exposure is. Normally, the safe amount of exposure is reduced by a factor of 2 for every additional 3 dB. For example, the safe daily exposure amount at 85 dB is 8 hours, while the safe exposure at 91 dB(A) is only 2 hours (National Institute for Occupational Safety and Health, 1998). Sometimes, a factor of 2 per 5 dB is used. (A short worked calculation of this exchange-rate rule appears below, after the list of causes.)
- Personal audio electronics, such as iPods (often reaching 115 decibels or higher), can produce powerful enough sound to cause significant NIHL, given that lesser intensities of even 70 dB can also cause hearing loss.
Hearing loss can be inherited. Both dominant and recessive genes exist which can cause mild to profound impairment. If a family has a dominant gene for deafness, it will persist across generations because it will manifest itself in the offspring even if it is inherited from only one parent. If a family has genetic hearing impairment caused by a recessive gene, it will not always be apparent, as it will have to be passed on to offspring from both parents. Dominant and recessive hearing impairment can be syndromic or nonsyndromic. Recent gene mapping has identified dozens of nonsyndromic dominant (DFNA#) and recessive (DFNB#) forms of deafness.
- The most common type of congenital hearing impairment in developed countries is DFNB1, also known as Connexin 26 deafness or GJB2-related deafness.
- The most common dominant syndromic forms of hearing impairment include Stickler syndrome and Waardenburg syndrome.
- The most common recessive syndromic forms of hearing impairment are Pendred syndrome, large vestibular aqueduct syndrome, and Usher syndrome.
- MT-TL1 mutations cause hearing loss, along with diabetes and other symptoms.
Disease or illness
- Measles may result in auditory nerve damage.
- Meningitis may damage the auditory nerve or the cochlea.
- Autoimmune disease has only recently been recognized as a potential cause for cochlear damage. Although probably rare, it is possible for autoimmune processes to target the cochlea specifically, without symptoms affecting other organs. Wegener's granulomatosis, an autoimmune condition, may precipitate hearing loss.
- Autoinflammatory disease, such as Muckle-Wells syndrome, can lead to hearing loss.
- Mumps (epidemic parotitis) may result in profound sensorineural hearing loss (90 dB or more), unilaterally (one ear) or bilaterally (both ears).
- Presbycusis is deafness due to loss of perception of high tones, mainly in the elderly. It is considered by some to be a degenerative process, although there has never been a proven link to aging. (See the impact of environmental noise exposure above.)
- Adenoids that do not disappear by adolescence may continue to grow and may obstruct the Eustachian tube, causing conductive hearing impairment and nasal infections that can spread to the middle ear.
- AIDS and ARC patients frequently experience auditory system anomalies.
- HIV (and subsequent opportunistic infections) may directly affect the cochlea and central auditory system.
- Chlamydia may cause hearing loss in newborns to whom the disease has been passed at birth.
- Fetal alcohol syndrome is reported to cause hearing loss in up to 64% of infants born to alcoholic mothers, from the ototoxic effect on the developing fetus plus malnutrition during pregnancy from the excess alcohol intake.
- Premature birth results in sensorineural hearing loss approximately 5% of the time.
- Syphilis is commonly transmitted from pregnant women to their fetuses, and about a third of the infected children will eventually become deaf.
- Otosclerosis is a hardening of the stapes (or stirrup) in the middle ear and causes conductive hearing loss.
- See also: Ototoxicity. Extremely heavy hydrocodone (Vicodin) abuse is known to cause hearing impairment. There has been speculation that radio talk show host Rush Limbaugh's hearing loss was at least in part caused by his admitted addiction to narcotic pain killers, in particular Vicodin and OxyContin. [needs citation]
- There can be damage either to the ear itself or to the brain centers that process the aural information conveyed by the ears.
- People who sustain head injury are especially vulnerable to hearing loss or tinnitus, either temporary or permanent.
- Exposure to very loud noise (90 dB or more, such as jet engines at close range) can cause progressive hearing loss. Exposure to a single event of extremely loud noise (such as explosions) can also cause temporary or permanent hearing loss. A typical source of acoustic trauma is a too-loud music concert.
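The 3-dB exchange rate quoted in the noise-exposure bullets above lends itself to a one-line calculation. The sketch below is an illustration added to this article, not part of the original text; it assumes the NIOSH-style criterion described there (an 8-hour allowance at 85 dB(A), halved for every additional 3 dB) and exposes the occasionally used 5-dB exchange rate as a parameter.

```python
# Illustrative only: permissible daily noise exposure under an assumed 85 dB(A) / 8 h criterion.
def allowed_exposure_hours(level_dba, criterion_dba=85.0, reference_hours=8.0, exchange_rate_db=3.0):
    """Allowed daily exposure (hours): halves for every exchange_rate_db above the criterion level."""
    return reference_hours / 2.0 ** ((level_dba - criterion_dba) / exchange_rate_db)

for level in (85, 88, 91, 94, 100, 115):
    three_db = allowed_exposure_hours(level)
    five_db = allowed_exposure_hours(level, exchange_rate_db=5.0)
    print(f"{level} dB(A): {three_db:.3f} h (3-dB rule), {five_db:.3f} h (5-dB rule)")
```

At 91 dB(A) this reproduces the 2-hour figure cited above, and at the 115-decibel level mentioned for portable players the 3-dB rule allows well under a minute per day.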
Sensorineural hearing loss has typically been treated with hearing aids, which amplify sounds at preset frequencies to overcome a sensorineural hearing loss in that range, or with cochlear implants, which stimulate the cochlear nerve directly. Some audiologists and ENTs have reported that if severe noise-induced hearing loss (exposures exceeding 140 dB) is treated immediately (within 24 hours) with a course of steroids, it can often be almost completely reversed. This, however, is a new field without proven success. Researchers at the University of Michigan report that a combination of high doses of vitamins A, C, and E, and magnesium, taken one hour before noise exposure and continued as a once-daily treatment for five days, was very effective at preventing permanent noise-induced hearing loss in animals.
- Sound Output Levels of the iPod and Other MP3 Players: Is There Potential Risk to Hearing?. URL accessed on 2007-11-20.
- Frequently Asked Questions: Etiologies and Causes of Deafness. URL accessed on 2006-12-02.
- Hearing Loss News and Articles: Sonic tonic.
- Sergi, Bruno (2006). Neuroreport 17 (9): 857–861.
- Haynes, David S. (2009). The Laryngoscope 117 (1): 3–15.
- http://www.hearinglossweb.com/Medical/Causes/nihl/prtct/nutr.htm
- A free online hearing test to measure your ears' high frequency response
- Hearing Loss Web
- Sensorineural Hearing Loss, Dr Peter Grant
This page uses Creative Commons Licensed content from Wikipedia (view authors).
<urn:uuid:39d681ac-d1b2-42d9-98ca-87a35c84a1ba>
CC-MAIN-2013-20
http://psychology.wikia.com/wiki/Sensorineural_hearing_impairment
2013-05-22T08:12:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.896351
2,710
Liberal is a political term that has a number of meanings, depending on context. - Europeans typically use the term to describe politics that draw on classical liberalism's basic touchstone of the free individual operating in a free market economy, a notion similar to modern libertarianism. - In the United States, however, the word is typically used to describe politics on the left side of the political spectrum, and has been used by the GOP as a synonym for socialist. This is partially due to their use of bullshit scare tactics and partially a legacy from the second Red Scare, when some people with communist sympathies, not wanting to state their affiliation openly, called themselves "liberals" or "progressives" instead. When talking about social issues, "liberal" is usually applied to people who favor fewer restrictions on the individual's right to choose how to live her or his life. Thus liberals tend to support such things as gay marriage and reproductive freedoms, such as abortion and birth control. See also - Conservative, the traditional antonym - Age of Enlightenment, what created it - Liberal Party, the various parties explicitly using the name - Democratic Party, the US party labeled "liberal" since the 20th century and onward - Combat Liberalism, a 1937 pamphlet published by Mao Tse-Tung - Social democracy, what liberalism in the US sense seems to be about - Neoliberalism, something liberals in the US sense seem not to like very much - Ordoliberalism, a slightly buffered version of social democracy - Really embarrassing liberals
<urn:uuid:d82729fe-323b-488e-85d9-359c693172de>
CC-MAIN-2013-20
http://rationalwiki.org/wiki/Liberal
2013-05-22T08:19:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.948177
320
The low spots are filled with marshmallow fog, with the peaks clear in the distance.
Triple Footprint - You heard it here first. The production of "things" in our daily lives has a "cost". You've heard of carbon footprint, and maybe heard of chemical footprint, but social footprint might be new to you. These are basically the baggage a product carries with it: the carbon required from start to finish, the chemicals used and contained during production, and the human touch required. Considering the Triple Footprint will give you a better understanding of what's behind the green choices you make.
Products can have a very varied history in terms of how much energy was used in their production and transportation. If something is complicated, with lots of parts and metals, then you can see that even the energy used in mining the materials adds up. A product with a small carbon footprint would be made out of something sustainable and local. You can measure your own carbon impact at one of the many sites available.
Chemical footprint considers two things - the chemicals that are used for the entire production of a product and the final list of ingredients. Chemical footprint also includes the chemicals left behind. Most folks don't realize that paper and fabric production require huge amounts of chemicals. Reading the list of ingredients also gives you an idea of the product's chemical footprint. Remember, less is more. Look for simple, vegetable-based ingredients when considering the chemical footprint of personal care products.
Social footprint is getting more attention through organizations such as fair trade, but fair-trade goods are still only a small fraction of consumer products. Social footprint should be considered since the human touch is responsible for everything brought to market. So many faces and stories go behind the making of what we buy and use. It's easy to see the social footprint of some things, but other items, like a light bulb, get complicated. I think everyone would agree that all people deserve fair living wages and healthy, safe working conditions. In this global economy, equalizing working conditions and wages will bring about peaceful exchanges between people and countries for a more secure future.
The green industry has embraced the idea of the Triple Footprint and uses it to separate itself from greenwashing, or businesses trying halfheartedly to jump on the bandwagon. Stonyfield Yogurt gets the gold star for working the Triple Footprint to the max. Considering the Triple Footprint is what will bring us towards a sustainable future. Considering the Triple Footprint will help you decide green from really green.
<urn:uuid:ef11db10-ede7-40c7-a462-d8217a6f3674>
CC-MAIN-2013-20
http://realgreengirl.blogspot.com/2008/07/really-green-consider-triple-footprint.html
2013-05-22T07:54:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951709
501
The Coherent Curriculum: Integrating and Connecting Learning
In the process of using a comprehensive inventory, participants work with specific models that shape the integration of the curricula in myriad ways. Included in the models are ways to prioritize curricular concerns, methods for sequencing and mapping curricular content, templates for webbing themes across the disciplines, techniques for threading life skills into all content areas, and ways to immerse students in content that is self-selected and personally relevant. By integrating the curriculum, teachers are compelled to "cluster" content and process standards into meaningful, purposeful, authentic performance tasks.
How to Integrate the Curricula
In the process of using a comprehensive inventory, participants work with specific models that shape the integration of the curricula in myriad ways. Included in the models are ways to prioritize curricular concerns, methods for sequencing and mapping curricular content, templates for webbing themes across the disciplines, techniques for threading life skills into all content areas, and ways to immerse students in content that is self-selected and personally relevant. By integrating the curriculum, teachers are compelled to "cluster" content and process standards into meaningful, purposeful, authentic performance tasks.
Integrating Big Ideas that Thread the Curricula
Find the nuggets embedded throughout the curriculum for making connections within and across the disciplines. In this highly interactive workshop, participants discover simple templates to use as they thread higher order thinking skills and robust concepts into every area of the curricula. Using these user-friendly strategies, teacher teams leave the session with simple tools to create a more connected curriculum. In turn, students benefit as they see explicit connections within, between, and across subject matter content.
Problem Based Learning
By framing learning around authentic, real-world problems, students are challenged in relevant and meaningful ways. Experience the problem based learning approach to inquiry learning and inviting investigations. Walk through the steps that include: meeting and defining the problem; gathering facts and data; hypothesizing and researching; generating alternatives; and advocating solutions. Learn to use the stakeholder role as the key to student involvement with these statements: "You are…? You will…" and watch your curriculum come to life.
Standards of Learning: Design with the End in Mind
The standards are not the curriculum! Standards are the goals of the curriculum. To meet the overwhelming number of student learning standards, the use of robust and rigorous performance tasks is needed. In this session, participants will explore the idea of designing learning with the end in mind. It is a process as simple as 1, 2, 3. One! Standards as the goals of curriculum. Two! Performance Tasks to provide the evidence of learning. Three! Scoring Rubrics for judging the quality of learning. Learn how to implement these three simple steps to create rich, relevant, and real-life learning for the K-12 classroom. Leave with the tools for immediate, back-home use.
"We Deliver" was a brilliant elementary school theme that paralleled a promotion sponsored by the US Postal System. Writing across the curriculum was the primary focus, yet every discipline became integrated into the overall theme.
Each classroom, in addition to writing and corresponding on a daily basis, designed its own postage stamp, disseminated its own mail from the classroom mailbox, and provided weekly workers for the sorting tasks in the school post office. Students played the national anthem on the recorders in music class; they learned about languages from around the world; they created maps and postal zones and calculated costs for mail service and post office budgets. With a big idea as the central theme, the curriculum came alive for these youngsters as they used their learning in real and relevant ways.
Topics (space), concepts (structures), events (visiting artist), projects (science fair), novels (The Phantom Tollbooth), films (Around the World in 80 Days) and songs (Scarborough Fair) provide the fodder for finding rich, robust and relevant themes. Used as umbrella themes for re-conceptualizing and re-organizing curriculum, these big ideas create a cohesive and cogent pattern for curriculum planning. Six steps that create the acronym THEMES are developed as participants begin to design exciting thematic units for K-8 classrooms:
Think of themes
Hone the list
Extrapolate the criteria
Manipulate the theme
Expand into activities
Select goals and assessments
Participants begin with a "big idea theme" and end with a billowing umbrella of learning activities that cluster the content standards into authentic, lively learning scenarios. Students begin with authentic learning models and leave with deep understandings about the concepts and skills embedded in the thematic unit.
Technology: A Learning Tool and a Teaching Tool
Technology creates a dynamic duo as a professional learning tool and as a powerful teaching tool. On one hand, technology empowers teachers as learners with online professional development courses. On the other hand, technology enhances classroom instruction for increased student achievement.
Online Professional Development (OPD) is a friend to teachers for a number of reasons:
• Most states require teachers to earn continuing education units
• OPD is flexible, convenient, and cost effective
• Teachers prefer an learning
• With choices, teachers have "buy in".
Online options for classroom instruction offer significant student benefits:
• Reinforcement for flexible skill groups
• Research tools for classroom investigations
• Integrating tools for classroom projects
• Preparation tools for polished presentations
Technology Walks the Data Talk
Understanding the role of assessment data in determining appropriate instructional strategies is the challenge of this generation of teachers. Learn a simple, effective process that focuses the "data dialogue" toward productive problem solving. Framed by three research-based components of managed data, meaningful teams and measurable goals, teachers address four critical questions: What? What else? So, what? Now, what? Leave this interactive session with the tools and the confidence about data-driven instruction.
• Technology-Managed Data: What Data?
• Meaningful Team Dialogues: What Else Do We Know? Need to Know
• Measurable Student Goals: SMART GOALS
<urn:uuid:90a8c22d-d420-4071-86b8-9285ac0e573b>
CC-MAIN-2013-20
http://robinfogarty.com/curriculum-84.html
2013-05-22T08:27:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.917978
1,285
What’s with the news: So, a while back, a bunch of scientists in Antarctica were digging stuff up, and they happened to stumble upon some fossil pollens that prove the existence of a near-tropical rainforest that covered the continent millions of years ago. Turns out that the average temperature at the time was about 68 degrees Fahrenheit, which might as well be a million degrees Fahrenheit when you consider that at the time of publishing this column it is a balmy minus 44 F in McMurdo, Antarctica (where the Polar Plunge is simply an exercise in Darwinism). What does this all mean in terms of climate change? Is it proof that anyone who doesn’t drive a hybrid is Pinkie-and-the-Braining human existence into oblivion? Or is climate change just a bunch of hippie (vegan) baloney because, clearly, Earth has been up to these sorts of shenanigans since well before we humans ever showed up and started posting handmade patio furniture on Pinterest?
What’s with us: Ian Faloona was a postdoctoral fellow in chemistry and microscale meteorology at the National Center for Atmospheric Research. He currently serves as associate professor and bio-micrometeorologist for the UC Davis Department of Land, Air, and Water Resources. Frankly, I only have half a clue what any of that even means, but he’s here to explain the crap out of this rainforest.
“This geologic period is of particular interest because it is believed to be the warmest of the past 65 million years and could hold clues as to how the future Earth might behave in the event of unabated fossil fuel emissions of CO2,” said Faloona regarding the importance of the findings. “During this period it is estimated that the global temperatures were about 10 C warmer than present, and the levels of CO2 are believed to have been more than three times greater than now. The evidence of tropical vegetation on Antarctica then tells us something about how the Earth responded during this very warm period. These must have been some desperate trees that in fleeing the heat of the tropics they were willing to live through the darkness of an Antarctic winter. It should be kept in mind that this warm period of about 55 million years ago, referred to as the PETM, Paleocene-Eocene Thermal Maximum, also ushered in a mass extinction that heavily influenced the evolution of early mammals. (Image by: Pavel.Riha.CB via Wikimedia Commons)
“In the case of the PETM, it is believed that a period of intense volcanic activity, associated with the tectonic drift creating the North Atlantic Ocean, released huge amounts of CO2 into the atmosphere, which caused the intense warming. Over these long geologic spans, the Earth can recover because CO2 is consumed by chemical reaction with surface rocks (chemical weathering). So in the broadest sense, on these vast time scales, it's mostly volcanism and rock weathering that cause and control the large swings in CO2.
“But here's the rub: In the past, some have considered this past warm period as proof that what we're doing now, even if we continue to emit CO2 in huge quantities, is not that different from a past state of the planet. However, recent work has shown that the buildup to the very warm period of the PETM was accomplished by CO2 emissions that are about one-tenth as large as what we are currently emitting. So, sure, the climate was warm, but it took over 20,000 years to build up that much CO2.
We are now pushing on the climate much harder than ever before, with consequences that are not fully understood, but are most likely unprecedented in Earth's entire history. It's a very impressive feat. “We cannot completely control climate change. We cannot control volcanoes or asteroids (not yet, at least), but the pace at which we are increasing atmospheric CO2 is much greater than ever before on Earth. The chances that this will not have a large impact on the climate are pretty slim. And the big change is thought to be coming in the course of the next few human generations because of the rapid rate of input. So, while there are always things in life that you can control and plenty of things you can't, it seems utterly foolish not to control those within your power. “Given what we know about Earth's rich and storied past, I do not think there is any question that she will persevere,” Faloona said, in closing. “The question is, as it usually is in the course of human affairs, how will we be able to survive, and what kind of life might we all collectively share on the planet.” So, just because climate change might be based on naturally occurring phenomena does not mean that your refusal to turn the bathroom light off isn’t helping to push things along quite nicely. Way to go, jerkface. What’s with the news: Hey guys, did ya’ll catch the Republican National Convention? Didja see Clint Eastwood talk to a chair? Did you sit back and reflect upon what Mitt Romney has to offer this country in comparison to President Obama? Or did you watch a 6-year-old pageant princess mix and slug a Red Bull and Mountain Dew cocktail? The RNC came through with some respectable ratings, nabbing second place on the evening of Wednesday, Aug. 29. First place? Well, that went to former "Toddlers and Tiaras" contestant Alana Thompson and her TLC reality series “Here Comes Honey Boo Boo.” I guess Honey Boo Boo is her nickname, because this show wants me to hate it before I even have the chance to watch it. Now, I’m not trying to get into whether you or I bleed red or blue, dear reader. I don’t care if you heart elephants or donkeys or whatever it was Eastwood was smoking moments before he took that stage. But, and correct me if I’m wrong here, this whole election is supposed to be a big deal, right? We’re talking babymurdercivilrightsunemploymentepidemic big deal. And the RNC represents half of the people involved in this ongoing debate. Shouldn’t we be paying attention? (Image by: Gage Skidmore via Wikimedia Commons modified by Jared Banta) What’s with us: Dr. Debra Moore is a licensed psychologist, as well as founder and director of Fall Creek Associates. She has served in the past as president of the Sacramento Valley Psychological Association, and is on board to offer her thoughts on our (seriously disheartening) viewing patterns. “Humans love a good story,” Moore said, regarding the popularity of shows like HCHBB. “All cultures have traditions that revolve around the retelling of universal themes. Television is the modern version of folklore, the Greek myths, the 19th century melodrama and so on. Reality television is the latest incarnation. “We're drawn to common motifs such as stories about coming of age, the triumph of the underdog and good winning out over evil. Reality TV capitalizes on this. Also, people are drawn to what they are already familiar with and relate to. 
If they're unfamiliar or uneducated about political issues being debated, and there are no other obvious ‘human dramas’ surrounding a particular politician, they may not feel a connection to the process. “For people who are informed and believe the issues directly impact their lives,” said Moore, in closing, “the political process becomes compelling. And if the speakers embed the issues within ‘a good story,’ people form an especially strong personal connection to the political process.” So, perhaps if Jerry Springer had hosted the RNC, and thrown in, say, an illegitimate children or two, and exposed someone’s history of alcoholism, people would have tuned in. Good to know. I’m going to dig a hole and shove my head into it now. What’s with the news: Nerd alert! GameStop, the largest video game retailer in the galaxy, is going hipster by adding vintage games to their inventory. Someone call Horders, this shit is an untapped market. I’ll level with you — I'm terrible at video games. That little guy in the cloud is always giving me the hairy eyeball for driving the wrong way in my Mario Kart, and any game with even remotely realistic graphics that includes guns, zombies, or scantily clad women frightens me far more than it awakens my competitive edge. My awfulness knows no console. I suck on Nintendo, PlayStation, Xbox, and any arcade game I come near. In short, I have no idea what I am talking about. What’s with us: You know who does? Johnny Flores — illustrator, co-owner of Sacramento-based mobile gaming party company Event Gaming, and trusted What’s With That ally. Regarding the popularity of vintage games, Flores had this to say: “When my buddy and I set up our business, one of the priority purchases we made was a vintage Nintendo Entertainment System, specifically because of how much people, not just ‘gamers,’ love this system. People of all ages, from tiny children who have grown up in the era of the gorgeous graphic power of the PS3, to baby boomers whose own adult offspring played these games as children, love to pick up the old rectangular controller of the NES and play Super Mario Bros. 3, Mike Tyson’s Punch-Out!!, or any of the old school titles. We love watching smiles creep over a person’s face when they see these old game titles on our larger HD TV. “That being said, I’m not a big supporter of some corporation like GameStop going out and buying up these games, which you can usually pick up from just a few bucks to around $20, and jacking the prices of these games up beyond what they go for now. Of course, people may still find these games in a local store, or through an eBay store, that has lower prices. (Image by: david cussac via sxc.hu) “One thing we’ve learned the hard way is that some of the cartridges are so old that the metal teeth that connect the game to the console, where the data transfers so that you can play the game, are worn down, and the game doesn’t work. Perhaps they have technicians that can fix this, but we haven’t figured out a way to do that. They’re also doing battle with PlayStation, Xbox, and Nintendo, making these games available for download via the networks these consoles have. Although, there’s nothing like actually seeing these old cartridges and being able to pick them up in your hand. “Rumor has it,” said Flores, “the next generation of consoles are likely to be releasing new games as downloadable content only, basically circumventing the need for a disc, because GameStop’s used game discs have put such a dent in the game publishers’ income. 
These modern games are ridiculously expensive to produce and the publishers have been complaining for some time to the console manufacturers about this lost revenue. "Perhaps they’re pursuing this new market because they can see the writing on the wall." Yay for downloadable content! I will say that I rocked the shit out of Donkey Kong Country for Super NES. What a wonderful world it would be if I could get that game for my Netflix Box (or Nintendo Wii, for those of you who don’t live in my apartment). Then again, I might never leave the house. Each week "What's With That" will find local experts from the Sacramento area to weigh in on national and international news stories. Stumble across an interesting item? Wondering, "What's WITH that?" Email [email protected] with your ideas! Or, if you’d like to be added to the WWT mailing list, send me an email with the subject line “LIST.”
<urn:uuid:931092a8-0a16-42bb-ab3e-f04414e96cd4>
CC-MAIN-2013-20
http://sacramentopress.com/headline/73225/Arctic_rainforests_reality_TV_and_vintage_gaming
2013-05-22T08:12:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.957031
2,569
An incredible image of Saturn’s moon Enceladus venting jets of ice: What’s happening on the surface of Saturn’s moon Enceladus? Enormous ice jets are erupting. Giant plumes of ice have been photographed in dramatic fashion by the robotic Cassini spacecraft during this past weekend’s flyby of Saturn’s moon Enceladus. Pictured above, numerous plumes are seen rising from long tiger-stripe canyons across Enceladus’ craggy surface. Several ice jets are even visible in the shadowed region of crescent Enceladus as they reach high enough to scatter sunlight. Other plumes, near the top of the above image, appear visible just over the moon’s sunlit edge. That Enceladus vents fountains of ice was first discovered on Cassini images in 2005, and has been under close study ever since. Continued study of the ice plumes may yield further clues as to whether underground oceans, candidates for containing life, exist on this distant ice world.
<urn:uuid:1d47e621-030c-4e4f-843e-6a0781e79498>
CC-MAIN-2013-20
http://silver-rockets.com/2009/11/enceladus-venting/
2013-05-22T08:26:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.918926
227
Have a prosperous and good year! Chinese New Year is upon us. The Chinese New Year is a holiday that marks the end of winter and the beginning of spring. The holiday is rich with cultural traditions that make a great learning experience for students. We study the customs and traditions of the holiday. The children love hearing about how bad luck is swept out of the home to prepare for the new year. Scissors and sharp objects are put away so as not to cut off the good fortune of the new year. The celebration has many interesting facets such as the dancing lions and dragons, firework festivals, and the lantern festival that marks the end of the fourteen-day New Year celebration.
<urn:uuid:fb006943-c69b-4aa1-b14c-f4b69c3611ba>
CC-MAIN-2013-20
http://smartypantsjohnston.blogspot.com/2011/01/gung-hay-fat-choy.html
2013-05-22T08:34:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951733
135
Edward Forbush in A Natural History of American Birds, 1925, described the reaction of people to the “flying wedge” of Canada Geese which “brings to all who see or hear the promise of another spring. The farmer stops his team to gaze; the blacksmith leaves his forge to listen as that far-carrying clamor falls upon the ear; children leave their play and eagerly point to the sky where the feathered denizens of the northern wilderness press steadily on toward the pole, babbling of the coming of spring .... Coming after the long, cold winter, not even the first call of the bluebird so stirs the blood of the listener.” Forbush wrote when most Canada Geese still migrated to and from their breeding grounds, and when those breeding grounds were mostly north of the U.S.-Canadian border. He described it as “a distinctly American bird.”
A combination of factors has contributed to Canada Geese becoming so common and widespread that some birders now refer to them as “pond starlings,” an unflattering comparison to the European Starling, an introduced species whose abundance, habits, and characteristics make it a pest bordering on a pestilence. In the case of the Canada Goose, native intelligence and adaptability have combined with human ineptitude to create a population explosion and a goose-human conflict.
Habitat loss and over-hunting caused the population of Canada Geese to hit a low point in the 1920s and 1930s. Over a period of years, a variety of steps were taken which created the situation we have today. Unlike many species, migration among Canada Geese is a learned trait, not an inherited trait. Some species of shore birds, for example, hatch their young, and then the adults head south. Some time later, the first-year birds make their own way to the same wintering grounds to which their parents flew. Clearly, migration for these birds is encoded in their genes. But this is not the case with Canada Geese. The geese “learn” to migrate, and when the opportunity presents itself, they “relearn” their migration.
Scott Weidensaul (in Living on the Wind, 1999) describes how “wildlife management” affected goose migration: “Canada Geese from the central Arctic always wintered along the lower Mississippi, but in 1927 the state of Illinois converted thousands of acres of rich bottomland into a waterfowl refuge; within a few years, half of the geese on the central flyway were stopping there instead of continuing south. Then in 1941, the federal government opened the enormous Horicon National Wildlife Refuge even farther north, in Wisconsin, employing the same mix of ponds, lakes, and crops to shortstop the fickle geese that so recently had favored Illinois. The original migration to Louisiana and Arkansas, meanwhile, had dried up ....”
With food available, the geese did not have to make the long, arduous flight. But the elevation of the Canada Goose to “pond starling” status really began in the 1960s and 1970s. The intent was to restore the species to the wildlife refuges and establish it in parks. Wing-clipped geese were introduced for this purpose. The effort was successful beyond all expectations, but ... the new populations of geese were non-migratory. The original wing-clipped birds couldn’t migrate; their offspring and the succeeding generations never learned to migrate, and the adaptability of the species precluded the need to migrate. In Pennsylvania, for example, there were historically no breeding Canada Geese.
During the 1990s, the permanent, non-migratory population of Canada Geese in that state grew to more than 200,000 birds. Again, Weidensaul: “What had started as small, picturesque flocks in widely scattered locations became larger, messier, more widespread, until by the 1990s it was hard to find a body of water without geese. The resident population in the East ... has been growing at the light-speed rate of 17 percent a year since the late 1980s ....”
Meanwhile, the number of breeding pairs in the Atlantic population in northern Quebec plummeted, causing the U.S. Fish and Wildlife Service to close fall Canada Goose hunting in order to preserve the breeding population. This in turn has had economic repercussions and has disabled efforts at controlling the non-migratory goose population.
The explosion of the Canada Goose population also has a negative effect on other waterfowl species. Their size and aggressiveness drive away other nesting ducks. The beaver pond near my home has had no nesting Wood Ducks or Mallards for the last three years. Instead, Canada Geese have laid claim to the pond.
Thirty years ago, when the first Vermont Breeding Bird Atlas was done, the Canada Goose was a confirmed breeder in only a few survey areas in the entire state; nearly all of those were in the Champlain lowlands. During the second VBBA, 2003-2007, the Canada Goose was one of the first species to be confirmed in many, if not most, survey blocks throughout the state. Nearly every survey block in Windham County had confirmed breeding of Canada Goose. Thirty years ago, the Canada Goose was not reported anywhere in Windham County as a possible breeder, much less a confirmed breeder.
The Putney Mountain Hawk Watch often sees the “v” formation of geese overhead, occasionally with Snow Geese mixed in. These are probably birds from the Canadian north who are migrating south for the winter. On the other hand, most of the several hundred Canada Geese typically counted during the Christmas Bird Count in the Brattleboro area are probably non-migratory birds. They have never learned to migrate, and when the water is open and the ground is free of snow, they can find ample food.
Canada Geese, like the European Starling, are not indicators of good birding. Rather they are reminders of humanly induced environmental havoc.
Post of "Tailfeathers," Brattleboro Reformer, Friday, June 26, 2009
<urn:uuid:dee55e9c-3427-443d-a2a3-d35d6c01e180>
CC-MAIN-2013-20
http://tailsofbirding.blogspot.com/2009/06/pond-starling.html
2013-05-22T08:19:01Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.965105
1,282
Many parents ask: "How can I help my child?"
*Ask him/her about what is happening in science! We have begun our pebbles, sand, and silt unit. We can't wait to find out what silt is! Please encourage your child to use the different descriptive words we are learning.
*Help them practice their math facts. Play math games to encourage automaticity with facts.
*Read together and set a good example of being a reader yourself.
*Allow your child time to solve his/her own problems. Start by modeling and "thinking out loud" what you would do. Then guide him/her by allowing them to talk to you about the different options. Not only will they become better problem solvers, but they will learn to communicate with you. (Trust me, that will come in handy when they are teenagers!)
*Allow him/her to become more independent. Let them pick out their own clothes, pack their own bookbag, and maybe even their snack. The more they learn to do for themselves, the more empowered they will feel. Giving choices can also be helpful and can begin the road to independence. (I know that solved a lot of problems with my own kids!)
*Teach them how to tie their laces. I know in this world of Velcro and slip-ons it seems to be a lost art, but it is an important skill.
<urn:uuid:54065664-2704-4588-a592-13972e7a2778>
CC-MAIN-2013-20
http://teacherweb.com/NJ/HeightsElementary-Oakland/MrsAliha/h0.aspx
2013-05-22T08:02:11Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.969193
306
Compiled by Arthur Paul Moser
Between the admission of Missouri into the Union and the creation of Stone County, this Stone County area had been included in the Counties of Wayne, Crawford (created in 1829), Greene (created in 1833) and Taney (created in 1837). Taney County then included most of its present limits, parts of the present County of Christian and all of the present limits of Stone County. Forsyth was its County Seat. Due to its size, a portion of its area was severed in 1851 to become Stone County. The 16th General Assembly of Missouri convened on December 30, 1850. By its Act of February 10, 1851, Stone County was created and was named "in honor of William Stone, late of Taney County, Missouri." (--The White River Valley Historical Society Quarterly, Fall, 1964, pp. 6 & 7. Charles Henson wrote this article, and cited the laws of Mo. 1851, 186-8, to support his statement.)
Stone County is said to have received its first pioneer as early as 1790. He located among the Delawares, who were the original inhabitants. In 1833 Tennessee and Kentucky sent their sons into the wilderness to open up the country near the confluence of the James and White Rivers. (--A Reminiscent History of the Ozark Region, 1894, p. 31.)
The first white man in Stone County was Joseph Philabert*, a fur trapper. There is a cemetery named for him, near the Shepherd of the Hills Estate. (--Mrs. Porter Lucas, Crane, Mo.) *Other spellings of this name: Philibert; Filabert. (--Miss Roblee, Springfield & Greene County Public Libraries.)
The old Ozark to Galena horse-back mail route went past these places: Dutch Store, in Christian County not far from the present site of Highlandville; Gilmore Mill, Robertson's Mill, Sinclair Post-Office, Wheeler Branch, later Oto, then Galena. The Sinclair post-office was located across James River from the arch on the John Inmon farm. Present-day Hootentown is not far away from the site. (--Stone County Newspapers Centennial Edition, May 1951.)
Gilmore Mill and Sinclair Post-Office could not be located in the earliest lists of post-offices available in the Reference Department of the Public Library. The earliest list available is in Missouri Manuals. Also, the Gazetteer of Missouri of 1874 does not list these post-offices.
The following post-offices are listed, but could not be located on existing maps: This post-office was nine miles northeast of Galena. This post-office was seven miles southeast of Galena. (--Gazetteer of Missouri, Campbell, 1874, p. 611.)
The following post-offices are listed on various maps. No other information could be found: This post-office was in the southeast corner of the county, northeast of Blue Eye. This post-office was in the southwest corner of the county, northeast of Carr. This small town was southeast of Reed's Spring, on the Missouri-Pacific Railroad. (--Rand, McNally Commercial Map of Missouri, 1924.) This post-office was southwest of Jamesville. (--Rand, McNally Commercial Map of Missouri, 1894.)
The above-listed post-offices are listed on page 60 of Missouri Manual, 1913-1914. (--"Samanthy said so." Regarding all of the above.)
Marvel Cave (Once known as Marble Cave)
This spot is now known as Silver Dollar City. (--Rand, McNally Commercial Map of Missouri, 1924.)
The White River Railway Company
The White River Railway was incorporated under the general railroad laws of Arkansas, February 8, 1901. The company was organized and its capital stock owned by the St.
Louis Iron Mountain & Southern Railway, which company purchased the property franchises by deed dated Jan. 31, 1903. The money involved in the purchase of this line and its construction was provided for by the sale of bonds. The Springfield subdivision from Springfield to Crane was originally constructed as the Springfield Southwestern Railway Company. This company was organized in 1903 by George J. Gould and associates as a wholly-owned subsidiary of the St. Louis Iron Mountain & Southern Railway. The line was completed from Crane, Missouri to Gulf Street in Springfield on April 20, 1907, and was extended 1.5 miles to its present end of track in Springfield in July 1909. The Springfield Southwestern was later acquired by the Iron Mountain and with the Iron Mountain merged in 1917 into the present Missouri Pacific Railroad Company. (--The above information was furnished by the Public Relations Department, Missouri Pacific Railroad, 210 N. 13th St., St. Louis, Mo., 63103.
<urn:uuid:0dd66d1f-e9c9-4ef4-93c2-cfee299f4a92>
CC-MAIN-2013-20
http://thelibrary.org/lochist/moser/stoneco.html
2013-05-22T08:13:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.965289
1,014
To understand the basic mechanism let me give you an analogy. Imagine a sink with a water tap. The tap is on and water flows at a constant rate into the sink; while the drainage hole of the sink is unblocked the sink will not overflow. Consider the eye: fluid (aqueous) is produced at a constant rate by the ciliary processes (see figure 1), flows around the lens into the anterior chamber, and drains into the angle between the cornea (the window of the eye) and the iris (the colored part of the eye). This drainage angle is like the drainage hole of the sink; if it becomes blocked, fluid builds up and, because the eye is a closed chamber, the pressure within the eye rises instead of the fluid overflowing. There are three main consequences of raised intraocular pressure. In congenital glaucoma there is abnormal development of the drainage angle, such that it is blocked by abnormal tissue referred to as Barkan's membrane, after the man who first suggested its presence in such eyes. In our analogy, imagine that someone has placed a film of cellophane over the drainage hole of the sink; now the sink will overflow. Similarly, the pressure in the eye will go up.
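The sink analogy maps onto a standard steady-state relation for eye pressure (often called the Goldmann equation): pressure equals aqueous inflow divided by outflow facility, plus the venous back-pressure. The short sketch below is an illustration added to this explanation rather than part of it, and the parameter values are rough textbook-style figures, not measurements from any particular eye.

```python
# Illustrative only: steady-state eye pressure from the Goldmann relation
# IOP = (aqueous inflow / outflow facility) + episcleral venous pressure.
# The parameter values below are rough textbook-style figures, chosen just
# to show how a blocked drainage angle (low outflow facility) raises pressure.

def intraocular_pressure(inflow_ul_min, outflow_facility, venous_pressure_mmhg):
    """Steady-state intraocular pressure in mmHg."""
    return inflow_ul_min / outflow_facility + venous_pressure_mmhg

normal = intraocular_pressure(2.5, 0.30, 9.0)   # open drainage angle
blocked = intraocular_pressure(2.5, 0.05, 9.0)  # angle largely obstructed

print(f"normal angle:  ~{normal:.0f} mmHg")   # ~17 mmHg
print(f"blocked angle: ~{blocked:.0f} mmHg")  # ~59 mmHg
```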
<urn:uuid:38e387e9-b109-4f05-9239-09f49a9749aa>
CC-MAIN-2013-20
http://throughgabeseyes.blogspot.com/2012/09/img-0326.html
2013-05-22T07:54:17Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.958017
243
HISTORICAL BACKGROUND
During the reign of the seventh-century king, Songtsen Gampo, Tibet was one of the mightiest empires in Central Asia. Tibet then had an army of 2,860,000 men. Each regiment of the army had its own banner. The banner of the Ya-ru To regiment had a pair of snow lions facing each other, that of Ya-ru Ma a snow lion standing upright, springing upwards towards the sky, and that of U-ru To a white flame against a red background. This tradition continued until the Thirteenth Dalai Lama designed a new banner and issued a proclamation for its adoption by all the military establishments. This banner became the present Tibetan national flag. Explanation of the Symbolism of the Tibetan National Flag - In the centre stands a magnificent snow-clad mountain, which represents the great nation of Tibet, widely known as the Land Surrounded by Snow Mountains. - The six red bands spread across the dark blue sky represent the original ancestors of the Tibetan people: the six tribes called Se, Mu, Dong, Tong, Dru, and Ra, which in turn gave rise to the (twelve) descendants. The combination of six red bands (for the tribes) and six dark blue bands (for the sky) represents the unceasing enactment of the virtuous deeds of protection of the spiritual teachings and secular life by the black and red guardian protector deities with which Tibet has been connected since times immemorial. - At the top of the snowy mountain, the sun with its rays shining brilliantly in all directions represents the equal enjoyment of freedom, spiritual and material happiness and prosperity by all beings in the land of Tibet. - On the slopes of the mountain a pair of snow lions stand proudly, blazing with the manes of fearlessness, which represent the country’s victorious accomplishment of a unified spiritual and secular life. - The beautiful and radiant three-coloured jewel held aloft represents the ever-present reverence respectfully held by the Tibetan people towards the three supreme gems, the objects of refuge: Buddha, Dharma and Sangha. - The two-coloured swirling jewel held between the two lions represents the people’s guarding and cherishing of the self-discipline of correct ethical behaviour, principally represented by the practices of the ten exalted virtues and the 16 humane modes of conduct. Lastly, the adornment with a yellow border symbolises that the teachings of the Buddha, which are like pure, refined gold and unbounded in space and time, are flourishing and spreading.
<urn:uuid:aebd7e9a-541b-44b5-83e6-533c0e9dcb94>
CC-MAIN-2013-20
http://tibet.net/about-tibet/the-tibetan-national-flag/
2013-05-22T08:19:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.938855
524
Brief explanation of the regional characteristics of the Sanriku Coast
The 2011 East Japan Earthquake Bulletin of the Tohoku Geographical Association
1. Introduction
This short note provides a brief description of the regional characteristics of the Sanriku Coast, northeast Honshu Island (Fig. 1). Sanriku Coast is a local name covering the Pacific side of northeast Honshu Island. Almost the entire Sanriku Coast was severely devastated by the huge earthquake and tsunami of 11 March 2011. Many fishing ports and coastal towns or villages were entirely destroyed by tsunami waves of 10 m or higher.
2. Physical background
The most remarkable characteristic of the landforms is a saw-toothed coastline with narrow strips of flat land, known as a "ria coast". Such a coastline shape is a factor that increases the height of a tsunami (Fig. 2). In fact, the Sanriku Coast has suffered severe tsunami disasters three times in the past: the Meiji Sanriku Tsunami of 1896, the Showa Sanriku Tsunami of 1933, and the Chile Earthquake Tsunami of 1960. Through such experiences, people on the Sanriku Coast had taken many preventive measures against the periodically recurring tsunami, such as moving settlements to higher ground, constructing huge seawalls, making hazard maps for escaping from natural disasters including tsunami, and holding periodic evacuation drills. It is truly regrettable that such measures could not work effectively this time, because the scale of the event was beyond prediction. As for the climate, the Sanriku Coast, like most of the northeastern part of Japan, sometimes suffers from cold summers caused mainly by the cold current and the cold northeast wind blowing from the cold air mass staying over the Sea of Okhotsk. Owing to this, the Sanriku Coast has few advantages for agriculture. On the other hand, the Sanriku Coast has one of the richest seas for fishery resources, which has made the area one of the major fishery regions not only in Japan but in the world.
3. Population and cities
As shown in Fig. 3, the population of the Sanriku Coast is relatively dispersed compared with the inland area along the arterial traffic route between Sendai and Morioka. The population change ratios (1990-2000) are negative in most of the municipalities on the Sanriku Coast. Major cities on the Sanriku Coast, such as Hachinohe (八戸), Miyako (宮古), Kamaishi (釜石), Ofunato (大船渡), Kesennuma (気仙沼), Onagawa (女川), and Ishinomaki (石巻), developed as base ports for fishing, mainly during Japan's rapid economic growth after the 1960s. Hachinohe is known as the biggest base port for far-sea squid fishing in the world. Kesennuma is famous as the world's biggest base port for tuna fishing and shark's-fin production. Many merchants and manufacturers of marine products, and many seamen, gathered from adjacent villages to these major fishing ports. In the 1970s, when the northern Pacific fishery was prosperous, and in the 1980s, when resources of migratory fishes were rich, these fishing ports thrived. After the 1990s, the catch of fish and the number of fishery workers decreased rapidly, because of the movement toward international regulation of fishing and the decline of the marine resources themselves. As a result, the population itself decreased, as shown in Fig. 3. But even now, the Sanriku Coast is a major fishery and marine food production region in Japan (Fig. 4).
Copyright(C)2006- The Tohoku Geographical Association
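As a rough, added illustration of why a narrowing, shoaling ria bay raises an incoming tsunami (this sketch is not part of the original bulletin, and the bay dimensions below are hypothetical), Green's law from linear shallow-water theory says the wave height grows with the square root of the narrowing and the fourth root of the shoaling:

```python
# Green's law (linear shallow-water approximation): as a tsunami enters a bay
# that narrows from width b1 to b2 and shoals from depth h1 to h2, its height
# grows roughly as H2 = H1 * (b1/b2)**0.5 * (h1/h2)**0.25.
# The numbers below are hypothetical, purely to illustrate the scaling.

def green_amplification(h_offshore, b1, b2, h1, h2):
    return h_offshore * (b1 / b2) ** 0.5 * (h1 / h2) ** 0.25

wave = green_amplification(
    h_offshore=2.0,        # 2 m wave at the mouth of the bay
    b1=4000.0, b2=500.0,   # bay narrows from 4 km to 500 m
    h1=100.0, h2=10.0,     # depth shoals from 100 m to 10 m
)
print(f"estimated wave height at the bay head: {wave:.1f} m")  # ~10 m
```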
<urn:uuid:74517fc1-ded9-4d40-8ee9-b49bd245c1e7>
CC-MAIN-2013-20
http://tohokugeo.jp/disaster/articles/e-contents11.html
2013-05-22T07:54:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.93563
821
In 1914, when England declared war, the people in its colonies followed the call to arms, despite being oceans away. England was the mother country and they'd defend it to the death. And die they did, in the thousands. The young men and women of Australia and New Zealand proudly served their countries in numerous wars; collectively they were known as the ANZACs. The ANZAC Acronym ANZAC is the acronym formed from the initial letters of the Australian and New Zealand Army Corps, the formation into which Australian and New Zealand soldiers were grouped in Egypt prior to the landing at Gallipoli in April 1915. First written as A. & N. Z. Army Corps, it soon became A. N. Z. A. C., and the new word was so obvious that the full stops were omitted. The word was initially used to refer to the cove where the Australians and New Zealanders landed and, soon after, to the men themselves. An ANZAC was a man who was at the Landing and who fought at Gallipoli, but later it came to mean any Australian or New Zealand soldier of the First World War. An ANZAC who served at Gallipoli was given an A badge which was attached to his colour patch. In later years the term ANZAC was applied to any man or woman serving in any of the wars. The Anzacs lost 8,000 men at Gallipoli and a further 18,000 were wounded. The Anzacs went on to serve with distinction in Palestine and on the Western Front in France. The Anzacs were in many of the major battles and were commended for their bravery and courage. Many times the Anzacs were the first soldiers sent into a battle or an occupied area by their British superior officers. They were thought to be expendable, but they soon showed the world how the men from these harsh pioneering countries had the endurance and spirit to overcome such a fearsome enemy. Australia had a population of five million--330,000 served in the war, and 59,000 were killed. New Zealand, with a population of one million, lost 18,000 men out of 110,000 and had 55,000 wounded. These New Zealand figures (62%) represent the highest percentage of all units from the Anglo-Saxon world. The picture above right is the casualty clearing station, Menin Road, Belgium. Every year Australians and New Zealanders celebrate the mateship, the bond of our two countries that were united against a common enemy. April 25th is ANZAC Day (the day the men landed at Gallipoli). It's a day when a whole nation comes together to remember those who fought for our freedom and those who made the greatest sacrifice of their lives. Lest We Forget For more information on the ANZACs and their contribution to the fighting forces of the world wars, visit the following websites. Australian War Memorial
<urn:uuid:71fa352d-6e73-41a9-992b-c36f9fa1b776>
CC-MAIN-2013-20
http://unusualhistoricals.blogspot.com/2008/08/weapons-and-armies-anzacs.html
2013-05-22T08:18:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.984158
605
The National Vaccine Information Center (NVIC) is a national charitable, non-profit educational organization founded in 1982. NVIC launched the vaccine safety and informed consent movement in America in the early 1980′s and is the oldest and largest consumer led organization advocating for the institution of vaccine safety and informed consent protections in the public health system. The National Vaccine Information Center (NVIC) is dedicated to the prevention of vaccine injuries and deaths through public education and to defending the informed consent ethic in medicine. As an independent clearinghouse for information on diseases and vaccines, NVIC does not advocate for or against the use of vaccines. We support the availability of all preventive health care options, including vaccines, and the right of consumers to make educated, voluntary health care choices. Uncensored information about vaccines and how they affect our children. Authorities often claim that ”anti-vaccine” websites do not provide valid documentation. We provide hundreds of peer-reviewed studies from scientific and medical journals. Many of these studies link vaccines to the onset of new diseases. Dr. Tenpenny’s vaccine information is essential for anyone who is intent on making a fully informed decision about vaccination. The International Medical Council on Vaccination is an association of medical doctors, registered nurses and other qualified medical professionals whose purpose is to counter the messages asserted by pharmaceutical companies, the government and medical agencies that vaccines are safe, effective and harmless. Our conclusions have been reached individually by each member of the Council, after thousands of hours of personal research, study and observation. This site is dedicated to the promotion of safer immunization practices through the application of scientific principles to vaccine research. The website highlights research into the long term effects of vaccines and discusses potential harmful effects of vaccines. The content of this site is not intended to be anti-immunization but instead to promote the concept that the goal of immunization is to promote health not eradicate infections. It is hoped that through the collection and dissemination of information about the chronic effects of vaccines, safer immunization practices will become available for those who choose to be immunized. You are about to find out what I never wanted to know about vaccines. I would love to be proven wrong about what I have discovered. So please, if you have information to the contrary, share it with me. ”At the present time there are growing public and professional concerns about the safety of currently mandated childhood vaccine programs, as reflected in by a series of annual Congressional hearings in Washington DC that have taken place since 1999, sponsored by the U.S. House Government Reform Committee under the chairmanship of Congressman Dan Burton. At an annual conference of the American College for the Advancement of Medicine during April 2001, with several hundred physicians in attendance, when one of the speakers asked how many in attendance had concerns about the safety of current childhood vaccines, a large majority raised their hands. The Autism Research Institute of San Diego is now widely known as an active support group for families with autistic children and is one of the more active organizations in this field. 
Its founding director, Bernard Rimland, Ph.D., has provided the statistics that, in their experience, from 50 to 60% of parents with autistic children believe that their children were damaged by vaccines. In our own office we have seen many autistic children in recent years, and our own experience has been very similar, many parents reporting that deterioration of their children took place following vaccines.” Dr. Buttram Viera Scheibner.org is a website repository of relevant information which will help parents, health practitioners, lawyers, politicians and other interested parties to obtain a more balanced viewpoint on pertinent subjects such as vaccine safety, vaccine efficacy, the ethics of vaccination, and the public policy debacles involving vaccination. ”I highly recommend this vaccine website for its educational value.” – Dr Alan Cantwell Our mission is to prevent vaccine injury and death and to promote and protect the right of every person to make informed independent vaccination decisions for themselves and their families. Vaccination Liberation is part of a national grassroots network dedicated to providing information on vaccinations not often made available to the public. We want to expand our awareness of alternatives in healthcare and reveal the myth that vaccines are necessary, safe and effective. Dr. Mercola has made significant milestones in his mission to bring people practical solutions to their health problems. Does the FDA have your best interest at heart? Watch the movie
<urn:uuid:11a5c006-2361-4c74-bd0c-abd2a5b8852a>
CC-MAIN-2013-20
http://vaccin.me/2012/03/18/webbsajter-med-ocensurerad-information-om-vaccin/
2013-05-22T08:12:28Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.959333
901
SeaWiFS: Phytoplankton Blooms along the South African Coast
Provided by the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE
This SeaWiFS image captures the second week of autumn in southern Africa. Prevailing southerly winds push surface waters toward the equator along the west coasts of South Africa and Namibia. Because the Earth rotates, and because of frictional forces between the wind-driven surface water and the water just beneath it, the net effect of wind forcing from the south is a net transport of surface water away from the African coast. This is referred to as Ekman transport. As water in the upper layer of the ocean moves westward, colder, nutrient-rich water upwells along the coast, fueling the phytoplankton blooms that are visible in the image.
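The caption's mechanism can be made quantitative with the standard depth-integrated Ekman transport relation, M = τ/(ρf). The sketch below is an added illustration rather than part of the NASA caption, and the wind-stress value is a hypothetical round number.

```python
import math

# Depth-integrated Ekman transport per metre of coastline: M = tau / (rho * f),
# where tau is the wind stress (N/m^2), rho the sea-water density, and f the
# Coriolis parameter f = 2 * Omega * sin(latitude). In the Southern Hemisphere
# f < 0, so the transport is directed 90 degrees to the LEFT of the wind stress:
# an equatorward wind along the Namibian/South African coast pushes surface
# water offshore, and deeper, nutrient-rich water upwells to replace it.
# The wind-stress value below is hypothetical, chosen only for illustration.

OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s
RHO = 1025.0        # sea-water density, kg/m^3

def ekman_transport(tau_n_per_m2, latitude_deg):
    f = 2.0 * OMEGA * math.sin(math.radians(latitude_deg))
    return tau_n_per_m2 / (RHO * f)   # m^2/s per metre of coastline

m = ekman_transport(tau_n_per_m2=0.1, latitude_deg=-30.0)  # southerly wind, 30 S
print(f"offshore transport: {abs(m):.2f} m^2/s per metre of coastline")
```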
<urn:uuid:abe00dd0-d55d-4633-bc7b-2b27d943dcfa>
CC-MAIN-2013-20
http://visibleearth.nasa.gov/view.php?id=55634
2013-05-22T08:33:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.925439
180
There are two groups of igneous rocks: those that hardened beneath the earth's surface are called intrusive or plutonic; those that hardened on the surface are called extrusive or volcanic. All igneous rocks are classified by the types of minerals present and by the size of their crystals. While the minerals reflect the chemistry of the original magma, the size and shapes of the crystals indicate how long it took for the magma to cool. Plutonic rocks, such as granite, have crystals large enough to be seen with the naked eye, indicating a slowly cooling magma within the earth's crust. Volcanic rocks, however, usually have microscopic crystals because the magma cooled very quickly when it was exposed to cool air on the surface. Basalt is the most abundant type of volcanic rock. Most of Alberta's exposed bedrock is sedimentary, but there are some outcrops of igneous rocks. Volcanic rocks can be found in the mountains of Waterton Lakes National Park and north to the Crowsnest Pass. Intrusive rocks, such as granites, form extensive parts of the Canadian Shield in northeastern Alberta. They can be seen particularly well at Fort Chipewyan and the Slave River Rapids. There are also small outcrops of such rocks south of the Milk River. Even though outcrops of igneous rocks are uncommon in Alberta, the advancing continental glaciers during the last Ice Age plucked blocks of both igneous and metamorphic rocks from the Shield and scattered them across most of the province. Thus, such rocks are abundant in gravel pits, gravel bars, and rock piles in farmers' fields.
<urn:uuid:bd56f045-d3e5-40d0-91fb-9e25b9dfca60>
CC-MAIN-2013-20
http://wayback.archive-it.org/2217/20101208163514/http:/www.abheritage.ca/abnature/geological/igneous.htm
2013-05-22T08:27:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.952139
346
Chaos theory is the study of nonlinear dynamics, in which seemingly random events are actually predictable from simple deterministic equations. In a scientific context, the word chaos has a slightly different meaning than it does in its general usage as a state of confusion, lacking any order. Chaos, with reference to chaos theory, refers to an apparent lack of order in a system that nevertheless obeys particular laws or rules; this understanding of chaos is synonymous with dynamical instability, a condition discovered by the physicist Henri Poincare in the early 20th century that refers to an inherent lack of predictability in some physical systems. The two main components of chaos theory are the ideas that systems - no matter how complex they may be - rely upon an underlying order, and that very simple or small systems and events can cause very complex behaviors or events. This latter idea is known as sensitive dependence on initial conditions, a circumstance discovered by Edward Lorenz (who is generally credited as the first experimenter in the area of chaos) in the early 1960s. Lorenz, a meteorologist, was running computerized equations to theoretically model and predict weather conditions. Having run a particular sequence, he decided to replicate it. Lorenz reentered the number from his printout, taken half-way through the sequence, and left it to run. What he found upon his return was that, contrary to his expectations, these results were radically different from his first outcomes. Lorenz had, in fact, entered not precisely the same number, .506127, but the rounded figure of .506. According to all scientific expectations at that time, the resulting sequence should have differed only very slightly from the original trial, because measurement to three decimal places was considered to be fairly precise. Because the two figures were considered to be almost the same, the results should have likewise been similar. Since repeated experimentation proved otherwise, Lorenz concluded that the slightest difference in initial conditions - beyond human ability to measure - made prediction of past or future outcomes impossible, an idea that violated the basic conventions of physics. As the famed physicist Richard Feynman pointed out, "Physicists like to think that all you have to do is say, these are the conditions, now what happens next?" Newtonian laws of physics are completely deterministic: they assume that, at least theoretically, precise measurements are possible, and that more precise measurement of any condition will yield more precise predictions about past or future conditions. The assumption was that - in theory, at least - it was possible to make nearly perfect predictions about the behavior of any physical system if measurements could be made precise enough, and that the more accurate the initial measurements were, the more precise would be the resulting predictions. Poincare discovered that in some astronomical systems (generally consisting of three or more interacting bodies), even very tiny errors in initial measurements would yield enormous unpredictability, far out of proportion with what would be expected mathematically.
Two or more identical sets of initial condition measurements - which according to Newtonian physics would yield identical results - in fact most often led to vastly different outcomes. Poincare proved mathematically that, even if the initial measurements could be made a million times more precise, the uncertainty of prediction for outcomes did not shrink along with the inaccuracy of measurement, but remained huge. Unless initial measurements could be absolutely defined - an impossibility - predictability for complex - chaotic - systems performed scarcely better than if the predictions had been randomly selected from possible outcomes. The butterfly effect, first described by Lorenz at the December 1972 meeting of the American Association for the Advancement of Science in Washington, D.C., vividly illustrates the essential idea of chaos theory. In a 1963 paper for the New York Academy of Sciences, Lorenz had quoted an unnamed meteorologist's assertion that, if chaos theory were true, a single flap of a single seagull's wings would be enough to change the course of all future weather systems on the earth. By the time of the 1972 meeting, he had examined and refined that idea for his talk, "Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?" The example of such a small system as a butterfly being responsible for creating such a large and distant system as a tornado in Texas illustrates the impossibility of making predictions for complex systems; despite the fact that these are determined by underlying conditions, precisely what those conditions are can never be sufficiently articulated to allow long-range predictions. Although chaos is often thought to refer to randomness and lack of order, it is more accurate to think of it as an apparent randomness that results from complex systems and interactions among systems. According to James Gleick, author of Chaos: Making a New Science, chaos theory is "a revolution not of technology, like the laser revolution or the computer revolution, but a revolution of ideas. This revolution began with a set of ideas having to do with disorder in nature: from turbulence in fluids, to the erratic flows of epidemics, to the arrhythmic writhing of a human heart in the moments before death. It has continued with an even broader set of ideas that might be better classified under the rubric of complexity."
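Lorenz's rounding experiment is easy to reproduce in the spirit of his work, using his well-known 1963 three-variable convection system (σ = 10, ρ = 28, β = 8/3). The sketch below is an added illustration, not part of the original definition; a crude fixed-step Euler integrator is enough to show two nearly identical starting points drifting onto completely different trajectories.

```python
# Sensitive dependence on initial conditions, in the spirit of Lorenz's
# experiment: integrate his 1963 convection equations twice, once from
# x0 = 0.506127 and once from the rounded value 0.506, and watch the two
# trajectories diverge. Simple fixed-step Euler integration is enough to
# show the effect; it is not meant as a high-accuracy solver.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (0.506127, 1.0, 1.0)   # "precise" initial condition
b = (0.506,    1.0, 1.0)   # the same value rounded to three decimals

for step in range(40001):
    if step % 10000 == 0:
        t = step * 0.001
        print(f"t = {t:5.1f}  x_a = {a[0]:9.4f}  x_b = {b[0]:9.4f}  "
              f"|diff| = {abs(a[0] - b[0]):.4f}")
    a = lorenz_step(a)
    b = lorenz_step(b)
```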
<urn:uuid:4f775d28-8237-4802-94c2-f800e6755db5>
CC-MAIN-2013-20
http://whatis.techtarget.com/definition/chaos-theory
2013-05-22T08:33:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.963965
1,111
- Non-native privet is a semi-evergreen shrub or small tree that grows to 20 ft. (6.1 m) in height. Trunks usually occur as multiple stems with many long, leafy branches.
- Leaves are opposite, lanceolate, 1-2.4 in. (2.5-6 cm) long and 0.2-0.6 in. (0.5-1.5 cm) wide.
- Flowering occurs from April to June, when panicles of white to cream flowers develop in terminal and upper axillary clusters. Pollen can cause an allergic reaction in some people.
- The abundant fruits are spherical, 0.3-0.5 in. (1-1.3 cm) long. Fruit begins green and ripens to a dark purple to black color and persists into winter. Birds and wildlife eat the fruit and disperse the seeds. Seeds remain viable in the soil for about one year. It also colonizes by root sprouts.
- Ecological Threat - Ligustrums can tolerate a wide range of conditions. They form dense thickets invading fields, fencerows, roadsides, forest understories, and riparian sites. They can shade out and exclude native understory species, perhaps even reduce tree recruitment. Native to Europe and Asia, they are commonly used as ornamental shrubs and for hedgerows.
<urn:uuid:4db72796-9642-4f98-b328-2d00ba2409b5>
CC-MAIN-2013-20
http://wiki.bugwood.org/Ligustrum_vulgare
2013-05-22T08:00:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937693
287
Does Vitamin B12 Work? Many people may wonder if vitamin B12 works for conditions other than a deficiency. The vitamin (when used in combination with folic acid or vitamin B6) may be useful for treating hyperhomocysteinemia; it may also be beneficial for relieving fatigue. However, not much scientific evidence is available to support the effectiveness of vitamin B12 for these claimed uses. Vitamin B12 is claimed to work for a wide variety of conditions, often with little scientific evidence to back up such claims. This article will address the effectiveness of vitamin B12 for several uses. As you might guess, taking vitamin B12 is effective for treating a deficiency. It is also effective for preventing a deficiency in people at high risk for such problems. Although it was once thought that injections were the only way to treat vitamin B12 deficiencies due to low or absent intrinsic factor, it is now known that oral forms can be just as effective as injectable forms, although much higher doses are required. Small amounts of vitamin B12 can be absorbed after oral consumption, even without any intrinsic factor. Many healthcare providers choose to initially treat with injections to build up the stores of vitamin B12 in the body, then follow with oral supplementation. Vitamin B12 also works for the various problems related to a deficiency, such as pernicious anemia.
<urn:uuid:3bf58288-65c0-4c1b-a508-83eaf4233a07>
CC-MAIN-2013-20
http://women.emedtv.com/vitamin-b12/does-vitamin-b12-work.html
2013-05-22T08:13:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.95469
276
The symbol of the Four Days Marches
Nijmegen, the oldest city in the Netherlands, renames the St. Annastraat to Via Gladiola once a year and on that day welcomes the walkers on the last day of the Four Days Marches as true heroes. Traditionally, the spectators hand out gladioli to the walkers.
Why the gladiolus on Friday? The Dutch have a saying, roughly translated as 'death or the gladioli', meaning all or nothing. It is thought that this phrase was being chanted in the arena in Roman times by the frantic spectators on the stands who were watching the gladiators fight each other to the death in a thrilling sword fight. After a heroic fight the victor was buried in gladioli by the cheering crowds.
So, why the gladiolus? The name is derived from the Latin word 'gladius', meaning sword, after the sword-like shape of the flower. The gladiolus has become a sign of strength and victory; a flower earned after a great achievement. Several millennia later the expression has been annexed by the Four Days Marches. And when they are walking on the St. Annastraat - or rather the Via Gladiola - being cheered on by the spectators, the walkers of the Four Days Marches are in fact as heroic as the gladiators once were.
<urn:uuid:e341be08-8901-459b-a3ca-bf4779245210>
CC-MAIN-2013-20
http://www.4daagse.nl/en/event/gladiolus.html
2013-05-22T07:54:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.949539
288
Bishops of Speyer since the secularization and division of the prince-bishopric in 1802 (for the earlier Prince-Bishops, see the Bishopric of Speyer):
- 1802 to 5 February 1818: sede vacante (secularization and division of the bishopric of Speyer)
- 5 February 1818 to 30 June 1826: Matthäus Georg von Chandelle - priest of Mainz; ordained 9 December 1821; died in office
- 22 July 1826 to 25 March 1835: Johann Martin Manl - priest of Mainz; confirmed 9 April 1827; ordained 25 April 1827; appointed Bishop of Eichstätt
- 23 March 1835 to 20 September 1836: Johann Peter von Richarz - priest of Würzburg; confirmed 24 July 1835; ordained 1 November 1835; appointed Bishop of Augsburg
- 20 September 1836 to 23 May 1842: Johannes von Geissel - priest of Speyer; confirmed 19 May 1837; ordained 13 August 1837; appointed Bishop of Cologne
- 5 March 1842 to 13 December 1869: Nicolaus von Weis - priest of Speyer; confirmed 23 May 1842; ordained 10 July 1842; died in office
- 6 May 1870 to 4 April 1871: a priest of Speyer; confirmed 27 June 1870; ordained 18 September 1870; died in office
- 23 May 1872 to 31 May 1876: Bonifatius von Haneberg, OSB - Benedictine priest; confirmed 29 July 1872; ordained 25 August 1872; died in office
- 9 June 1878 to 18 March 1905: Joseph Georg von Ehrler - priest of Würzburg; confirmed 9 June 1878; ordained 15 July 1878; died in office
- 21 March 1905 to 9 September 1910: Konrad von Busch - priest of Speyer; confirmed 30 May 1905; ordained 16 July 1905; died in office
- 4 November 1910 to 26 May 1917: Michael von Faulhaber - priest of Speyer; confirmed 7 January 1911; ordained 19 February 1911; appointed Archbishop of München und Freising
- 28 May 1917 to 20 May 1943: a priest of Bamberg; confirmed 31 July 1917; ordained 23 September 1917; died in office
- 20 May 1943 to 9 August 1952: Coadjutor Bishop of Speyer; installed 4 June 1943; appointed Archbishop of München und Freising
- 22 December 1952 to 10 February 1968: Isidor Markus Emanuel - priest of Speyer; ordained 1 February 1953; resigned
- 28 May 1968 to 28 October 1982: a priest of Speyer; ordained 29 June 1968; appointed Archbishop of München und Freising
- 25 August 1983 to 10 February 2007: Anton Schlembach - priest of Würzburg; ordained 16 October 1983; the 95th Bishop of Speyer, now bishop emeritus
- 19 December 2007 to present: Karl-Heinz Wiesemann - Auxiliary Bishop of Paderborn; ordained 2 March 2008; the 96th Bishop of Speyer
<urn:uuid:19ffa331-037a-473f-8311-8d02a615e73f>
CC-MAIN-2013-20
http://www.absoluteastronomy.com/topics/Bishop_of_Speyer
2013-05-22T08:12:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.951982
1,199
Government of The Republic of The Gambia.
The Gambia is a multi-party democratic republic within the Commonwealth, independent since 1965, with an Executive Presidency established in 1970. The 1970 constitution was revoked following the July 1994 military take-over. The Constitution of the Second Republic of The Gambia, which was approved in a national referendum on 8 August 1996, came into effect on 16 January 1997. Under its terms, the Head of State is the President of the Republic, who is directly elected by universal adult suffrage and holds executive authority. Legislative authority is vested in the National Assembly, which serves a five-year term and comprises 53 members - 48 directly elected and 5 appointed. The President appoints government members, who are responsible both to the Head of State and to the National Assembly. The current president is Yahya Jammeh. The president's official residence is State House.
After 200 years of British colonial rule, The Gambia became independent on 18th February 1965 and, five years later in April 1970, adopted a republican constitution. The Gambia, a multi-party republic within the Commonwealth, is administered by an Executive President. Under the current constitution, general elections by secret ballot are held every five years to elect the candidates who constitute the country's parliament.
For administrative purposes the country is divided into the Capital and Seat of Government, together with the adjoining Kombo St. Mary, and the provinces. The provinces are in turn divided into five Divisions (now known as regions), each headed by a Commissioner who is the administrative head. These Divisions are further sub-divided into 35 districts, locally administered by Seyfos (chiefs). Each district covers a number of villages and settlements, with the Alkalo as the village head.
The Gambian judicial system is similar to the system found in most countries with a common law jurisdiction. There is only one system of courts, which forms a hierarchy. The subordinate courts consist of (a) Khadis (Muslim) Courts, (b) District Tribunals, and (c) Magistrates Courts. These courts have limited jurisdiction to hear both civil and criminal matters brought before them. At the higher level, there are the Supreme Court and The Gambia Court of Appeal.
Main Political Parties:
APRC - Alliance for Patriotic Reorientation and Construction
GPP - Gambian People's Party
PPP - Progressive People's Party
UDP - United Democratic Party
The parliament of The Gambia is called the National Assembly and is a unicameral parliament consisting of 53 members, 48 of whom are directly elected for a term of 5 years.
The Gambia's earlier Constitution came into force on 24th April 1970, when the country became a republic. Its major provisions are summarised below (see also the current constitution).
Executive power is vested in the President, who is Head of State and Commander-in-Chief of the armed forces. Following a constitutional amendment in March 1982, the President is elected by direct universal suffrage and serves a five-year term. The President appoints the vice-president, who is leader of government business in the House of Representatives, and other Cabinet Ministers from members of the House of Representatives.
Legislative power is vested in the unicameral National Assembly, with 53 members: 48 elected by universal adult suffrage and 5 appointed members.
Suffrage: 18 years of age; universal.
Since the military take-over on July 22nd 1994, the APRC Government has made a few amendments to The Gambia's constitution, but the Judiciary has remained the same. The new government has also established a Constitutional Review Commission (CRC), which is charged with the responsibility of reviewing the present constitution in order to make it more responsive to the needs and aspirations of the people of The Gambia. Both the British and United States governments, as well as the United Nations Development Programme (UNDP), provided technical and financial assistance to The Gambia for the APRC Transition Programme for economic and social development and the steady return to democratic civilian rule in July 1996.
<urn:uuid:ce6d54ba-37ce-4192-a3b7-69dfb0cbfaa3>
CC-MAIN-2013-20
http://www.accessgambia.com/information/government.html
2013-05-22T08:27:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.937472
888
My Students: We use real-world, hands-on activities to develop skills pertaining to electrical, fluid, and mechanical systems. All intro-level students must complete a series of basic house wiring circuits and a complete small gasoline motor tear-down and rebuild.
The problem: We are aiming to outfit our classroom with a classroom set of 10 Briggs and Stratton motors, 20 textbooks, and 20 workbooks. This will end the problems that arose with student-funded engine rebuilds, as well as issues with multiple engine manufacturers and a lack of available parts for outdated motors. The classroom set of motors will allow us to focus on engine theory and complete tasks individually yet simultaneously as a class. These activities teach students how to read and interpret technical manuals, use micrometers for measuring, and safely use common hand tools.
Textbook: $85.95 each ($171.90 for 2)
Workbook: $26.95 each ($53.90 for 2)
For every 2 textbooks and 2 workbooks purchased, a Vo-ed engine will be donated.
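Assuming the listed prices and the two-books-per-engine offer described above (this tally is an added illustration, not part of the original request), the full classroom set works out as follows:

```python
# Quick tally of the classroom-set goal, using the prices listed above.
# Assumes 20 textbooks and 20 workbooks, and one donated engine for every
# 2 textbooks + 2 workbooks purchased.

textbook_price, workbook_price = 85.95, 26.95
books_needed = 20                          # 20 textbooks and 20 workbooks
engines_donated = books_needed // 2        # 10 donated Vo-ed engines

total = books_needed * (textbook_price + workbook_price)
print(f"book cost: ${total:,.2f}  ->  {engines_donated} donated engines")
# book cost: $2,258.00  ->  10 donated engines
```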
<urn:uuid:e25a4f44-bc66-4441-a64d-586c3da27846>
CC-MAIN-2013-20
http://www.adoptaclassroom.org/classroomdonation/results_school.aspx?ps=0&schoolid=145153
2013-05-22T08:02:00Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.933427
199
AfriGeneas Africa Research Forum
Ghana - The Door of Return
Ghana's Uneasy Embrace of Slavery's Diaspora
CAPE COAST, Ghana - For centuries, Africans walked through the infamous "door of no return" at Cape Coast castle directly into slave ships, never to set foot in their homelands again. These days, the portal of this massive fort so central to one of history's greatest crimes has a new name, hung on a sign leading back in from the roaring Atlantic Ocean: "The door of return."
Ghana, through whose ports millions of Africans passed on their way to plantations in the United States, Latin America and the Caribbean, wants its descendants to come back. Taking Israel as its model, Ghana hopes to persuade the descendants of enslaved Africans to think of Africa as their homeland - to visit, invest, send their children to be educated and even retire here.
"We want Africans everywhere, no matter where they live or how they got there, to see Ghana as their gateway home," J. Otanka Obetsebi-Lamptey, the tourism minister, said on a recent day. "We hope we can help bring the African family back together again."
<urn:uuid:5005beb2-f3dd-421f-a67c-d476ebd5b518>
CC-MAIN-2013-20
http://www.afrigeneas.com/forum-africa/index.cgi/md/read/id/262/sbj/ghana-the-door-of-return/
2013-05-22T08:01:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.944336
259
Question 86740 (from the textbook Mathematics for elementary teachers): I need help with this problem. The problem states: Two digits of this number were erased: 273*49*5. However, we know that 9 and 11 divide the number. What is it?
What I have done so far: 2+7+3+*+4+9+*+5 = 30+*+*. The only number divisible by both 9 and 11 is 99. However, to make the sum of this number equal 99 the two digits would have to be 35 and 34. I am not sure if these numbers qualify? I tried other single numbers and haven't found a sum divisible by both 9 and 11. Your help would be most appreciated. Thanks. R. Franke
A number is divisible by 9 if the sum of its digits is divisible by 9, so 30+x+y=36 or 30+x+y=45, which gives x+y=6 or x+y=15.
A number is divisible by 11 if the difference between the sum of the odd-numbered digits and the sum of the even-numbered digits is divisible by 11, so (2+3+4+y)-(7+x+9+5) = y-x-12 must be a multiple of 11. For single digits the only possibility is y-x-12=-11, i.e., y-x=1. Since x and y differ by 1, their sum x+y must be ODD, so x+y=15 (not 6). Solving y-x=1 and x+y=15 gives x=7 and y=8, so the number is 27374985.
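As an added sanity check on the answer (not part of the original tutor reply), a brute-force search over the two erased digits confirms that 27374985 is the only possibility:

```python
# Brute-force check of the erased digits in 273_49_5: try every pair (x, y)
# and keep the ones where the full number is divisible by both 9 and 11.

solutions = []
for x in range(10):
    for y in range(10):
        n = int(f"273{x}49{y}5")
        if n % 9 == 0 and n % 11 == 0:
            solutions.append((x, y, n))

print(solutions)   # [(7, 8, 27374985)]
```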
<urn:uuid:86007d92-c632-4be0-a171-f8ca8d625721>
CC-MAIN-2013-20
http://www.algebra.com/algebra/homework/divisibility/Divisibility_and_Prime_Numbers.faq.question.86740.html
2013-05-22T08:13:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.933457
292
USS Fort Jackson
American Civil War Union Navy Ships
USS Fort Jackson (1863-1865). Built as the civilian steamship Union (1862).
USS Fort Jackson, an 1,850-ton (burden) wooden side-wheel cruiser, was built at New York City in 1862 as the civilian steamship Union. The U.S. Navy purchased her in July 1863 and, after conversion to a warship, placed her in commission as USS Fort Jackson in August of that year. A boiler casualty kept her out of combat service until late in 1863, when she joined the North Atlantic Blockading Squadron. During the next year, Fort Jackson worked to enforce the blockade of the Confederate Atlantic coast. While performing this duty, she assisted in destroying the blockade runner Bendigo (3 January 1864) and captured the steamers Thistle (4 June), Boston (8 July) and Wando (21 October 1864). In December 1864 and January 1865, Fort Jackson participated in the operations that finally captured Fort Fisher, North Carolina, thus ending blockade running into the port of Wilmington. She was transferred to the West Gulf Blockading Squadron in February 1865 and served off Texas until after the final surrender of Confederate positions there in June. USS Fort Jackson was decommissioned and sold in August 1865. She subsequently became the commercial steamer North America and was not broken up until 1879.
Photographed during the Civil War, circa 1863-1865
Sources: U.S. National Park Service; U.S. Library of Congress; US Naval Archives
<urn:uuid:ad336aa5-4e7f-4573-8974-8456dbfbf1dc>
CC-MAIN-2013-20
http://www.americancivilwar.com/tcwn/civil_war/Navy_Ships/USS_Fort_Jackson.html
2013-05-22T08:32:31Z
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00000-ip-10-60-113-184.ec2.internal.warc.gz
en
0.958718
628